From 023f3395eeafa3ecb4a4714c5f9d0a41ad1965bf Mon Sep 17 00:00:00 2001
From: rsoulliere
Date: Sun, 27 Mar 2011 20:05:28 -0400
Subject: [PATCH] Fix Docbook processing errors.

---
 1.6/admin/AdminMisc.xml                     |   11 +-
 1.6/appendices/schema1.6.1.6.xml            |    2 +-
 1.6/pdf/temp.fo                             | 7872 ++++++++++++++-------
 2.0/admin/admin-lsa_2.0.xml                 |   10 +-
 2.0/pdf/temp.fo                             | 7123 ++++++++++++++----------
 2.0/stafftasks/acquisitions_module-GPLS.xml |  690 +--
 2.0/stafftasks/kcls_acquisitions.xml        |  441 +-
 2.0/stafftasks/serials-gpls.xml             |  914 ++--
 8 files changed, 9147 insertions(+), 7916 deletions(-)

diff --git a/1.6/admin/AdminMisc.xml b/1.6/admin/AdminMisc.xml
index cda4e76251..740237d21c 100644
--- a/1.6/admin/AdminMisc.xml
+++ b/1.6/admin/AdminMisc.xml
@@ -3,7 +3,7 @@
     xmlns:xl="http://www.w3.org/1999/xlink" version="5.0" xml:id="adminmisc">
-    Server Operations and Maintenance
+    Server Operations receipt template editor
     This chapter deals with basic server operations such as starting and stopping Evergreen as well wall
@@ -203,7 +203,8 @@ z39_50_start() {
     fi
     sudo -u opensrf bash -c "touch ${SRU_LOG}"
     sudo bash -c \
-        "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin z39_50.sh >> ${SRU_LOG} 2>&1" &
+        "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin \
+        z39_50.sh >> ${SRU_LOG} 2>&1" &
     sleep 1
     pids=`ps -eo pid,args | grep $sru_name | grep -v grep | cut -c1-6`
     if [ x"$pids" = x ] ; then
@@ -290,7 +291,8 @@ case "$1" in
     start_router|stop_router|restart_router|start_perl|stop_perl|restart_perl| \
     start_c|stop_c|restart_c|start_osrf|stop_osrf|restart_osrf|stop_all|start_all|restart_all)
         sudo -u opensrf /bin/bash -c \
-            "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh -l -a $1"
+            "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 \
+            PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh -l -a $1"
         ;;
     *)
         echo " * Usage: /etc/init.d/evergreen {start|stop|restart|autogen"
@@ -298,7 +300,8 @@ case "$1" in
         echo " |z39_50_start|z39_50_stop|z39_50_restart"
         echo " |start_reporter|stop_reporter|restart_reporter"
         echo " |start_router|stop_router|restart_router|start_perl|stop_perl|restart_perl"
-        echo " |start_c|stop_c|restart_c|start_osrf|stop_osrf|restart_osrf|stop_all|start_all|restart_all}"
+        echo " |start_c|stop_c|restart_c|start_osrf|stop_osrf|restart_osrf
+              |stop_all|start_all|restart_all}"
         exit 1
         ;;
 esac;

diff --git a/1.6/appendices/schema1.6.1.6.xml b/1.6/appendices/schema1.6.1.6.xml
index 5cddc896c0..a0e9a8f76b 100644
--- a/1.6/appendices/schema1.6.1.6.xml
+++ b/1.6/appendices/schema1.6.1.6.xml
@@ -15897,7 +15897,7 @@
-          copy_transparency_loan_duration_check
+          copy_ transparency_ loan_ duration_ check
           CHECK ((loan_duration = ANY (ARRAY[1, 2, 3])))

diff --git a/1.6/pdf/temp.fo b/1.6/pdf/temp.fo
index bbf4361de2..0857074447 100644
--- a/1.6/pdf/temp.fo
+++ b/1.6/pdf/temp.fo
@@ -1,28 +1,28 @@
-Evergreen 1.6 Documentation - Draft VersionDocumentation Interest GroupDocBook XSL Stylesheets with Apache FOPEvergreen 1.6 DocumentationTable of ContentsPart I. IntroductionChapter 1. About EvergreenChapter 2. Release NotesPart II. Public Access CatalogChapter 3. Basic SearchChapter 4. Advanced SearchChapter 5. Search TipsChapter 6. Search MethodologyChapter 7. Search URLChapter 8. Search ResultsChapter 9. My AccountChapter 10. Simple Self Check InterfaceUsing the Self Check InterfaceCustomizing the Self Check InterfacePart III. Core Staff TasksChapter 11. Using the Staff ClientLogging in to EvergreenNavigationCustomizing the Staff ClientChapter 12. CirculationPatron RecordsCirculating ItemsBills and PaymentsHoldsTransit ItemsOffline TransactionsChapter 13. CataloguingLocating RecordsAdding New Bibliographic RecordsWorking with the MARC EditorCataloging TemplatesBucketsMerging Bibliographic RecordsAdding holdings to title recordsCataloguing Electronic Resources -- Finding Them in OPAC searchesPrinting Spine and Pocket LabelsDeleting RecordsChapter 14.
Using the Booking ModuleCreating a Booking ReservationCancelling a ReservationCreating a Pull ListCapturing Items for ReservationsPicking Up ReservationsReturning ReservationsPart IV. AdministrationChapter 15. System Requirements and Hardware ConfigurationsServer Minimum RequirementsServer Hardware Configurations and ClusteringStaff Client RequirementsChapter 16. Server-side Installation of Evergreen SoftwareInstalling Server-Side SoftwareInstalling OpenSRF 1.4.x On Ubuntu or DebianInstalling Evergreen 1.6.1.x On Ubuntu or DebianStarting EvergreenTesting Your Evergreen InstallationPost-Installation ChoresRemove temporary Apache configuration changesConfigure a permanent SSL key(OPTIONAL) IP-Redirection(OPTIONAL) Set Up Support For ReportsInstalling In Virtualized Linux EnvironmentsInstalling Virtualization SoftwareInstalling "VirtualBox" Virtualization SoftwareInstalling "VMware" Virtualization SoftwareInstalling Linux / Evergreen on Virtualization SoftwareManually install Linux and EvergreenDownload and install a prebuilt software imageChapter 17. Installation of Evergreen Staff Client SoftwareInstalling the Staff ClientInstalling a Pre-Built Staff ClientInstalling on WindowsInstalling on Mac OSInstalling on LinuxBuilding the Staff ClientAdvanced Build OptionsInstalling and Activating a Manually Built Staff ClientPackaging the Staff ClientStaff Client Automatic UpdatesOther tipsRunning the Staff ClientAssigning Workstation NamesRunning the Staff Client Over An SSH TunnelSetting Up an SSH TunnelConfiguring the Staff Client to Use the SSH TunnelNavigating a Tabbed InterfaceChapter 18. Upgrading Evergreen to 1.6.1Backing Up DataUpgrading OpenSRF to 1.6Upgrade Evergreen from 1.4 to 1.6.1Upgrade Evergreen from 1.6.0 to 1.6.1Restart Evergreen and TestUpgrading PostgreSQL from 8.2 to 8.4Chapter 19. 
Server Operations and MaintenanceStarting, Stopping and RestartingAutomating Evergreen Startup and ShutdownBacking UpSecurityManaging Log FilesInstalling PostgreSQL from SourceConfiguring PostgreSQLChapter 20. Migrating DataMigrating Bibliographic RecordsMigrating Bibliographic Records Using the ESI Migration ToolsAdding Copies to Bibliographic RecordsMigrating Patron DataRestoring your Evergreen Database to an Empty StateExporting Bibliographic Records into MARC filesImporting Authority RecordsChapter 21. Troubleshooting System ErrorsChapter 22. Languages and LocalizationEnabling and Disabling LanguagesChapter 23. SRU and Z39.50 ServerTesting SRU with yaz-clientSetting up Z39.50 server supportChapter 24. SIP ServerInstalling the SIP ServerSIP CommunicationChapter 25. Server AdministrationOrganizational Unit Types and Organizational UnitsUser and Group PermissionsStaff AccountsCopy StatusBilling TypesCirculation ModifiersCataloging TemplatesAdjusting Search Relevancy RankingsNotificationsHold NotificationsOverdue and Predue NotificationsChapter 26. Local Administration MenuOverviewReceipt Template EditorGlobal Font and Sound SettingsPrinter Settings EditorClosed Dates EditorCopy Locations EditorLibrary Settings EditorNon-Catalogued Type EditorGroup Penalty ThresholdsStatistical Categories EditorField DocumentationSurveysCash ReportsChapter 27. Action TriggersEvent DefinitionsHooksReactorsValidatorsProcessing Action TriggersChapter 28. Booking Module AdministrationMake a Cataloged Item Bookable in AdvanceMake a Cataloged Item Bookable On the FlyCreate a Bookable Status for Non-Bibliographic ItemsSetting Booking PermissionsPart V. ReportsChapter 29. Starting and Stopping the Reporter DaemonChapter 30. FoldersCreating FoldersManaging FoldersChapter 31. Creating TemplatesChoosing Report FieldsApplying FiltersChapter 32. Generating Reports from TemplatesChapter 33. Viewing Report OutputChapter 34. Cloning Shared TemplatesChapter 35. Running Recurring ReportsChapter 36. 
Template TerminologyPart VI. Third Party System IntegrationPart VII. DevelopmentChapter 37. Evergreen File Structure and Configuration FilesEvergreen Directory StructureEvergreen Configuration FilesChapter 38. Customizing the Staff ClientChanging Colors and ImagesChanging Labels and MessagesChanging the Search SkinChapter 39. Customizing the OPACChange the Color Schemecustomizing Opac Text and LabelsLogo ImagesAdded ContentCustomizing the Results PageCustomizing the Details PageBibTemplateCustomizing the SlimpacIntegrating a Evergreen Search Form on a Web PageChapter 40. OpenSRFIntroducing OpenSRFWriting an OpenSRF ServiceOpenSRF Communication FlowsEvergreen-specific OpenSRF servicesChapter 41. Evergreen Data Models and AccessExploring the Database SchemaDatabase access methodsEvergreen Interface Definition Language (IDL)open-ils.cstore data access interfacesopen-ils.pcrud data access interfacesTransaction and savepoint controlAdding an IDL entry for ResolverResolverChapter 42. Introduction to SQL for Evergreen AdministratorsIntroduction to SQL DatabasesBasic SQL queriesAdvanced SQL queriesUnderstanding query performance with EXPLAINInserting, updating, and deleting dataQuery requestsChapter 43. JSON QueriesChapter 44. SuperCatUsing SuperCatAdding new SuperCat FormatsCustomizing SuperCat FormatsPart VIII. AppendicesAppendix A. Evergreen Installation ChecklistChapter 45. Database SchemaSchema acqSchema actionSchema action_triggerSchema actorSchema assetSchema auditorSchema authoritySchema biblioSchema bookingSchema configSchema containerSchema extend_reporterSchema metabibSchema moneySchema offlineSchema permissionSchema publicSchema reporterSchema searchSchema serialSchema statsSchema vandelayAppendix B. About this DocumentationAbout the Documentation Interest Group (DIG)How to ParticipateAppendix C. 
Getting More InformationGlossaryIndex +Evergreen 1.6 Documentation - Draft VersionDocumentation Interest GroupDocBook XSL Stylesheets with Apache FOPEvergreen 1.6 DocumentationTable of ContentsPart I. IntroductionChapter 1. About EvergreenChapter 2. Release NotesPart II. Public Access CatalogChapter 3. Basic SearchChapter 4. Advanced SearchChapter 5. Search TipsChapter 6. Search MethodologyChapter 7. Search URLChapter 8. Search ResultsChapter 9. My AccountChapter 10. Simple Self Check InterfaceUsing the Self Check InterfaceCustomizing the Self Check InterfacePart III. Core Staff TasksChapter 11. Using the Staff ClientLogging in to EvergreenNavigationCustomizing the Staff ClientChapter 12. CirculationPatron RecordsCirculating ItemsBills and PaymentsHoldsTransit ItemsOffline TransactionsChapter 13. CataloguingLocating RecordsAdding New Bibliographic RecordsWorking with the MARC EditorCataloging TemplatesBucketsMerging Bibliographic RecordsAdding holdings to title recordsCataloguing Electronic Resources -- Finding Them in OPAC searchesPrinting Spine and Pocket LabelsDeleting RecordsChapter 14. Using the Booking ModuleCreating a Booking ReservationCancelling a ReservationCreating a Pull ListCapturing Items for ReservationsPicking Up ReservationsReturning ReservationsPart IV. AdministrationChapter 15. System Requirements and Hardware ConfigurationsServer Minimum RequirementsServer Hardware Configurations and ClusteringStaff Client RequirementsChapter 16. 
Server-side Installation of Evergreen SoftwareInstalling Server-Side SoftwareInstalling OpenSRF 1.4.x On Ubuntu or DebianInstalling Evergreen 1.6.1.x On Ubuntu or DebianStarting EvergreenTesting Your Evergreen InstallationPost-Installation ChoresRemove temporary Apache configuration changesConfigure a permanent SSL key(OPTIONAL) IP-Redirection(OPTIONAL) Set Up Support For ReportsInstalling In Virtualized Linux EnvironmentsInstalling Virtualization SoftwareInstalling "VirtualBox" Virtualization SoftwareInstalling "VMware" Virtualization SoftwareInstalling Linux / Evergreen on Virtualization SoftwareManually install Linux and EvergreenDownload and install a prebuilt software imageChapter 17. Installation of Evergreen Staff Client SoftwareInstalling the Staff ClientInstalling a Pre-Built Staff ClientInstalling on WindowsInstalling on Mac OSInstalling on LinuxBuilding the Staff ClientAdvanced Build OptionsInstalling and Activating a Manually Built Staff ClientPackaging the Staff ClientStaff Client Automatic UpdatesOther tipsRunning the Staff ClientAssigning Workstation NamesRunning the Staff Client Over An SSH TunnelSetting Up an SSH TunnelConfiguring the Staff Client to Use the SSH TunnelNavigating a Tabbed InterfaceChapter 18. Upgrading Evergreen to 1.6.1Backing Up DataUpgrading OpenSRF to 1.6Upgrade Evergreen from 1.4 to 1.6.1Upgrade Evergreen from 1.6.0 to 1.6.1Restart Evergreen and TestUpgrading PostgreSQL from 8.2 to 8.4Chapter 19. Server Operations and MaintenanceStarting, Stopping and RestartingAutomating Evergreen Startup and ShutdownBacking UpSecurityManaging Log FilesInstalling PostgreSQL from SourceConfiguring PostgreSQLChapter 20. Migrating DataMigrating Bibliographic RecordsMigrating Bibliographic Records Using the ESI Migration ToolsAdding Copies to Bibliographic RecordsMigrating Patron DataRestoring your Evergreen Database to an Empty StateExporting Bibliographic Records into MARC filesImporting Authority RecordsChapter 21. 
Troubleshooting System ErrorsChapter 22. Languages and LocalizationEnabling and Disabling LanguagesChapter 23. SRU and Z39.50 ServerTesting SRU with yaz-clientSetting up Z39.50 server supportChapter 24. SIP ServerInstalling the SIP ServerSIP CommunicationChapter 25. Server AdministrationOrganizational Unit Types and Organizational UnitsUser and Group PermissionsStaff AccountsCopy StatusBilling TypesCirculation ModifiersCataloging TemplatesAdjusting Search Relevancy RankingsNotificationsHold NotificationsOverdue and Predue NotificationsChapter 26. Local Administration MenuOverviewReceipt Template EditorGlobal Font and Sound SettingsPrinter Settings EditorClosed Dates EditorCopy Locations EditorLibrary Settings EditorNon-Catalogued Type EditorGroup Penalty ThresholdsStatistical Categories EditorField DocumentationSurveysCash ReportsChapter 27. Action TriggersEvent DefinitionsHooksReactorsValidatorsProcessing Action TriggersChapter 28. Booking Module AdministrationMake a Cataloged Item Bookable in AdvanceMake a Cataloged Item Bookable On the FlyCreate a Bookable Status for Non-Bibliographic ItemsSetting Booking PermissionsPart V. ReportsChapter 29. Starting and Stopping the Reporter DaemonChapter 30. FoldersCreating FoldersManaging FoldersChapter 31. Creating TemplatesChoosing Report FieldsApplying FiltersChapter 32. Generating Reports from TemplatesChapter 33. Viewing Report OutputChapter 34. Cloning Shared TemplatesChapter 35. Running Recurring ReportsChapter 36. Template TerminologyPart VI. Third Party System IntegrationPart VII. DevelopmentChapter 37. Evergreen File Structure and Configuration FilesEvergreen Directory StructureEvergreen Configuration FilesChapter 38. Customizing the Staff ClientChanging Colors and ImagesChanging Labels and MessagesChanging the Search SkinChapter 39. 
Customizing the OPACChange the Color Schemecustomizing Opac Text and LabelsLogo ImagesAdded ContentCustomizing the Results PageCustomizing the Details PageBibTemplateCustomizing the SlimpacIntegrating a Evergreen Search Form on a Web PageChapter 40. OpenSRFIntroducing OpenSRFWriting an OpenSRF ServiceOpenSRF Communication FlowsEvergreen-specific OpenSRF servicesChapter 41. Evergreen Data Models and AccessExploring the Database SchemaDatabase access methodsEvergreen Interface Definition Language (IDL)open-ils.cstore data access interfacesopen-ils.pcrud data access interfacesTransaction and savepoint controlAdding an IDL entry for ResolverResolverChapter 42. Introduction to SQL for Evergreen AdministratorsIntroduction to SQL DatabasesBasic SQL queriesAdvanced SQL queriesUnderstanding query performance with EXPLAINInserting, updating, and deleting dataQuery requestsChapter 43. JSON QueriesChapter 44. SuperCatUsing SuperCatAdding new SuperCat FormatsCustomizing SuperCat FormatsPart VIII. AppendicesAppendix A. Evergreen Installation ChecklistChapter 45. Database SchemaSchema acqSchema actionSchema action_triggerSchema actorSchema assetSchema auditorSchema authoritySchema biblioSchema bookingSchema configSchema containerSchema extend_reporterSchema metabibSchema moneySchema offlineSchema permissionSchema publicSchema reporterSchema searchSchema serialSchema statsSchema vandelayAppendix B. About this DocumentationAbout the Documentation Interest Group (DIG)How to ParticipateAppendix C. Getting More InformationGlossaryIndex Report errors in this documentation using Launchpad. Report any errors in this documentation using Launchpad. 
-      Evergreen 1.6 DocumentationDraft VersionDocumentation Interest GroupEvergreen 1.6 Documentation: Draft VersionDocumentation Interest GroupCopyright © 2010 Evergreen Community
+      Evergreen 1.6 DocumentationDraft VersionDocumentation Interest GroupEvergreen 1.6 Documentation: Draft VersionDocumentation Interest GroupCopyright © 2010 Evergreen Community
-      This document was updated 2011-03-23.
+      This document was updated 2011-03-26.
       Evergreen 1.6 DocumentationEvergreen 1.6 Documentation
       Report errors in this documentation using Launchpad. Evergreen 1.6 Documentation Report any errors in this documentation using Launchpad.
-      Evergreen 1.6 DocumentationTable of ContentsI. Introduction 1. About Evergreen 2. Release Notes II. Public Access Catalog 3. Basic Search 4. Advanced Search 5. Search Tips 6. Search Methodology 7. Search URL 8. Search Results 9. My Account 10. Simple Self Check Interface Using the Self Check Interface Customizing the Self Check Interface III. Core Staff Tasks 11. Using the Staff Client Logging in to Evergreen Navigation Customizing the Staff Client 12. Circulation Patron Records Circulating Items Bills and Payments Holds Transit Items Offline Transactions 13. Cataloguing Locating Records Adding New Bibliographic Records Working with the MARC Editor Cataloging Templates Buckets Merging Bibliographic Records Adding holdings to title records Cataloguing Electronic Resources -- Finding Them in OPAC searches Printing Spine and Pocket Labels Deleting Records 14. Using the Booking Module Creating a Booking Reservation Cancelling a Reservation Creating a Pull List Capturing Items for Reservations Picking Up Reservations Returning Reservations IV. Administration 15. System Requirements and Hardware Configurations Server Minimum Requirements Server Hardware Configurations and Clustering Staff Client Requirements 16.
Server-side Installation of Evergreen Software Installing Server-Side Software Installing OpenSRF 1.4.x On Ubuntu or + Evergreen 1.6 DocumentationTable of ContentsI. Introduction 1. About Evergreen 2. Release Notes II. Public Access Catalog 3. Basic Search 4. Advanced Search 5. Search Tips 6. Search Methodology 7. Search URL 8. Search Results 9. My Account 10. Simple Self Check Interface Using the Self Check Interface Customizing the Self Check Interface III. Core Staff Tasks 11. Using the Staff Client Logging in to Evergreen Navigation Customizing the Staff Client 12. Circulation Patron Records Circulating Items Bills and Payments Holds Transit Items Offline Transactions 13. Cataloguing Locating Records Adding New Bibliographic Records Working with the MARC Editor Cataloging Templates Buckets Merging Bibliographic Records Adding holdings to title records Cataloguing Electronic Resources -- Finding Them in OPAC searches Printing Spine and Pocket Labels Deleting Records 14. Using the Booking Module Creating a Booking Reservation Cancelling a Reservation Creating a Pull List Capturing Items for Reservations Picking Up Reservations Returning Reservations IV. Administration 15. System Requirements and Hardware Configurations Server Minimum Requirements Server Hardware Configurations and Clustering Staff Client Requirements 16. Server-side Installation of Evergreen Software Installing Server-Side Software Installing OpenSRF 1.4.x On Ubuntu or Debian Installing Evergreen 1.6.1.x On Ubuntu or - Debian Starting Evergreen Testing Your Evergreen Installation Post-Installation Chores Installing In Virtualized Linux Environments 17. 
Installation of Evergreen Staff Client Software Installing the Staff Client Installing a Pre-Built Staff Client Building the Staff Client Advanced Build Options Installing and Activating a Manually Built Staff Client Packaging the Staff Client Staff Client Automatic Updates Other tips Running the Staff Client Assigning Workstation Names Running the Staff Client Over An SSH Tunnel Navigating a Tabbed Interface 18. Upgrading Evergreen to 1.6.1 Backing Up Data Upgrading OpenSRF to 1.6 Upgrade Evergreen from 1.4 to 1.6.1 Upgrade Evergreen from 1.6.0 to 1.6.1 Restart Evergreen and Test Upgrading PostgreSQL from 8.2 to 8.4 19. Server Operations and Maintenance Starting, Stopping and Restarting Automating Evergreen Startup and Shutdown Backing Up Security Managing Log Files Installing PostgreSQL from Source Configuring PostgreSQL 20. Migrating Data Migrating Bibliographic Records Migrating Bibliographic Records Using the ESI Migration Tools Adding Copies to Bibliographic Records Migrating Patron Data Restoring your Evergreen Database to an Empty State Exporting Bibliographic Records into MARC files Importing Authority Records 21. Troubleshooting System Errors 22. Languages and Localization Enabling and Disabling Languages 23. SRU and Z39.50 Server Testing SRU with yaz-client Setting up Z39.50 server support 24. SIP Server Installing the SIP Server SIP Communication 25. Server Administration Organizational Unit Types and Organizational Units User and Group Permissions Staff Accounts Copy Status Billing Types Circulation Modifiers Cataloging Templates Adjusting Search Relevancy Rankings Notifications Hold Notifications Overdue and Predue Notifications 26. Local Administration Menu Overview Receipt Template Editor Global Font and Sound Settings Printer Settings Editor Closed Dates Editor Copy Locations Editor Library Settings Editor Non-Catalogued Type Editor Group Penalty Thresholds Statistical Categories Editor Field Documentation Surveys Cash Reports 27. 
Action Triggers Event Definitions Hooks Reactors Validators Processing Action Triggers 28. Booking Module Administration Make a Cataloged Item Bookable in Advance Make a Cataloged Item Bookable On the Fly Create a Bookable Status for Non-Bibliographic Items Setting Booking Permissions V. Reports 29. Starting and Stopping the Reporter Daemon 30. Folders Creating Folders Managing Folders 31. Creating Templates Choosing Report Fields Applying Filters 32. Generating Reports from Templates 33. Viewing Report Output 34. Cloning Shared Templates 35. Running Recurring Reports 36. Template Terminology VI. Third Party System Integration VII. Development 37. Evergreen File Structure and Configuration Files Evergreen Directory Structure Evergreen Configuration Files 38. Customizing the Staff Client Changing Colors and Images Changing Labels and Messages Changing the Search Skin 39. Customizing the OPAC Change the Color Scheme customizing Opac Text and Labels Logo Images Added Content Customizing the Results Page Customizing the Details Page BibTemplate Customizing the Slimpac Integrating a Evergreen Search Form on a Web Page 40. OpenSRF Introducing OpenSRF Writing an OpenSRF Service OpenSRF Communication Flows Evergreen-specific OpenSRF services 41. Evergreen Data Models and Access Exploring the Database Schema Database access methods Evergreen Interface Definition Language (IDL) open-ils.cstore data access interfaces open-ils.pcrud data access interfaces Transaction and savepoint control Adding an IDL entry for ResolverResolver 42. Introduction to SQL for Evergreen Administrators Introduction to SQL Databases Basic SQL queries Advanced SQL queries Understanding query performance with EXPLAIN Inserting, updating, and deleting data Query requests 43. JSON Queries 44. SuperCat Using SuperCat Adding new SuperCat Formats Customizing SuperCat Formats VIII. Appendices A. Evergreen Installation Checklist 45. 
Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema reporter Schema search Schema serial Schema stats Schema vandelay B. About this Documentation About the Documentation Interest Group (DIG) How to Participate C. Getting More Information Glossary Index Evergreen 1.6 DocumentationEvergreen 1.6 Documentation + Debian Starting Evergreen Testing Your Evergreen Installation Post-Installation Chores Installing In Virtualized Linux Environments 17. Installation of Evergreen Staff Client Software Installing the Staff Client Installing a Pre-Built Staff Client Building the Staff Client Advanced Build Options Installing and Activating a Manually Built Staff Client Packaging the Staff Client Staff Client Automatic Updates Other tips Running the Staff Client Assigning Workstation Names Running the Staff Client Over An SSH Tunnel Navigating a Tabbed Interface 18. Upgrading Evergreen to 1.6.1 Backing Up Data Upgrading OpenSRF to 1.6 Upgrade Evergreen from 1.4 to 1.6.1 Upgrade Evergreen from 1.6.0 to 1.6.1 Restart Evergreen and Test Upgrading PostgreSQL from 8.2 to 8.4 19. Server Operations and Maintenance Starting, Stopping and Restarting Automating Evergreen Startup and Shutdown Backing Up Security Managing Log Files Installing PostgreSQL from Source Configuring PostgreSQL 20. Migrating Data Migrating Bibliographic Records Migrating Bibliographic Records Using the ESI Migration Tools Adding Copies to Bibliographic Records Migrating Patron Data Restoring your Evergreen Database to an Empty State Exporting Bibliographic Records into MARC files Importing Authority Records 21. Troubleshooting System Errors 22. Languages and Localization Enabling and Disabling Languages 23. 
SRU and Z39.50 Server Testing SRU with yaz-client Setting up Z39.50 server support 24. SIP Server Installing the SIP Server SIP Communication 25. Server Administration Organizational Unit Types and Organizational Units User and Group Permissions Staff Accounts Copy Status Billing Types Circulation Modifiers Cataloging Templates Adjusting Search Relevancy Rankings Notifications Hold Notifications Overdue and Predue Notifications 26. Local Administration Menu Overview Receipt Template Editor Global Font and Sound Settings Printer Settings Editor Closed Dates Editor Copy Locations Editor Library Settings Editor Non-Catalogued Type Editor Group Penalty Thresholds Statistical Categories Editor Field Documentation Surveys Cash Reports 27. Action Triggers Event Definitions Hooks Reactors Validators Processing Action Triggers 28. Booking Module Administration Make a Cataloged Item Bookable in Advance Make a Cataloged Item Bookable On the Fly Create a Bookable Status for Non-Bibliographic Items Setting Booking Permissions V. Reports 29. Starting and Stopping the Reporter Daemon 30. Folders Creating Folders Managing Folders 31. Creating Templates Choosing Report Fields Applying Filters 32. Generating Reports from Templates 33. Viewing Report Output 34. Cloning Shared Templates 35. Running Recurring Reports 36. Template Terminology VI. Third Party System Integration VII. Development 37. Evergreen File Structure and Configuration Files Evergreen Directory Structure Evergreen Configuration Files 38. Customizing the Staff Client Changing Colors and Images Changing Labels and Messages Changing the Search Skin 39. Customizing the OPAC Change the Color Scheme customizing Opac Text and Labels Logo Images Added Content Customizing the Results Page Customizing the Details Page BibTemplate Customizing the Slimpac Integrating a Evergreen Search Form on a Web Page 40. 
OpenSRF Introducing OpenSRF Writing an OpenSRF Service OpenSRF Communication Flows Evergreen-specific OpenSRF services 41. Evergreen Data Models and Access Exploring the Database Schema Database access methods Evergreen Interface Definition Language (IDL) open-ils.cstore data access interfaces open-ils.pcrud data access interfaces Transaction and savepoint control Adding an IDL entry for ResolverResolver 42. Introduction to SQL for Evergreen Administrators Introduction to SQL Databases Basic SQL queries Advanced SQL queries Understanding query performance with EXPLAIN Inserting, updating, and deleting data Query requests 43. JSON Queries 44. SuperCat Using SuperCat Adding new SuperCat Formats Customizing SuperCat Formats VIII. Appendices A. Evergreen Installation Checklist 45. Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema reporter Schema search Schema serial Schema stats Schema vandelay B. About this Documentation About the Documentation Interest Group (DIG) How to Participate C. Getting More Information Glossary Index Evergreen 1.6 DocumentationEvergreen 1.6 Documentation Report errors in this documentation using Launchpad. Evergreen 1.6 Documentation Report any errors in this documentation using Launchpad. - Evergreen 1.6 DocumentationList of Figures16.1. Starting the Windows installation of VirtualBox 16.2. Welcome to VirtualBox setup wizard 16.3. Accept the license agreement 16.4. Waiting for installation to complete 16.5. Installation is complete; start VirtualBox 16.6. Starting VirtualBox for the first time 16.7. Selecting the software image in Virtual Media Manager 16.8. New software image added to VirtualBox 16.9. Creating a new VM 16.10. Setting the VM name and OS type 16.11. Setting memory size 16.12. 
Setting up the Virtual Hard Disk 16.13. Finishing definition of new VM 16.14. Summary of the new VM Evergreen 1.6 DocumentationEvergreen 1.6 Documentation + Evergreen 1.6 DocumentationList of Figures16.1. Starting the Windows installation of VirtualBox 16.2. Welcome to VirtualBox setup wizard 16.3. Accept the license agreement 16.4. Waiting for installation to complete 16.5. Installation is complete; start VirtualBox 16.6. Starting VirtualBox for the first time 16.7. Selecting the software image in Virtual Media Manager 16.8. New software image added to VirtualBox 16.9. Creating a new VM 16.10. Setting the VM name and OS type 16.11. Setting memory size 16.12. Setting up the Virtual Hard Disk 16.13. Finishing definition of new VM 16.14. Summary of the new VM Evergreen 1.6 DocumentationEvergreen 1.6 Documentation Report errors in this documentation using Launchpad. Evergreen 1.6 Documentation Report any errors in this documentation using Launchpad. - Evergreen 1.6 DocumentationList of Tables12.1. Hold Levels Explained: 16.1. Evergreen Software Dependencies 16.2. Keyword Targets for OpenSRF "make" Command 16.3. Sample XPath syntax for editing "opensrf_core.xml" 16.4. Keyword Targets for Evergreen "make" Command 16.5. Sample XPath syntax for editing "opensrf_core.xml" 16.6. Linux / Evergreen Virtual Images 16.7. Default Accounts 17.1. Evergreen / XULrunner Dependencies 17.2. Keywords For Advanced Build Options 17.3. Icon IDs for Packaging a Windows Client 19.1. Suggested configuration values 25.1. Permissions Table 25.2. Copy Status Table 25.3. search.relevance_adjustment table 27.1. Action Trigger Event Definitions 27.2. Hooks 27.3. Action Trigger Reactors 27.4. Action Trigger Validators 37.1. Evergreen Directory Structure 37.2. Key Evergreen Configuration Files 37.3. Useful Evergreen Scripts 42.1. Examples: database object names 42.2. Evergreen schema names 42.3. PostgreSQL data types used by Evergreen 42.4. 
Example: Some potential natural primary keys for a table of people 42.5. Example: Evergreen’s copy / call number / bibliographic record relationships B.1. Evergreen DIG Participants B.2. Past DIG Participants + Evergreen 1.6 DocumentationList of Tables12.1. Hold Levels Explained: 16.1. Evergreen Software Dependencies 16.2. Keyword Targets for OpenSRF "make" Command 16.3. Sample XPath syntax for editing "opensrf_core.xml" 16.4. Keyword Targets for Evergreen "make" Command 16.5. Sample XPath syntax for editing "opensrf_core.xml" 16.6. Linux / Evergreen Virtual Images 16.7. Default Accounts 17.1. Evergreen / XULrunner Dependencies 17.2. Keywords For Advanced Build Options 17.3. Icon IDs for Packaging a Windows Client 19.1. Suggested configuration values 25.1. Permissions Table 25.2. Copy Status Table 25.3. search.relevance_adjustment table 27.1. Action Trigger Event Definitions 27.2. Hooks 27.3. Action Trigger Reactors 27.4. Action Trigger Validators 37.1. Evergreen Directory Structure 37.2. Key Evergreen Configuration Files 37.3. Useful Evergreen Scripts 42.1. Examples: database object names 42.2. Evergreen schema names 42.3. PostgreSQL data types used by Evergreen 42.4. Example: Some potential natural primary keys for a table of people 42.5. Example: Evergreen’s copy / call number / bibliographic record relationships B.1. Evergreen DIG Participants B.2. Past DIG Participants @@ -38,9 +38,9 @@ Chapter 1. About EvergreenChapter 1. About Evergreen Evergreen is an open source library automation software designed to meet the needs of the very smallest to the very largest libraries and consortia. Through its staff interface, it facilitates the management, cataloging, and circulation of library materials, and through its online public access interface it helps patrons find those materials. - The Evergreen software is freely licensed under the GNU General Public License, meaning that it is free to download, use, view, modify, and share. 
It has an active development and user community, as well as several companies offering migration, support, hosting, and development services. + The Evergreen software is freely licensed under the GNU General Public License, meaning that it is free to download, use, view, modify, and share. It has an active development and user community, as well as several companies offering migration, support, hosting, and development services. The community’s development requirements state that Evergreen must be: - •Stable, even under extreme load.•Robust, and capable of handling a high volume of transactions and simultaneous users.•Flexible, to accommodate the varied needs of libraries.•Secure, to protect our patrons’ privacy and data.•User-friendly, to facilitate patron and staff use of the system. + •Stable, even under extreme load.•Robust, and capable of handling a high volume of transactions and simultaneous users.•Flexible, to accommodate the varied needs of libraries.•Secure, to protect our patrons’ privacy and data.•User-friendly, to facilitate patron and staff use of the system. Evergreen, which first launched in 2006 now powers over 544 libraries of every type – public, academic, special, school, and even tribal and home libraries – in over a dozen countries worldwide. Chapter 2. Release NotesChapter 2. Release Notes @@ -51,169 +51,169 @@ - 1.6.0.81.6.0.8 + 1.6.0.81.6.0.8 - New featuresNew features + New featuresNew features - •Added index for case insensitive barcode searching (1.6.0.7) for speed.•Move to BibTemplate for general title detail display, not just overlay of MVR-based display. + •Added index for case insensitive barcode searching (1.6.0.7) for speed.•Move to BibTemplate for general title detail display, not just overlay of MVR-based display. 
- Bug fixesBug fixes + Bug fixesBug fixes - •Offline transaction timestamp and export fixes.•More configuration interface improvements.•Printing improvements to avoid the dreaded “inner print_tree” errors.•Fix Google Books full-text functionality.•User Editor improvements (addresses, appropriate required fields). + •Offline transaction timestamp and export fixes.•More configuration interface improvements.•Printing improvements to avoid the dreaded “inner print_tree” errors.•Fix Google Books full-text functionality.•User Editor improvements (addresses, appropriate required fields). - 1.6.0.71.6.0.7 + 1.6.0.71.6.0.7 - New featuresNew features + New featuresNew features - •Made barcode searching from the general user search interface case insensitive + •Made barcode searching from the general user search interface case insensitive - Bug fixesBug fixes + Bug fixesBug fixes - •FIFO Holds Org Setting name in the Library Settings Editor did not match that used by the SQL – repaired.•Repaired Authority Record ingest.•Backdating timestamp format bug fixed – patch from James Fournie at SITKA.•Configuration interface bugs addressed (ongoing improvement from 1.6.0.4).•Action/Trigger (notifications, etc) bugs addressed.•In-Database record merging bug fixes (indicators, Located URIs)•In-Database hold testing stored procedure bug fixed – patch from John Craig. + •FIFO Holds Org Setting name in the Library Settings Editor did not match that used by the SQL – repaired.•Repaired Authority Record ingest.•Backdating timestamp format bug fixed – patch from James Fournie at SITKA.•Configuration interface bugs addressed (ongoing improvement from 1.6.0.4).•Action/Trigger (notifications, etc) bugs addressed.•In-Database record merging bug fixes (indicators, Located URIs)•In-Database hold testing stored procedure bug fixed – patch from John Craig. 
- 1.6.0.61.6.0.6 + 1.6.0.61.6.0.6 - SecuritySecurity + SecuritySecurity - •Address a security vulnerability in open-ils.pcrud that allows retrieval of information beyond the bounds of the permissions for + •Address a security vulnerability in open-ils.pcrud that allows retrieval of information beyond the bounds of the permissions for the targeted objects. - Bug fixesBug fixes + Bug fixesBug fixes - •Remove a call to a non-existent method.•Add debugging messages to the action-trigger script and server code + •Remove a call to a non-existent method.•Add debugging messages to the action-trigger script and server code - 1.6.0.51.6.0.5 + 1.6.0.51.6.0.5 - New featuresNew features + New featuresNew features - • Patch from James Fournie to add a setting for first-in, first-out (FIFO) holds resolution so that items checked in will be assigned to holds + • Patch from James Fournie to add a setting for first-in, first-out (FIFO) holds resolution so that items checked in will be assigned to holds by request date first, rather than proximity. - Bug fixesBug fixes + Bug fixesBug fixes - •Patch from Dan Wells to enable the bookbag menu to show up in Craftsman skin.•Patch from Bill Ott to add missing apostrophe in rdetail.js.•Fix for report editor parameters not consistently showing up.•Log bib search timeouts. + •Patch from Dan Wells to enable the bookbag menu to show up in Craftsman skin.•Patch from Bill Ott to add missing apostrophe in rdetail.js.•Fix for report editor parameters not consistently showing up.•Log bib search timeouts. - 1.6.0.41.6.0.4 + 1.6.0.41.6.0.4 - New featuresNew features + New featuresNew features - •Patch from Dan Wells to add an org-unit setting to restrict renewals when the item in question is needed to fulfill a hold. + •Patch from Dan Wells to add an org-unit setting to restrict renewals when the item in question is needed to fulfill a hold. 
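The FIFO setting noted above changes which hold a checked-in item fills: oldest request first, instead of the default proximity-based choice. The difference between the two orderings can be sketched in a few lines; the field names and data here are hypothetical for illustration, and Evergreen's real hold targeting is considerably more involved than this:

```python
from dataclasses import dataclass
from datetime import date
from operator import attrgetter

@dataclass
class Hold:
    patron: str
    request_date: date   # when the hold was placed
    proximity: int       # org-unit distance from the checkin library (0 = here)

holds = [
    Hold("alice", date(2010, 1, 5), proximity=2),   # older request, farther away
    Hold("bob",   date(2010, 2, 1), proximity=0),   # newer request, local
]

# Proximity-based (the default): the hold closest to the checkin library wins,
# with request date only as a tie-breaker.
by_proximity = min(holds, key=attrgetter("proximity", "request_date"))

# FIFO: the oldest request wins, regardless of distance.
by_fifo = min(holds, key=attrgetter("request_date"))

print(by_proximity.patron)   # bob
print(by_fifo.patron)        # alice
```

The same pool of holds thus resolves differently depending on the org setting, which is why the note emphasizes that FIFO assigns "by request date first, rather than proximity."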
- Bug fixesBug fixes + Bug fixesBug fixes - •Patch from Jason Stephenson to allow the EVERYTHING permission in permission.usr_has_perm_at_nd.•Patch from Warren Layton to remove a debugging alert in the permission creation interface.•Patch from Warren Layton to sort Z39.50 servers in Z39.50 import interface.•Patch from Galen Charlton to prevent legacy 852 fields from being exported during bib+holdings export.•Patch from Galen Charlton to prevent one bad MARC record from spoiling the rest of the export.•Patch from Galen Charlton to remove empty XML elements and control fields when ingesting a bib record.• Patch from Galen Charlton. This patch adds additional calls to escape_xml to handle cases where patron or library data could contain - ampersand or other characters that need to be converted to entities. Issue discovered by Bibliomation; patch includes contributions by Ben Ostrowsky.•Enable display of barcodes in brief circulation interface even when patron has no middle name (problem diagnosed by - Bill Ott).•Correct the calculation of patron bills.•Fix parsing of colons in search phrases.•Fix handling of horizontal patron summary setting.•Various fixes for server administration interfaces.•Correct date handling in My Account interface.•Prevent an exception from being thrown when a standing penalty is removed.•Fix ISSN quicksearch (bug reported by Dan Wells).•Prevent colons from being incorrectly inserted into titles in search results display.•Fix survey interface in patron editor to enable it to save results correctly.•Corrections in in-database circulation: enable check-out and renewal of pre-cataloged items, process non-cataloged items.•Correct Unicode handling in SRU/Z39.50 server. 
+ •Patch from Jason Stephenson to allow the EVERYTHING permission in permission.usr_has_perm_at_nd.•Patch from Warren Layton to remove a debugging alert in the permission creation interface.•Patch from Warren Layton to sort Z39.50 servers in Z39.50 import interface.•Patch from Galen Charlton to prevent legacy 852 fields from being exported during bib+holdings export.•Patch from Galen Charlton to prevent one bad MARC record from spoiling the rest of the export.•Patch from Galen Charlton to remove empty XML elements and control fields when ingesting a bib record.• Patch from Galen Charlton. This patch adds additional calls to escape_xml to handle cases where patron or library data could contain + ampersand or other characters that need to be converted to entities. Issue discovered by Bibliomation; patch includes contributions by Ben Ostrowsky.•Enable display of barcodes in brief circulation interface even when patron has no middle name (problem diagnosed by + Bill Ott).•Correct the calculation of patron bills.•Fix parsing of colons in search phrases.•Fix handling of horizontal patron summary setting.•Various fixes for server administration interfaces.•Correct date handling in My Account interface.•Prevent an exception from being thrown when a standing penalty is removed.•Fix ISSN quicksearch (bug reported by Dan Wells).•Prevent colons from being incorrectly inserted into titles in search results display.•Fix survey interface in patron editor to enable it to save results correctly.•Corrections in in-database circulation: enable check-out and renewal of pre-cataloged items, process non-cataloged items.•Correct Unicode handling in SRU/Z39.50 server. 
- 1.6.0.31.6.0.3
+ 1.6.0.31.6.0.3
- Bug fixesBug fixes
+ Bug fixesBug fixes
- •Patch from Dan Wells to address a regression in the Reshelving-to-Available method call.•Patch from Warren Layton of NRCAN to address a regression in date calculation code.•Fix for offline identification requirement (relaxed to match on-line patron registration).
+ •Patch from Dan Wells to address a regression in the Reshelving-to-Available method call.•Patch from Warren Layton of NRCAN to address a regression in date calculation code.•Fix for offline identification requirement (relaxed to match on-line patron registration).
- 1.6.0.21.6.0.2
+ 1.6.0.21.6.0.2
- New featuresNew features
+ New featuresNew features
- •Support indexing normalization and search of ratio-like strings.•Support specific-index searching via the basic search dropdown.
+ •Support indexing normalization and search of ratio-like strings.•Support specific-index searching via the basic search dropdown.
- Bug fixesBug fixes
+ Bug fixesBug fixes
- •Fix for search bug introduced in 1.6.0.1 which primarily affected Z39.50 searches against Evergreen.•Fix for offline patron blocked list generation (Patch from Joe Atzberger).•General translation and internationalization improvements.•Force at least one non-system billing type to exist (Identified by Dan Wells).
+ •Fix for search bug introduced in 1.6.0.1 which primarily affected Z39.50 searches against Evergreen.•Fix for offline patron blocked list generation (Patch from Joe Atzberger).•General translation and internationalization improvements.•Force at least one non-system billing type to exist (Identified by Dan Wells).
- 1.6.0.11.6.0.1
+ 1.6.0.11.6.0.1
- Bug fixesBug fixes
+ Bug fixesBug fixes
- •Overdue notice XML normalization and encoding fixes.•Remove cosmetic issues with Offline Mode.•Backport compatibility-improved triggers for summary data collection.
- •(fixed super-simple record extract view issues for isbn and issn)
- •Interface fixes for Self Check.
- •(prevent login of patrons who are marked as invalid) - •General grid-related interface cleanups. - - •(fixed pixel and alignment issues in table views accessible from admin settings) - •String translation interface fix – translated strings can be removed. - •(the translation windows now perform removals correctly) - •Command-line data extraction script fixes (Galen Charlton). - •(improved batch export) - •Fixed billing time stamp calculation. - •(e.g. a book that circulates for whole days that is technically due at 3pm doesn't accrue fines until after the library is + •Overdue notice XML normalization and encoding fixes.•Remove cosmetic issues with Offline Mode.•Backport compatibility-improved triggers for summary data collection. + •(fixed super-simple record extract view issues for isbn and issn) + •Interface fixes for Self Check. + •(prevent login of patrons who are marked as invalid) + •General grid-related interface cleanups. + + •(fixed pixel and alignment issues in table views accessible from admin settings) + •String translation interface fix – translated strings can be removed. + •(the translation windows now perform removals correctly) + •Command-line data extraction script fixes (Galen Charlton). + •(improved batch export) + •Fixed billing time stamp calculation. + •(e.g. a book that circulates for whole days that is technically due at 3pm doesn't accrue fines until after the library is closed) - •Fix for searches containing colons but no command tag. - •(the : is no longer assumed to be an index specification so title searches for Homeward Bound: the Incredible Journey + •Fix for searches containing colons but no command tag. + •(the : is no longer assumed to be an index specification so title searches for Homeward Bound: the Incredible Journey will return results) - •Fix for Z39.50 searches containing diacritical marks (Dan Scott). 
- •(the SRU is now better at detecting incoming encoding) - •Horizontal user summary display fix in the Checkout entry point.•Return of Shadowed Record styling in the staff client for records with no items or no items at this location (Bill Ott).•Holdings import fixes (Dan Wells) (see changeset 15353). - •(Found and fixed the Vandelay bug that manifested based on log in type.) - •Holdings import fixes (Dan Wells) (see changeset 15353).•Fixed an error that occurred when renewing multiple items at once in Items Out + •Fix for Z39.50 searches containing diacritical marks (Dan Scott). + •(the SRU is now better at detecting incoming encoding) + •Horizontal user summary display fix in the Checkout entry point.•Return of Shadowed Record styling in the staff client for records with no items or no items at this location (Bill Ott).•Holdings import fixes (Dan Wells) (see changeset 15353). + •(Found and fixed the Vandelay bug that manifested based on log in type.) + •Holdings import fixes (Dan Wells) (see changeset 15353).•Fixed an error that occurred when renewing multiple items at once in Items Out - New features (front end)New features (front end) + New features (front end)New features (front end) - •French translation updates.•Several new translations: - •Russian (from Tigran Zargaryan)•Czech (forward-ported from 1.4)•British English (submitted via Launchpad)•Spanish (submitted via Launchpad)•Brazilian Portuguese (submitted via Launchpad) - •More places to access Record Buckets in the staff client•Virtual due date for non-cataloged circulations honors closed dates•Differentiated messages for inactive vs. non-existent users. 
- •(error messages in patron OPAC log in are now different for inactive patrons versus bad log in (typo)/non-existent user) + •French translation updates.•Several new translations: + •Russian (from Tigran Zargaryan)•Czech (forward-ported from 1.4)•British English (submitted via Launchpad)•Spanish (submitted via Launchpad)•Brazilian Portuguese (submitted via Launchpad) + •More places to access Record Buckets in the staff client•Virtual due date for non-cataloged circulations honors closed dates•Differentiated messages for inactive vs. non-existent users. + •(error messages in patron OPAC log in are now different for inactive patrons versus bad log in (typo)/non-existent user) - New features (server/administration)New features (server/administration) + New features (server/administration)New features (server/administration) - •Action/Trigger initiator script. - •(1.6.0.1 includes the default script to initiate system scheduling for action/trigger events - for use in cron jobs) - •Improved MFHD (serials) import script. - •(improved instructions in the read me files and relaxed database constraints) - •SIP2 configurable encoding support.•SIP1 renew-or-checkout support for some 3M equipment which support older SIP protocols.•Updated Linux distribution support.•Automatic update of OpenSRF support files when OpenSRF is upgraded. + •Action/Trigger initiator script. + •(1.6.0.1 includes the default script to initiate system scheduling for action/trigger events - for use in cron jobs) + •Improved MFHD (serials) import script. + •(improved instructions in the read me files and relaxed database constraints) + •SIP2 configurable encoding support.•SIP1 renew-or-checkout support for some 3M equipment which support older SIP protocols.•Updated Linux distribution support.•Automatic update of OpenSRF support files when OpenSRF is upgraded. 
- Features from 1.6.0.0Features from 1.6.0.0 + Features from 1.6.0.0Features from 1.6.0.0 - New features (front end)New features (front end) + New features (front end)New features (front end) - • Added “insert copy above” (CTRL+up) and “insert - copy below” (CTRL+down) functionality in the MARC Editor.•Summary editing in MARC Format for Holdings Data• BibTemplate OPAC templating – Any field from any version of a record that Evergreen can deliver, with or without embedded holdings, + • Added “insert copy above” (CTRL+up) and “insert + copy below” (CTRL+down) functionality in the MARC Editor.•Summary editing in MARC Format for Holdings Data• BibTemplate OPAC templating – Any field from any version of a record that Evergreen can deliver, with or without embedded holdings, is now available for display using a simple template language which is further extended with basic JavaScript. - • Template customization is now supported that allows specific data fields to be pulled from the MARC and displayed in the OPAC.•Examples would be: added author, alternate title, subject links, and URI data. - • BibTemplate OPAC templating – Any field from any version of a record that Evergreen can deliver, with or without embedded holdings, + • Template customization is now supported that allows specific data fields to be pulled from the MARC and displayed in the OPAC.•Examples would be: added author, alternate title, subject links, and URI data. + • BibTemplate OPAC templating – Any field from any version of a record that Evergreen can deliver, with or without embedded holdings, is now available for display using a simple template language which is further extended with basic JavaScript. - • Template customization is now supported that allows specific data fields to be pulled from the MARC and displayed in the OPAC.•Examples would be: added author, alternate title, subject links, and URI data. 
- •Located URIs – Adding an 856$9 containing the short name of a location will restrict search and display of entirely electronic records + • Template customization is now supported that allows specific data fields to be pulled from the MARC and displayed in the OPAC.•Examples would be: added author, alternate title, subject links, and URI data. + •Located URIs – Adding an 856$9 containing the short name of a location will restrict search and display of entirely electronic records (those with no physical copies) to the location named. - •In other words, the ability to restrict record visibility to a specific location or set of - locations in the same way as copies but without creating dummies.•Since there is no physical location, however, this does affect advanced searches wherein the shelving location limiter is used.•(improved instructions in the read me files and relaxed database constraints) - •SRU(search/retrieval via url) and Z39.50 searches can now be scoped to specific locations. - •As of Evergreen 1.6, you can append an optional organization unit shortname for search scoping purposes, and you + •In other words, the ability to restrict record visibility to a specific location or set of + locations in the same way as copies but without creating dummies.•Since there is no physical location, however, this does affect advanced searches wherein the shelving location limiter is used.•(improved instructions in the read me files and relaxed database constraints) + •SRU(search/retrieval via url) and Z39.50 searches can now be scoped to specific locations. + •As of Evergreen 1.6, you can append an optional organization unit shortname for search scoping purposes, and you can also append /holdings if you want to expose the holdings for any returned records. So your zurl could be http://dev.gapines.org/opac/extras/sru/BR1/holdings to limit the search scope to BR1 and its children, and to expose its holdings. 
- •As a benefit of the URI work, Z39.50 now supports a holdings record format.•Improvements in Fixed Field handling within the MARC Editor.•Staff-placed holds for patrons follow patron settings more closely (no longer pull notification preferences from staff settings) – - Patch from Jeff Godin of TADL.•Improved default configuration for LoC Z39.50 target - added support for required truncation specific to LoC.•Added a new default indexing definition for “all subjects” which will return more results when subject searching in the OPAC.•Many new server configuration interfaces for functions such as circulation policies, hold policies, and notifications.•Added time granularity display to Patron Items Out screen in the Staff Client. Due time now displays along with due date.•Added RefWorks (online bibliographic management program) export capability.• Zotero compatability improvements (MODS namespacing). - •For more information on MODS, see this page. - •Ability to import holdings via the standard Record Importer (Vandelay).•Google Book Preview support as added-content•Improvements made to cloned patron search; fixing issues with records not returning due to cloned fields.•Acquisitions Preview includes a sneak peek at the preliminary work for manual funding management, PO creation, cataloging and receiving processes. 
+ •As a benefit of the URI work, Z39.50 now supports a holdings record format.•Improvements in Fixed Field handling within the MARC Editor.•Staff-placed holds for patrons follow patron settings more closely (no longer pull notification preferences from staff settings) – + Patch from Jeff Godin of TADL.•Improved default configuration for LoC Z39.50 target - added support for required truncation specific to LoC.•Added a new default indexing definition for “all subjects” which will return more results when subject searching in the OPAC.•Many new server configuration interfaces for functions such as circulation policies, hold policies, and notifications.•Added time granularity display to Patron Items Out screen in the Staff Client. Due time now displays along with due date.•Added RefWorks (online bibliographic management program) export capability.• Zotero compatability improvements (MODS namespacing). + •For more information on MODS, see this page. + •Ability to import holdings via the standard Record Importer (Vandelay).•Google Book Preview support as added-content•Improvements made to cloned patron search; fixing issues with records not returning due to cloned fields.•Acquisitions Preview includes a sneak peek at the preliminary work for manual funding management, PO creation, cataloging and receiving processes. These are functional but are not intended for insertion into current workflows. This feature was specifically included to solicit feedback from the community on this important feature. - New features (server/administration)New features (server/administration) + New features (server/administration)New features (server/administration) - •Event Triggers – An entirely new subsystem for automatically running arbitrary, user-defined reaction code when presented with an ILS event + •Event Triggers – An entirely new subsystem for automatically running arbitrary, user-defined reaction code when presented with an ILS event defined by the user. 
Notifications, delayed actions, acquisitions, and many other systems will make use of this new infrastructure. - •Ability to set pre-due and overdue e-mail notices from the Staff Client.•Auto-marking items as lost after specific overdue period.•Makes it easier to add new data to notices.•Can be used for generating and creating delays for the sending of hold pickup notices.•These settings are configurable from the Staff Client per branch or globally. - •Formal support for Postgresql 8.3.•Dojo profile build specific to Evergreen, increasing load speed dramatically for the OPAC and Staff Clients.•Staff Client interfaces for defining circulation and hold policies from the Admin menu. - •Please note that this represents a change from previous versions of Evergreen and for new clients it is recommended to + •Ability to set pre-due and overdue e-mail notices from the Staff Client.•Auto-marking items as lost after specific overdue period.•Makes it easier to add new data to notices.•Can be used for generating and creating delays for the sending of hold pickup notices.•These settings are configurable from the Staff Client per branch or globally. + •Formal support for Postgresql 8.3.•Dojo profile build specific to Evergreen, increasing load speed dramatically for the OPAC and Staff Clients.•Staff Client interfaces for defining circulation and hold policies from the Admin menu. + •Please note that this represents a change from previous versions of Evergreen and for new clients it is recommended to use this interface, for. 
- •Formal support for IE8, including a bug fix where titles with the “@” symbol would display as a http link.•Spaces in user names are being deprecated as they can cause authentication failure -CamelCase will be supported from this point forward.•Supercat: added support for returning records in Federal Geographic Data Committee (FGDC) Content Standard for - Digital Geospatial Metadata (CSDGM) format.•Increased the re-shelving-complete process speed; making the “flipping” process from re-shelving to available much faster – - on suggestion from Bill Ott of GRPL.•Reporter fix to the display of ISBN and ISSN in some reports, and in some environments (environments which had newer versions - of Perl database drivers that affected some reports).• Bug fixes for Server Administration interfaces such as hours of operation, and generally improving speed of all the SA interfaces.•Removed Spanish translation set from the build environment as no Spanish translation has been contributed to date.•Internationalization improvements in the default skin; there are less “English-only” strings.•Improved output handling for unAPI services; important for popular add-ons like Zotero.•Improved handling of day-granular circulations, and their interaction with penalties – i.e. For a 7 day - circulating item that is checked out at 9am on Sunday, it is not due until closing on the following Saturday..• Evergreen will notify that printer setups need to be checked at Staff Client upgrade time. 
+ •Formal support for IE8, including a bug fix where titles with the “@” symbol would display as a http link.•Spaces in user names are being deprecated as they can cause authentication failure -CamelCase will be supported from this point forward.•Supercat: added support for returning records in Federal Geographic Data Committee (FGDC) Content Standard for + Digital Geospatial Metadata (CSDGM) format.•Increased the re-shelving-complete process speed; making the “flipping” process from re-shelving to available much faster – + on suggestion from Bill Ott of GRPL.•Reporter fix to the display of ISBN and ISSN in some reports, and in some environments (environments which had newer versions + of Perl database drivers that affected some reports).• Bug fixes for Server Administration interfaces such as hours of operation, and generally improving speed of all the SA interfaces.•Removed Spanish translation set from the build environment as no Spanish translation has been contributed to date.•Internationalization improvements in the default skin; there are less “English-only” strings.•Improved output handling for unAPI services; important for popular add-ons like Zotero.•Improved handling of day-granular circulations, and their interaction with penalties – i.e. For a 7 day + circulating item that is checked out at 9am on Sunday, it is not due until closing on the following Saturday..• Evergreen will notify that printer setups need to be checked at Staff Client upgrade time. @@ -229,63 +229,63 @@ Report errors in this documentation using Launchpad. Chapter 3. Basic Search Report any errors in this documentation using Launchpad. - Chapter 3. Basic SearchChapter 3. Basic SearchAbstractFrom the OPAC home, you can conduct a basic search of all materials owned by all libraries in your Evergreen system.This search can be as simple as typing keywords into the search box and clicking Go! Or, you can make your search more precise by limiting your search by + Chapter 3. 
Basic SearchChapter 3. Basic SearchAbstractFrom the OPAC home, you can conduct a basic search of all materials owned by all libraries in your Evergreen system.This search can be as simple as typing keywords into the search box and clicking Go! Or, you can make your search more precise by limiting your search by fields to search, material type or library location. The Homepage contains a single search box for you to enter search terms. You can get to the Homepage at any time by selecting the Home link from the left-hand - sidebar in the catalogue, or you can enter a search anywhere you see a search box. + sidebar in the catalogue, or you can enter a search anywhere you see a search box. You can select to search by: - • - Keyword—finds the terms you enter anywhere in the + • + Keyword—finds the terms you enter anywhere in the entire record for an item, including title, author, subject, and other information. - • + • Title—finds the terms you enter in the title of - an item. + an item. - • + • Author—finds the terms you enter in the author of - an item. + an item. - • + • Subject—finds the terms you enter in the subject of an item. Subjects are categories assigned to items according to a system such as - the Library of Congress Subject Headings. + the Library of Congress Subject Headings. - • + • Series—finds the terms you enter in the title of - a multi-part series. + a multi-part series. Formats Formats You can limit your search by format: - • + • - Books + Books - • + • - Large Print + Large Print - • + • Audiobooks (books read aloud on CDs or other - media) + media) - • + • Video (VHS tapes, DVDs, and other media) - + - • + • Music (music on CD or other media) - • + • Electronic Resources (databases or other resources available electronically in the library or online) @@ -307,20 +307,20 @@ You can access the Advanced Search by clicking Advanced Search on the catalogue Homepage or search results screen. 
-
+
The available search options are the same as on the Home page. But you may use one or many of them simultaneously. If you want to combine more than three search options, use the Add Search Row button to add more search input rows. Clicking the X button will close the search input row.
Sort CriteriaSort Criteria
-
+
By default, the search results are in order of greatest to least relevance. See Order of Results. In the sort criteria box you may select to order the search results by relevance, title, author, or publication date.
Group Formats and EditionsGroup Formats and Editions
-
+
This checkbox is at the bottom line of Sort Criteria. When it is checked, all formats and editions of the same title are grouped as one result. For example, the DVD and the first and second print editions of Harry Potter and the Chamber of Secrets
@@ -329,7 +329,7 @@
Search LibrarySearch Library
-
+
The current search library is displayed under the Search Library box. By default it is your library. The search returns results for your local library only. If your library system has multiple branches, use the Search
@@ -340,7 +340,7 @@
Limit to AvailableLimit to Available
-
+
This checkbox is at the bottom line of Search Library. Select Limit to Available to limit by item’s current circulation status. Titles without available items in the library will not be displayed.
@@ -348,7 +348,7 @@
Search FilterSearch Filter
-
+
You can filter your search by Item Form, Item Type, Literary Form, Language, Audience, Bib Level and Publication Year. Publication year
@@ -366,7 +366,7 @@
Quick SearchQuick Search
-
+
If you have details on the exact item you wish to search for, use the Quick Search option on the left of the screen. Use the drop-down menu to select to search by ISBN, ISSN, Call Number, LCCN, TCN, or Item
@@ -376,7 +376,7 @@
MARC Expert SearchMARC Expert Search
-
+
If you are familiar with the MARC system, you may search by tag in the MARC Expert Search option on the left of the screen.
Enter the three-digit tag number, the subfield if relevant, and the value or text that corresponds to the tag. For @@ -384,7 +384,7 @@ simultaneously, use the Add Row option. Click Submit to run the search. - + Quick Search and MARC Expert Search scope to the entire catalogue. Unlike keyword, author, and subject searches, they cannot be limited to items in a particular library. The only exception is @@ -402,106 +402,106 @@ subject heading. Evergreen is also forgiving about plurals and alternate verb endings, so if you enter dogs, Evergreen will also find items with dog. - • + • Do not use an AND operator to join search terms. - • - An AND operator is automatically used to join all search terms. So, a search + • + An AND operator is automatically used to join all search terms. So, a search for golden compass will search for entries that contain both golden and compass. - • + • Boolean operators such as and, or, not are not considered special and are searched for like any other word. So, a search for golden and compass will not return the title golden compass. Putting it another way, there are no stop words that are automatically ignored by the search engine. So, a title search for the and or not of (and in - any order) yields a list of titles with those words. + any order) yields a list of titles with those words. - • + • Don’t worry about white space, exact punctuation, or capitalization. - 1. + 1. White spaces before or after a word are ignored. So, a search for golden compass with extra surrounding spaces gives the same results as a search for golden compass. - 2. + 2. A double dash or a colon between words is reduced to a blank space. So, a title search for golden:compass or golden -- compass is equivalent to golden compass. - 3. + 3. Punctuation marks occurring within a word are removed; the exception is _. So, a title search for gol_den com_pass gives no result. - 4. + 4. Diacritical marks, &, or | located anywhere in the search term are removed. Words or letters linked together by .
(dot) are joined together without the dot. So, a search for go|l|den & comp.ass is equivalent to golden compass. - 5. + 5. Upper and lower case letters are equivalent. So, Golden Compass is the same as golden compass. - • + • Enter your search words in any order. So, a search for compass golden gives the same results as a search for golden compass. Adding more search words gives fewer and more specific results. - • + • This is also true for author searches. Both David Suzuki and Suzuki, David will return results for the same author. - • + • Use specific search terms. Evergreen will search for the words you specify, not the meanings, so choose search terms that are likely to appear in an item description. For example, the search luxury hotels will produce more relevant results than nice places to stay. - • + • Search for an exact phrase using double-quotes. For example “golden compass”. - • + • The order of words is important for an exact phrase search. “golden compass” is different than “compass golden”. - • + • White space, punctuation and capitalization are removed from exact phrases as described above. So a phrase retains its search terms and its relative order, but not special characters and not case. - • + • Two phrases are joined by and, so a search for “golden compass” “dark materials” is equivalent to “golden compass” and “dark materials”. - • + • To prevent stemming , use double quotes around a single word or a phrase. So, a search for parenting will also return results for parental but a search for “parenting” will not. - • + • Do not use wildcards. Truncation using wildcards is not supported in Evergreen. So, searching for comp* will not return results for compass. - • + • Exclude a term from the search, using - (minus) or ! (exclamation point). For example, vacations –britain or vacations !britain will search for materials on vacations that do not make reference to Britain. - • + • Two excluded words are joined by and. 
So, a search for !harry !potter is equivalent to !harry and !potter. - • + • A + (plus) leading a term has no role and is removed. So, +golden +compass is equivalent to golden @@ -524,7 +524,7 @@ Chapter 6. Search MethodologyChapter 6. Search Methodology StemmingStemming - + A search for dogs will also return hits with the word dog and a search for parenting will return results with the words parent and parental. @@ -545,12 +545,12 @@ compass because the search does not recognize gold as a stem of golden. - TruncationTruncation - + TruncationTruncation + Truncation is not currently supported in Evergreen. Order of Results Order of Results - + By default, the results are listed in order of relevance, similar to a search engine like Google. The relevance is determined using a number of factors, including how often and where the search terms appear in the item description, and @@ -572,38 +572,38 @@ When performing a search or clicking on the details links, Evergreen constructs a GET request URL with the parameters of the search. The URLs for searches and details in Evergreen are persistent links in that they can be saved, shared and used later. Here is a basic search URL structure: - + [hostname]/opac/[locale]/skin/default/xml/rresult.xml?rt=keyword&tp=keyword& t=[search term]&l=[location id]&d=0 - l Parameterl Parameter - + l Parameterl Parameter + This is the id of the search location. It is an integer and matches the id of the location the user selected in the location drop down menu. This is accompanied by a d parameter which indicates the depth of the location selected. For example, 0 would be the highest level and 1 would represent the next depth level. - rt Parameterrt Parameter - + rt Parameterrt Parameter + The rt parameter in the URL represents the search type and takes one of the following search or request types: - •keyword•title•author•subject•series + •keyword•title•author•subject•series These match the options in the search type drop-down box.
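The character-handling rules above (dashes and colons, in-word punctuation, diacritics, dot-joined letters, case) can be sketched as a small normalization function. This is only an illustrative approximation of the behaviour the tips describe, not Evergreen's actual implementation; the name normalize_query is hypothetical, and operators such as phrase quotes and -/! exclusion are deliberately not modeled.

```python
import re
import unicodedata

def normalize_query(term):
    """Approximate the search-term normalization described in the tips.

    Illustrative sketch only -- not Evergreen code. Phrase quotes and
    the -/! exclusion operators are not handled here.
    """
    # A double dash or a colon between words is reduced to a blank space.
    term = re.sub(r'(--|:)', ' ', term)
    # Diacritical marks, &, or | anywhere in the term are removed.
    term = ''.join(c for c in unicodedata.normalize('NFD', term)
                   if unicodedata.category(c) != 'Mn' and c not in '&|')
    # Words or letters linked together by . (dot) are joined without the dot.
    term = term.replace('.', '')
    # Remaining punctuation within a word is removed; the exception is _
    # (\w keeps underscores, so gol_den stays gol_den).
    term = re.sub(r"[^\w\s]", '', term)
    # Upper and lower case are equivalent; extra white space is ignored.
    return ' '.join(term.lower().split())
```

Under these rules, go|l|den & comp.ass, golden:compass, and Golden -- Compass all normalize to the same query, golden compass, matching the examples above.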
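The persistent-URL structure above can be sketched as a small helper that assembles the documented rt, tp, t, l, and d parameters. The helper is our own illustration, not part of Evergreen; the http scheme, the example hostname, and the en-US locale are assumptions standing in for the [hostname] and [locale] placeholders.

```python
from urllib.parse import urlencode

def build_search_url(hostname, term, location_id=1, depth=0,
                     search_type="keyword", locale="en-US"):
    """Assemble an Evergreen 1.6-style persistent search URL from the
    parameters described above (illustrative helper, not Evergreen code)."""
    params = urlencode({
        "rt": search_type,  # request type: keyword, title, author, subject, series
        "tp": search_type,
        "t": term,          # the search term itself
        "l": location_id,   # id of the search location the user selected
        "d": depth,         # depth of that location: 0 = highest level
    })
    return ("http://%s/opac/%s/skin/default/xml/rresult.xml?%s"
            % (hostname, locale, params))
```

For example, build_search_url("example.org", "golden compass") yields a keyword-search URL ending in rt=keyword&tp=keyword&t=golden+compass&l=1&d=0, which can be saved and shared like any other link.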
- SortingSorting - + SortingSorting + The s parameter sorts the results on one of these criteria. - •pubdate (publication date) - chronological order•title - Alphabetical order•author - Alphabetical order on family name first + •pubdate (publication date) - chronological order•title - Alphabetical order•author - Alphabetical order on family name first The sd indicates the direction to sort - •asc - ascending•desc - descending + •asc - ascending•desc - descending In the absence of s or sd parameter, the search results default to sorting by relevance. - Advanced search (multiple fields)Advanced search (multiple fields) + Advanced search (multiple fields)Advanced search (multiple fields) Uses rt=multi; then prepend search field to the search terms (delimited by a colon) in the t parameter: - ...tp=&t=keyword%3Afools title%3Arush&av=&rt=multi + ...tp=&t=keyword%3Afools title%3Arush&av=&rt=multi ISBN and ISSN search include the following in the URL: - ...rt=isbn&adv=... - ...rt=issn&adv=... + ...rt=isbn&adv=... + ...rt=issn&adv=... Call number search will include: - ...cnbrowse.xml?cn=pr5655... + ...cnbrowse.xml?cn=pr5655... Chapter 8. Search ResultsChapter 8. Search Results @@ -614,14 +614,14 @@ - + The search results are a list of relevant works from the catalogue. If there are many results, they are divided into several pages. At the top of the list, you can see the total number of results and go back and forth between the pages by clicking the double arrow on top or bottom of the list. Here is an example: - + Brief information about the title, such as author, edition, publication date, etc. is @@ -632,17 +632,17 @@ author. If you want to place a hold on the title, click Place Hold beside the format icons. - On the top right corner, there is a Limit to Available checkbox. + On the top right corner, there is a Limit to Available checkbox. Checking this box will filter out those titles with no available copies in the library or libraries at the moment. 
Usually you will see your search results are re-displayed with fewer titles. - The Sort Results dropdown list is beside the Limit to Available + The Sort Results dropdown list is beside the Limit to Available checkbox. Clicking an entry on the list will re-sort your search results accordingly. - Formats and EditionsFormats and Editions - - + Formats and EditionsFormats and Editions + + If you have selected Group Formats and Editions with your search, your search results are grouped by various formats and editions of the same title. Multiple format icons may be lit up. Clicking a title will show you the @@ -650,7 +650,7 @@ format. - + @@ -658,8 +658,8 @@ Related Subjects, Authors, and Series Related Subjects, Authors, and Series - - + + At the left, you may see a list of Related Subjects, Authors, and Series. Selecting one of these links searches the catalogue again using that subject, author, or series to @@ -667,7 +667,7 @@ current search. - + @@ -675,9 +675,9 @@ AvailabilityAvailability - + - + The number of available copies and total copies are displayed in the right-hand columns. If you are using a catalogue inside a library or accessing a library’s online catalogue from its homepage, you will see how many copies are available in the @@ -685,18 +685,18 @@ system you will see an extra column under the library system’s name showing how many copies are available in all branches - + - - + + Viewing a recordViewing a record - + Click on a title to view a detailed record of the title, including descriptive information, location and availability, and options for placing holds. @@ -721,7 +721,7 @@ Placing HoldsPlacing Holds - + Holds can be placed from either the title results or the title details page. If the item is available, it will be pulled from the shelf and held for you. If all copies at your local library are checked out, you will be placed on a waiting list and you will be @@ -747,7 +747,7 @@ You need to log into your account to do it.
- + @@ -757,18 +757,18 @@ Going backGoing back - + When you are viewing a specific record, you can always go back to your title list by clicking the link My Title Results on the left of the page. - + - + If you have selected Group Formats and Editions with your search, your search results are grouped by various formats and editions of the same title under My Search Results. You can always go back to this @@ -777,7 +777,7 @@ - + @@ -795,120 +795,120 @@ First Login Password UpdateFirst Login Password Update - + Patrons are given temporary passwords when new accounts are created, or forgotten passwords are reset by staff. Patrons MUST change their password to something more secure when they login for the first time. Once the password is updated, they will not have to repeat this process for subsequent logins. - 1. + 1. Open a web browser and go to your Evergreen OPAC - 2. + 2. Click My Account - 3. + 3. Enter your Username and Password - • + • By default, your username is your library card number. - • + • Your password is a 4 digit code provided when your account was created. If you have forgotten your password, contact your library to have it reset or use the online tool described in the section called “Password Reset”. - 4. + 4. Click Login. You will be prompted to change your password. - a. + a. Enter your current password. - b. + b. Enter a new password. - c. + c. Enter the new password again. - d. + d. Click Update Password. - e. + e. Click OK. You will be returned to the login screen. - 5. + 5. Enter your Username and new Password - 6. + 6. Your Account Summary page displays. Logging InLogging In - - Logging into your account from the online catalog1. + + Logging into your account from the online catalog1. Open a web browser and navigate to your Evergreen OPAC - 2. + 2. Click My Account - 3. + 3. Enter your Username and Password - •By default, your username is your library card number.•Your password is a 4 digit code provided when your account was created.
If you have forgotten your password, contact your local + •By default, your username is your library card number.•Your password is a 4 digit code provided when your account was created. If you have forgotten your password, contact your local library to have it reset or use the tool described in the section called “Password Reset”. - 4. + 4. Click Login - •At the first login, you will be prompted to change your password.•After updating the password, you must enter your Username and Password again. - 5. + •At the first login, you will be prompted to change your password.•After updating the password, you must enter your Username and Password again. + 5. Your Account Summary page displays To view your account details, click one of the My Account tabs To start a search, enter a term in the search box at the top of the page and click Go! - If using a public computer be sure to log out! + If using a public computer be sure to log out! Password ResetPassword Reset - + Evergreen 1.6.1 introduced a new feature to allow patrons to reset forgotten passwords from the My Account login screen. To reset your password: - 1. + 1. Click on the Forgot your password? link located under the login button - 2. + 2. Fill in the Barcode and User name text boxes. - 3. + 3. A pop-up message should appear indicating that your request has been processed and that you will receive an email with further instructions. - 4. + 4. An email will be sent to the email address you have registered with your Evergreen library. You should click on the link included in the email to open the password reset page. Processing time may vary. - You will need to have a valid email account set up in Evergreen for you to reset your password. Otherwise, you will need to + You will need to have a valid email account set up in Evergreen for you to reset your password. Otherwise, you will need to contact your library to have your password reset by library staff. - 5. + 5.
At the password reset page you should enter the new password in the New password field and re-enter it in the Re-enter new password field. - 6. + 6. Click Submit - 7. + 7. A message should appear on the page indicating that your password has been reset. - 8. + 8. Login to your account with your new password. Account SummaryAccount Summary - + Users can view Staff Notes, home library, address, and phone numbers. They can also change their username, password, and email. Items Checked OutItems Checked Out @@ -916,63 +916,63 @@ Users can manage items currently checked out, view overdue items and see how many renewals they have remaining for a specific item. Items On HoldItems On Hold - + From My Account patrons can manage items currently being requested. Actions include: - • + • Suspend - set a period of time during which the hold will not become active, such as during a vacation - • + • Activate - manually remove the suspension - • + • Set Active Date - specify a date at which the suspension will be lifted - • + • Cancel - remove the hold request Edit options include: - • + • Enable/disable phone notifications - • + • Change telephone number for notification - • + • Enable/disable email notification - • + • Change pick up library - • + • Change expiration date - • + • Suspend - • + • Activate date - To edit items on hold:1. + To edit items on hold:1. Login to My Account, click the Items on Hold tab - 2. + 2. Select the hold to modify - 3. + 3. Click Edit or Actions for Selected Holds - 4. + 4. Select the change to make and follow the instructions. FinesFines - + Clicking on the fines tab will allow you to see your Total Owed, Total Paid and Balance Owed. PreferencesPreferences - + From here you can manage display preferences including: - •Search hits per page - how many items appear on each page of results.•Default Font Size•Default Hold Notification Method - What is the preferred method for being notified for a hold pick up: email or phone.
•Default Search Location - Which is the preferred default location for searching. By default your home library is selected.•Default Search Range - What is the range of your search (e.g. location, library, system, consortium, etc...) + •Search hits per page - how many items appear on each page of results.•Default Font Size•Default Hold Notification Method - What is the preferred method for being notified for a hold pick up: email or phone. •Default Search Location - Which is the preferred default location for searching. By default your home library is selected.•Default Search Range - What is the range of your search (e.g. location, library, system, consortium, etc...) After changing any of these settings, remember to click Save Preference Changes. BookbagsBookbags - + My Bookbags is a feature that allows you to create lists of library materials (books, audiobooks, videos, etc.). These lists create links to records in the catalog, but are otherwise completely private and only accessible by you when logged in to your Account. You have the option to share specific lists with people whom you choose (send them the direct URL), or more generally via RSS feed. Shared bookbags do @@ -980,48 +980,48 @@ You can create as many bookbags as you want. Your bookbags will stay in your account until you delete them. Items remain in bookbags until you remove them. Even if the item record is removed from the catalog, the bookbag entry will remain (but there will be no link to the catalog.) - Create a new Bookbag1. + Create a new Bookbag1. Login to My Account, click My Bookbags - 2. + 2. At Create a new Bookbag, enter the name of the new Bookbag - 3. + 3. Select yes or no for the Share this Bookbag option. - 4. + 4. Click Submit - 5. + 5. Click OK - Add items to a Bookbag1. + Add items to a Bookbag1. Search for an item, open the Title Record - 2. + 2. Open the More Actions... list; click the Bookbag name - 3. + 3. Click OK - Share a Bookbag1. + Share a Bookbag1.
Login to My Account, click My Bookbags. - 2. + 2. Find the Bookbag to share, click Share this Bookbag. - 3. + 3. Click OK. - 4. + 4. Click View to open the list as a webpage. - 5. + 5. Copy and send this URL to selected recipients or embed in another website. - 6. + 6. Click the RSS icon to add the list to an RSS reader. @@ -1032,28 +1032,28 @@ Chapter 10. Simple Self Check Interface Report any errors in this documentation using Launchpad. Chapter 10. Simple Self Check InterfaceChapter 10. Simple Self Check Interface - - + + This section deals with the simple self check front end that comes with Evergreen. For information on setting up a SIP server for communicating with self check hardware, please refer to Setting up a SIP Server. Using the Self Check InterfaceUsing the Self Check Interface - Initializing the self check client.Initializing the self check client. + Initializing the self check client.Initializing the self check client. The selfcheck interface is run through a web browser. Before patrons can use the self check station, a staff member must initialize the interface by logging in. - 1. + 1. Open a web browser and navigate to your self check interface page which is the location of the selfcheck.xml file. By default, the URL will be https://[hostname]/opac/extras/selfcheck/selfcheck.xml, where [hostname] is your Evergreen - host.2. + host.2. Login using a staff username or barcode and password. - Using the interface to check out booksUsing the interface to check out books + Using the interface to check out booksUsing the interface to check out books After a staff user has logged into the self check interface, the interface should be ready for patrons to scan their barcodes and check out books. - 1. - Scan your patron barcode to login2.
+ Scan your books. The item titles should appear below the barcode field as you scan them.3. + Click Done when you are finished. This will print the receipt and log out.4. + Select a printer to print a receipt (if a printer is available). @@ -1078,7 +1078,7 @@ Chapter 11. Using the Staff ClientChapter 11. Using the Staff Client - Logging in to EvergreenLogging in to Evergreen + Logging in to EvergreenLogging in to Evergreen To log in you must first install the Evergreen Staff Client, available for download from @@ -1141,43 +1141,43 @@ NavigationNavigation - TabsTabs + TabsTabs Evergreen uses tabs to display functions. Tabs allow all software functionality to be open in one window. You can have up to 9 tabs open at once and you can have more than one tab of a single function open at the same time. You simply move through the tabs to perform your work. - Keyboard shortcuts for working with tabs:• + Keyboard shortcuts for working with tabs:• Ctrl+T new tab - • + • Ctrl+W   close tab - • + • Ctrl+Shift+W   close all tabs - • + • Ctrl+Tab   tabs forward through open tabs - • + • Ctrl+Shift+Tab   tabs backward through open tabs - + In the example below, the MARC Template tab is active. Click on any open tab to bring that screen to the front. You can also use Ctrl+Tab to move to the required tab - + - + Now the Check Out tab is the active screen. - + Once you are in the selected tab, you can use the drop-down menus or keyboard shortcuts to perform required functions. Menu functions and corresponding keyboard shortcuts will be demonstrated throughout this manual. - Keyboard ShortcutsKeyboard Shortcuts + Keyboard ShortcutsKeyboard Shortcuts Most menu items have keyboard shortcuts that can greatly increase efficiency.
Below is a selected list of commonly used shortcut keys: - KeyFunction + KeyFunction F1 Checkout, or retrieve patron record by barcode F2 @@ -1215,13 +1215,13 @@ Ctrl+V Paste - Copy/PasteCopy/Paste + Copy/PasteCopy/Paste There are several methods of copying and pasting text in Evergreen, depending on where you are in the staff client and the type of information you are copying - 1. - + 1. + Underlined blue text.  Clicking on any of the blue links in the @@ -1230,12 +1230,12 @@ To paste into another location, use Ctrl+V. - - + + - 2. - + 2. + Text displayed in tables.  To copy information from a staff client table, first select the desired @@ -1244,27 +1244,27 @@ - + - + - + Next click the desired information in the popup to copy it to the clipboard - + - 3. - + 3. + Text from catalogue search results.  There is no right-click menu for copying data from staff client search results. To copy the ISBN in the example below, highlight it and click Ctrl+C. To paste into another location use Ctrl+V. - + @@ -1272,13 +1272,13 @@ Customizing the Staff ClientCustomizing the Staff Client - Column PickerColumn Picker + Column PickerColumn Picker - + From many screens and lists, you can click on the column picker icon to change which columns are displayed. - + @@ -1295,53 +1295,53 @@ - + There is an optional toolbar with buttons providing quick access to common staff client functions. When activated the toolbar appears below the menus. - + - + To turn the buttons on or off select Admin (-) → Toggle Button Bar. The buttons can be activated by default for a particular library (see Library Settings for details). - + - Check-boxesCheck-boxes + Check-boxesCheck-boxes - + Most staff client check-boxes are "sticky" -- if you select or deselect them, that status persists. For example, Auto-print, which will print the relevant receipts automatically in certain functions, is sticky. If you select it on one login, it will persist for future logins until you uncheck the box. 
- + - + Fast Item Add is another "sticky" check box that makes it possible to add volume and item records from the MARC editor. - + - Font and SoundFont and Sound + Font and SoundFont and Sound You may change the size of displayed text or turn staff client sounds on and off. These settings are specific to each physical workstation, not the login account. See @@ -1360,159 +1360,159 @@ Report any errors in this documentation using Launchpad. Chapter 12. CirculationChapter 12. CirculationAbstractThis chapter explains the circulation procedures carried out from the staff client. - Patron RecordsPatron Records + Patron RecordsPatron Records Searching PatronsSearching Patrons - •Search one field or combine.•Truncate search terms for more search results.•Include inactive patrons checkbox.•Limit results to patrons in dropdown. + •Search one field or combine.•Truncate search terms for more search results.•Include inactive patrons checkbox.•Limit results to patrons in dropdown. Registering New PatronsRegistering New Patrons - + - •Mandatory fields = Red.•Save and clone user button: the contact info is copied into the next record. Records created using this method - are automatically grouped together with the original record and share the same address, which can only be edited in the original record.•Staff accounts can be added here just like patron accounts. + •Mandatory fields = Red.•Save and clone user button: the contact info is copied into the next record. Records created using this method + are automatically grouped together with the original record and share the same address, which can only be edited in the original record.•Staff accounts can be added here just like patron accounts. Clone User from Existing Group MemberClone User from Existing Group Member - - 1. - Open patron record, click Other.2. - Select Group.3. - Highlight a group member to clone and right click.4. - Select Register a New Group Member by Cloning Selected Patrons.5.
- A Register Patron Clone for Group tab will open displaying the Evergreen User Editor.6. - Enter required patron information.7. - Click Save User.8. - After saving the clone record, the User Editor reverts to another clone template; create additional family/group member records.9. + + 1. + Open patron record, click Other.2. + Select Group.3. + Highlight a group member to clone and right click.4. + Select Register a New Group Member by Cloning Selected Patrons.5. + A Register Patron Clone for Group tab will open displaying the Evergreen User Editor.6. + Enter required patron information.7. + Click Save User.8. + After saving the clone record, the User Editor reverts to another clone template; create additional family/group member records.9. Close the Register Patron Clone for Group tab. Updating Patron InformationUpdating Patron Information - - 1. - Retrieve the patron record2. - Edit3. - Finish then click Save User.4. + + 1. + Retrieve the patron record2. + Edit3. + Finish then click Save User.4. Confirmation message, User updating is successful Extend Account Expiration DateExtend Account Expiration Date - + All patron accounts are set to expire in one year – this allows staff to verify patron contact information annually and update any out-of-date information. - There is no warning that the account will soon expire. + There is no warning that the account will soon expire. Loans are NOT shortened if due date is after the account expiration date. NO loans are possible until the account expiration date is extended. - 1. - Access patron account, open Edit function tab.2. - Click 4. Groups and Permissions 3. - At Account Expiration Date, highlight the year and type the new year4. + 1. + Access patron account, open Edit function tab.2. + Click 4. Groups and Permissions 3. + At Account Expiration Date, highlight the year and type the new year4. Click 7. Finish, click Save User Lost Library CardsLost Library Cards - - 1. - Retrieve the patron record.2. 
- Click Mark Lost button.3. + + 1. + Retrieve the patron record.2. + Click Mark Lost button.3. Finish → Save User. - + A lost card cannot be reinstated (A warning message will display; use the new card to retrieve the user’s record) Resetting a Patron's PasswordResetting a Patron's Password - - 1. - Retrieve the record.2. + + 1. + Retrieve the record.2. Click Reset Button next to password field - + The existing password is not displayed in patron records for security reasons. Barring a PatronBarring a Patron - - 1. - Select 4: Groups and Permissions. Select the Barred checkbox.2. - The Alert Message is required.3. + + 1. + Select 4: Groups and Permissions. Select the Barred checkbox.2. + The Alert Message is required.3. Click Finish → Save User. - Barring a patron from one library bars that patron from all consortium member libraries. + Barring a patron from one library bars that patron from all consortium member libraries. To unbar a patron, uncheck the Barred checkbox and remove the alert message. Barred: Stops patrons from using their library cards; alerts the staff that the patron is banned/barred from the library. The "check-out" functionality is disabled for barred patrons (NO option to override – the checkout window is unusable and the bar must be removed from the account before the patron is able to checkout items).  These patrons may still log in to the OPAC to view their accounts. Blocked: Often, these are system-generated blocks on patron accounts.  Some examples: - •Patron exceeds fine threshold•Patron exceeds max checked out item threshold. + •Patron exceeds fine threshold•Patron exceeds max checked out item threshold. A notice appears when a staff person tries to checkout an item to blocked patrons, but staff may be given permissions to override blocks. Patron AlertsPatron Alerts - + There are two types of Patron Alerts: - •System-generated alerts: once the cause is resolved (e.g. 
patron's account has been renewed), the message will disappear automatically. - •View from Messages or Other → Display Alerts and + •View from Messages or Other → Display Alerts and Messages - • + • Staff-generated alerts: must be removed manually. – Yellow background in summary To insert an alert: - 1. - Select Edit → Groups and Permissions2. - Alert Message field.3. + 1. + Select Edit → Groups and Permissions2. + Alert Message field.3. Finish and Save User. To remove an alert: - 1. - Click the Clear button under the Alert Message box.2. + 1. + Click the Clear button under the Alert Message box.2. Save the record. A notice appears when a staff person tries to checkout an item to blocked patrons, but staff may be given permissions to override blocks. - Patron NotesPatron Notes + Patron NotesPatron Notes A See Notes message appears Notes are strictly communicative and may be made visible to the patron, via their account on the OPAC. - To insert/remove a note:1. - Open a patron record, click Other2. - Select Notes.3. - Click Add New Note.4. - Select if notes will be visible for staff only, or visible to the patron when logged into My Account in the OPAC.5. - Add note Title and content.6. - Click Add Note7. - Click OK8. + To insert/remove a note:1. + Open a patron record, click Other2. + Select Notes.3. + Click Add New Note.4. + Select if notes will be visible for staff only, or visible to the patron when logged into My Account in the OPAC.5. + Add note Title and content.6. + Click Add Note7. + Click OK8. To delete a note, go to Other → Notes and use the Delete This Note button under each note. Merging Patron RecordsMerging Patron Records - - Merging patron Records From the patron search screen:Once two records have been merged, the notes, bills, holds and outstanding items under the non-lead record - are brought to the lead record.
Staff-inserted alert messages are not transferred.1. - Search by the terms shared by the two records2. - Select the two records to merge by pressing down the CTRL key and clicking each record. 3. - Click Merge Patrons.4. + + Merging patron Records From the patron search screen:Once two records have been merged, the notes, bills, holds and outstanding items under the non-lead record + are brought to the lead record. Staff-inserted alert messages are not transferred.1. + Search by the terms shared by the two records2. + Select the two records to merge by pressing down the CTRL key and clicking each record. 3. + Click Merge Patrons.4. Select the record you want to keep by checking the radio button Lead Record above the appropriate record. - 5. + 5. Click the Merge Patrons button. - Merging patron records from the patron group screen:The merged record will still show under group members. Both members point to the same patron - record.1. - Retrieve one of the two patron records you want to merge. Go to Other → Group.2. + Merging patron records from the patron group screen:The merged record will still show under group members. Both members point to the same patron + record.1. + Retrieve one of the two patron records you want to merge. Go to Other → Group.2. The patron record is displayed as a group member. Choose Action → Move another patron to this - patron group.3. - At the prompt, scan or type the patron's barcode. Click OK.4. + patron group.3. + At the prompt, scan or type the patron's barcode. Click OK.4. Confirm the move by clicking the Move button on top of the screen. Click OK on the confirmation pop-up window. - 5. + 5. Both records are displayed as group members. Select both records by pressing the CTRL key and clicking each - record. Choose Action → Merge Selected Patrons.6. + record. Choose Action → Merge Selected Patrons.6. The merging records window pops up. Choose the lead record and continue to merge records as described in Step 4. 
Circulating ItemsCirculating Items - + Regular circulation: circulation of items in the regular collection. Pre-cataloged circulation: circulation of items that have a barcode but have not yet been cataloged. These items may be checked out and then sent to cataloging when returned. @@ -1520,143 +1520,143 @@ Check Out (F1)Check Out (F1) - To check out regular items:1. - Click Check Out button or hit F1 to access Retrieve Patron by Barcode2. - Scan the patron barcode3. - Patron Account opens to the Check Out function tab4. - Scan or enter the item barcode. Click Submit or hit Enter (for manual entries).5. - Continue to scan barcodes until all items are charged.6. + To check out regular items:1. + Click Check Out button or hit F1 to access Retrieve Patron by Barcode2. + Scan the patron barcode3. + Patron Account opens to the Check Out function tab4. + Scan or enter the item barcode. Click Submit or hit Enter (for manual entries).5. + Continue to scan barcodes until all items are charged.6. When finished, click Done to generate a receipt or to exit patron record, if not printing slip receipts. Pre-cataloged ItemsPre-cataloged Items - + Pre-cat items are those items that have yet to be added to the database or that have barcode labels, but are not attached to an existing bibliographic record. - ONLY use Pre-Cat Checkout as a last resort, such as when a patron brings the item to the desk from the shelf and MUST have it that day. Otherwise, + ONLY use Pre-Cat Checkout as a last resort, such as when a patron brings the item to the desk from the shelf and MUST have it that day. Otherwise, ask the patron to wait until you can have the item correctly processed. - Checking out pre-cataloged items from the check out screen:1. - Scan the item barcode.2. - An alert will appear stating: Mis-scan or non-cataloged item.3. - To continue with check out, click Pre-cataloged.4. - Enter title and author information and click Checkout.5. 
+ Checking out pre-cataloged items from the check out screen:1. + Scan the item barcode.2. + An alert will appear stating: Mis-scan or non-cataloged item.3. + To continue with check out, click Pre-cataloged.4. + Enter title and author information and click Checkout.5. Item is added to the list of Check Outs - Checking in pre-cataloged itemsThe item MUST be routed to your holdings maintenance staff to be added to the database before further - check outs.1. - Scan the item barcode.2. - An alert will appear stating: "This item needs to be routed to Cataloging"3. - Click OK.4. + Checking in pre-cataloged itemsThe item MUST be routed to your holdings maintenance staff to be added to the database before further + check outs.1. + Scan the item barcode.2. + An alert will appear stating: "This item needs to be routed to Cataloging"3. + Click OK.4. The item is added to the list of Check Ins, marked with: [barcode number] needs to be cataloged, Route To location = Cataloging and Status = Cataloging Non-cataloged ItemsNon-cataloged Items - + Non-cataloged items may be more familiar as Ephemeral items – they are those items that libraries do not wish to catalog, but do wish to track for circulation statistics. Items are checked out with a due date but when the due date expires, the items disappear from the patron's record. No fines accrue. Circulation statistics are collected. - Checking Out Non-cataloged Items From the Check Out screen:1. + Checking Out Non-cataloged Items From the Check Out screen:1. Click Check Out button or hit F1 to access Retrieve - Patron by Barcode.2. - Scan patron barcode.3. - The Patron Account opens to Check Out function tab.4. - Click on Barcode to open the non-cataloged items selection list.5. + Patron by Barcode.2. + Scan patron barcode.3. + The Patron Account opens to Check Out function tab.4. + Click on Barcode to open the non-cataloged items selection list.5. 
Click the type of item, such as Paperback Book; the box for the barcode will become grayed out and labeled - Non-cataloged.6. - Click Submit.7. - In the pop-up, enter the number of items being checked out.8. - Click OK.9. + Non-cataloged.6. + Click Submit.7. + In the pop-up, enter the number of items being checked out.8. + Click OK.9. The items are added to the Check Out list with a normal due date. Non-cataloged items do not appear in the list of items out unless you select that option. Click OK. Due DatesDue Dates - + Circulation periods are pre-set. When items are checked out, due dates are automatically calculated and inserted into circulation records if the Due Date is set Normal on the Check Out screen. Different due dates may be set to override this circulation period. This process will allow staff to set a non-standard loan period prior to scanning the item in Check Out. - 1. - Click Check Out button or hit F1 to Retrieve Patron.2. - Scan the patron barcode.3. - Scan the item barcode4. + 1. + Click Check Out button or hit F1 to Retrieve Patron.2. + Scan the patron barcode.3. + Scan the item barcode4. In the box labeled Normal, select a pre-set loan period from the list; OR - Highlight Normal and type a specific date in YYYY-MM-DD format5. - The item is checked out with the special due date.6. + Highlight Normal and type a specific date in YYYY-MM-DD format5. + The item is checked out with the special due date.6. The special due date applies to all subsequent items until it is changed or the patron record is exited. Check In (F2)Check In (F2) - - Regular check in:1. + + Regular check in:1. Click Check In button or hit F2 to open Item Check In tab. - 2. - Scan item barcode.3. - Continue to scan barcodes until all items are discharged.4. + 2. + Scan item barcode.3. + Continue to scan barcodes until all items are discharged.4. Close tab when done. Backdated Check InBackdated Check In - + Used for checking items in from bookdrops or for unexpected closings. - 1. 
- Click the Check In button or hit F2.2. - Enter backdated date in the Effective Date field (YYYY-MM-DD format).3. + 1. + Click the Check In button or hit F2.2. + Enter backdated date in the Effective Date field (YYYY-MM-DD format).3. Click outside of the Effective Date field: the top green bar changes to red. The new effective - date displays at the top of the window.4. - Scan items.5. + date displays at the top of the window.4. + Scan items.5. When finishing backdated check-in, change the Effective Date back to the current date or close tab. Renewal and Editing an Item's Due DateRenewal and Editing an Item's Due Date - + Checked-out items can be renewed according to library policy. The new due date is calculated from the renewal date. Existing loans may be extended to a specific date by editing the due date. - Renewing items1. - Retrieve the patron record.2. - Items Out screen.3. - Select item(s) to renew. 4. - Select Actions for Selected Items → Renew.5. - To renew all items in the account, click Renew All.6. + Renewing items1. + Retrieve the patron record.2. + Items Out screen.3. + Select item(s) to renew. 4. + Select Actions for Selected Items → Renew.5. + To renew all items in the account, click Renew All.6. To view the new due date, click Refresh. Renewal may also be done from the Items Status screen. See the section called “Item Status (F5)”. - Editing the due date of items:1. - From the patron record, open Items Out tab2. - Highlight the item, right click, and select Edit Due Date3. - To update multiple items highlight the first item, press and hold Ctrl, highlight additional items4. - In the pop-up, enter the new due date.5. - Click OK.6. + Editing the due date of items:1. + From the patron record, open Items Out tab2. + Highlight the item, right click, and select Edit Due Date3. + To update multiple items highlight the first item, press and hold Ctrl, highlight additional items4. + In the pop-up, enter the new due date.5. + Click OK.6. 
Click Refresh to update the list.
 - Select multiple items by pressing down the CTRL key and clicking each item to edit.
 - Editing the due date does not affect renewal count.
 + Select multiple items by pressing down the CTRL key and clicking each item to edit.
 + Editing the due date does not affect renewal count.
 Marking Items Lost and Claimed ReturnedMarking Items Lost and Claimed Returned
 -
 - To mark items as lost:1.
 - Retrieve the patron record.2.
 - Click Items Out tab.3.
 - Select appropriate item(s).4.
 - Select Actions for Selected Items → Mark Lost (by Patron).5.
 +
 + To mark items as lost:1.
 + Retrieve the patron record.2.
 + Click Items Out tab.3.
 + Select appropriate item(s).4.
 + Select Actions for Selected Items → Mark Lost (by Patron).5.
 Refresh to reflect changes. Lost item(s) display in the Lost/Claimed Returned/Long Overdue field.
 - Marking an item lost will automatically bill the patron the replacement cost of the item, plus a
 + Marking an item lost will automatically bill the patron the replacement cost of the item, plus a
 processing fee, as determined by local policy. If the cost is 0.00, a charge may be manually added to the bill. See
 the section called “Adding New Grocery Bills” for details. If the lost item is returned, the bill and payment (if bill has been paid) will not be
 cancelled/refunded automatically. These bills must be dealt with manually, as per local policy.
 - Marking items as Claimed Returned:1.
 - Retrieve the patron record.2.
 - Click Items Out.3.
 - Select item(s).4.
 + Marking items as Claimed Returned:1.
 + Retrieve the patron record.2.
 + Click Items Out.3.
 + Select item(s).4.
 Right click, and select Mark Claimed Returned. To update multiple items highlight the first item,
 - press and hold Ctrl, highlight additional items and select Mark Claimed Returned.5.
 - Enter a return date (YYYY-MM-DD format) and click OK.6.
 + press and hold Ctrl, highlight additional items and select Mark Claimed Returned.5.
+ Enter a return date (YYYY-MM-DD format) and click OK.6. The “Claimed Returned” item will display in the Lost/Claimed Returned/Long Overdue field. - + If the item is overdue and the claims returned date is before the original due date, the fines disappear. If the item is overdue and the claims returned date is after the due date, the fines remain. If you do not enter the date claimed returned, the item is moved to the Claimed returned list, but the fines are not stopped. @@ -1665,46 +1665,46 @@ There are no alerts indicating claims returned items. - In-house Use (F6)In-house Use (F6) + In-house Use (F6)In-house Use (F6) May be used to record in-house use for both cataloged and non-cataloged items. - 1. - Select Circulation → Record-In House Use2. + 1. + Select Circulation → Record-In House Use2. Cataloged items: Enter item barcode. When recording more than one use of an item, edit the - number in the # of uses box.3. + number in the # of uses box.3. Non-cataloged items: choose the appropriate item from the dropdown menu in the Barcode box and Submit. The statistics on in-house use are separated from circulation statistics. In-house use count of cataloged items is not included in the items' total use count. - Item Status (F5)Item Status (F5) + Item Status (F5)Item Status (F5) Many functions may be performed from the Item Status screen. This section will cover circulation-related functions: checking item status, viewing past circulations, inserting item alert messages, and marking items missing or damaged. - Checking an item's status:1. - Select Search → Search for copies by Barcode or Circulation → Show Item Status by Barcode.2. - Enter item barcode.3. + Checking an item's status:1. + Select Search → Search for copies by Barcode or Circulation → Show Item Status by Barcode.2. + Enter item barcode.3. Current status of the item displays, along with selected other fields. (Use the column picker to choose which fields to view.) 
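Several of the preceding screens (backdated check-in Effective Date, claimed-returned return date, special due dates) all expect dates in YYYY-MM-DD format. A quick way to sanity-check such a value, as a standalone illustration in plain Python (not part of Evergreen):

```python
from datetime import datetime

def is_valid_evergreen_date(text: str) -> bool:
    """Return True only for a real calendar date written as YYYY-MM-DD."""
    try:
        datetime.strptime(text, "%Y-%m-%d")
        return True
    except ValueError:
        return False

print(is_valid_evergreen_date("2011-03-27"))  # True
print(is_valid_evergreen_date("03/27/2011"))  # False (wrong format)
print(is_valid_evergreen_date("2011-02-30"))  # False (not a real date)
```

Using `strptime` rather than a regular expression also rejects impossible dates such as February 30.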
- If an item's status is Available, the displayed due date refers to the previous circulation's due date. + If an item's status is Available, the displayed due date refers to the previous circulation's due date. - Viewing Past Circulations:1. - Retrieve an item (see above).2. - Select Actions for Selected Items → Show Last Few Circulations.3. - The item’s recent circulation history displays.4. - To retrieve the last patron to circulate the item, select Retrieve Last Patron.5. + Viewing Past Circulations:1. + Retrieve an item (see above).2. + Select Actions for Selected Items → Show Last Few Circulations.3. + The item’s recent circulation history displays.4. + To retrieve the last patron to circulate the item, select Retrieve Last Patron.5. Patron record will display in a new Tab. Past circulations can also be retrieved from a patron's Items Out screen. Marking items damaged or missingMarking items damaged or missing - - - 1. - Retrieve the item.2. - Select the item. 3. + + + 1. + Retrieve the item.2. + Select the item. 3. Select Actions for Selected Items → Mark Item Damaged or Mark Item Missing. This procedure also allows items to be checked in or renewed through the Check in Items and Renew Items options on the @@ -1712,22 +1712,22 @@ Item alertsItem alerts - + The Edit Item Attributes function on Actions for Selected Items allows editing of item records, such as inserting item alerts. - 1. - Retrieve record.2. - Highlight the item.3. - Select Actions for Selected Items → Edit Item Attributes.4. - The item record displays in the Copy Editor.5. - Click Alert Message in the Miscellaneous column. 6. - Type in the message and click Apply.7. + 1. + Retrieve record.2. + Highlight the item.3. + Select Actions for Selected Items → Edit Item Attributes.4. + The item record displays in the Copy Editor.5. + Click Alert Message in the Miscellaneous column. 6. + Type in the message and click Apply.7. Click Modify Copies and confirm. 
Bills and PaymentsBills and Payments - + Circulation vs. Grocery BillsCirculation vs. Grocery Bills There are two types of bills in Evergreen: Circulation bills and Grocery bills. @@ -1738,22 +1738,22 @@ Making PaymentsMaking Payments - - 1. - Retrieve the patron record.2. - Bills3. + + 1. + Retrieve the patron record.2. + Bills3. When bills are paid, the money applied starts at the top of the list of checked-off bills. To pay a specific bill, uncheck the other boxes. - (Note the presence of the Uncheck All and Check All options.)4. - Select a payment type.5. - Enter the amount of payment in the Payment received field.6. - Apply Payment.7. + (Note the presence of the Uncheck All and Check All options.)4. + Select a payment type.5. + Enter the amount of payment in the Payment received field.6. + Apply Payment.7. The patron’s bill screen and owed balance will update. - Items marked with red are still checked out. It is possible for a patron to pay a bill while the item is still out and accruing fines. - You may choose to annotate payment and fill in resulting text box according to library policy. + Items marked with red are still checked out. It is possible for a patron to pay a bill while the item is still out and accruing fines. + You may choose to annotate payment and fill in resulting text box according to library policy. Making ChangeMaking Change - + Change will be calculated if the payment amount is over the selected bill amount. After typing in a payment amount, click into the =Change field. The change amount will display. @@ -1763,133 +1763,133 @@ Void clears all history of the bill, while forgive retains the history. Forgiving BillsForgiving Bills - - 1. - Retrieve the patron record.2. - Choose forgive as the payment type 3. - Enter the amount to be forgiven. 4. + + 1. + Retrieve the patron record.2. + Choose forgive as the payment type 3. + Enter the amount to be forgiven. 4. Apply Payment. 
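The Making Change behavior described above is simple arithmetic: when the payment received exceeds the total of the checked-off bills, the difference displays in the Change field. As an illustrative sketch (not Evergreen code; function and parameter names are hypothetical):

```python
# Illustrative model of the Making Change calculation described above
# (NOT Evergreen's implementation).
def change_due(payment: float, selected_bills: list) -> float:
    """Change owed when the payment exceeds the total of the
    checked-off bills; never negative."""
    owed = sum(selected_bills)
    return round(max(payment - owed, 0.0), 2)

print(change_due(10.00, [3.50, 2.25]))  # 4.25
print(change_due(5.00, [5.00]))         # 0.0
```

Real point-of-sale code would use a decimal type rather than floats for money; floats are used here only to keep the sketch short.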
Voiding BillsVoiding Bills - + Bills under one transaction are grouped in one bill line. Bills may be voided in part or in whole. - 1. - Click Void All Billings2. + 1. + Click Void All Billings2. confirm. - To void a partial amount:1. - Click Full Details for the transaction.2. - The bill details screen displays.3. - Select the bill to void.4. - Void Selected Billings.5. + To void a partial amount:1. + Click Full Details for the transaction.2. + The bill details screen displays.3. + Select the bill to void.4. + Void Selected Billings.5. Confirm. Adding New Grocery BillsAdding New Grocery Bills - + A grocery bill can be added as a new bill or to an existing bill. - To add as a new bill1. - Retrieve the patron record. 2. - Select Bills.3. - Click Bill Patron.4. - Choose appropriate billing type from the drop down menu. (Grocery is the only available transaction type.)5. - Enter the Amount and Note (as required).6. + To add as a new bill1. + Retrieve the patron record. 2. + Select Bills.3. + Click Bill Patron.4. + Choose appropriate billing type from the drop down menu. (Grocery is the only available transaction type.)5. + Enter the Amount and Note (as required).6. Submit this Bill and confirm. - To add bill to an existing bill line:1. - Select Bills.2. - Click Add Billing at the bottom of the correct bill line.3. + To add bill to an existing bill line:1. + Select Bills.2. + Click Add Billing at the bottom of the correct bill line.3. Choose appropriate billing type from the drop down menu. (Grocery - is the only available transaction type.)4. - Enter the Amount and Note (as required).5. - Submit this Bill and confirm.6. + is the only available transaction type.)4. + Enter the Amount and Note (as required).5. + Submit this Bill and confirm.6. The Money Summary will adjust accordingly. Bill HistoryBill History - + The Bill History view includes specific details about the item as well as information about the bill and payments. - To view a patron’s bill history:1. 
- From the patron record, open the Bills tab2. - Click History.3. - The Bill History window opens.4. - Highlight a bill in the Bill History pane to view its Item Summary.5. + To view a patron’s bill history:1. + From the patron record, open the Bills tab2. + Click History.3. + The Bill History window opens.4. + Highlight a bill in the Bill History pane to view its Item Summary.5. For more information, select a bill and click Full Details. - Items may be deleted from the catalog even if a charge for that item is still attached to the patron's record. The charge will remain on the patron's account + Items may be deleted from the catalog even if a charge for that item is still attached to the patron's record. The charge will remain on the patron's account after the deletion. RefundRefund - + Sometimes paid bills need to be voided, such as when lost and paid items are returned. A negative balance may be created once such bills are voided. To close such bills, staff may refund the balance amount or add a bill entry. - 1. + 1. To refund, click Refund in the billing line on Bills screen. The amount shows in Pay Bill → Change box. - 2. + 2. Click Apply Payment. A receipt will be printed. - Refund button will automatically show up once a bill has a negative balance. + Refund button will automatically show up once a bill has a negative balance. Refunds are reflected in the Cash Report. HoldsHolds - - Items may be deleted from the catalog even if a charge for that item is still attached to the patron's record. The charge will remain on the patron's account + + Items may be deleted from the catalog even if a charge for that item is still attached to the patron's record. The charge will remain on the patron's account after the deletion. Viewing HoldsViewing Holds - - 1. + + 1. Under Actions for this Record (Alt+A), select View Holds (Alt+S). You can do this from any record view. You do not have to be in - Holdings Maintenance.2. + Holdings Maintenance.2. 
The View Holds screen opens.
 Placing HoldsPlacing Holds
 -
 +
 Holds may be placed by staff through the staff client or by patrons through the OPAC. This chapter explains
 placing holds through the staff client, which can be done from several different places.
 Holds LevelsHolds Levels
 -
 +
 Evergreen has four hold levels. Library staff may place holds at all four levels, while patrons may only place
 Meta-record and Title-level holds.
 Table 12.1. Hold Levels Explained:
 Hold Level | Abbreviation | How To | Used By | Hold Links To
 Meta-Record | M | Click Place Hold next to the title. From the Holds Confirmation Screen, click Advanced Hold Options and select other applicable formats | Patron or staff | Group of records in different formats (book, video, audiobook, etc.) with the same title.
 Title Record | T | Click Place Hold next to the title | Patron or staff | A single MARC (title) record
 Volume | V | Click Place Hold on any item in the holdings list (next to the call number) | Staff | A call-number-specific volume record
 Copy | C | Click Details to view the barcode. Select Place Hold (next to the barcode) | Staff | An item barcode
 - Meta-Record holds: If you select formats as being Acceptable Alternative Formats, the patron’s hold will be filled with the first
 + Meta-Record holds: If you select formats as being Acceptable Alternative Formats, the patron’s hold will be filled with the first
 available item. If Books is selected, for instance, a paperback edition could fill the hold, even when the hold is
 placed on the hardback record. If there are many different records for the same item, books attached to other records
 could fill the hold, so this may speed hold fulfillment. If Audiobooks is selected, the patron could also receive the
 audiobook if that is the first available version of the item. If audiobooks are selected, the patron could receive a
 cassette or CD version if Evergreen libraries own both.
 - Placing holds from catalog records:1.
 - Retrieve the desired title record (Search → Search the Catalog).2.
- Scan or type patron’s barcode into the Enter recipient barcode field. Click Submit.3.
 - Click on an entry to display its summary.4.
 + Placing holds from catalog records:1.
 + Retrieve the desired title record (Search → Search the Catalog).2.
 + Scan or type patron’s barcode into the Enter recipient barcode field. Click Submit.3.
 + Click on an entry to display its summary.4.
 Edit the patron hold notification and expiration date fields as required. (A default hold expiration date will
 - display if the library has set up a default holds expiration period in its library settings.)5.
 - Select Advanced Options to create a meta-level hold.6.
 + display if the library has set up a default holds expiration period in its library settings.)5.
 + Select Advanced Options to create a meta-level hold.6.
 Place Hold and confirm.
 - Uncaptured holds will not be targeted after their expiration dates. If the Suspend this Hold checkbox is selected, the hold will be suspended and will not
 + Uncaptured holds will not be targeted after their expiration dates. If the Suspend this Hold checkbox is selected, the hold will be suspended and will not
 be captured until reactivated.
 - Placing holds from patron records:1.
 - Open the patron record.2.
 - Click Holds3.
 - Click Place Hold (top left corner).4.
 - The Advanced Search interface opens within the Holds pane.5.
 - Enter item search criteria and click Submit Search.6.
 - Locate the desired item in the Title Results list and click Place hold7.
 + Placing holds from patron records:1.
 + Open the patron record.2.
 + Click Holds3.
 + Click Place Hold (top left corner).4.
 + The Advanced Search interface opens within the Holds pane.5.
 + Enter item search criteria and click Submit Search.6.
 + Locate the desired item in the Title Results list and click Place hold7.
+ The patron's account information will retrieve automatically.8.
 + Verify contact methods and pick up location.9.
 + Set the notification and expiration date fields as required.10.
 Click Place Hold and confirm.
 Multiple holds may be placed at one time. Select Holds to return to the Holds screen. Select Refresh to
 reflect newly placed holds.
@@ -1898,158 +1898,158 @@
 Managing HoldsManaging Holds
 -
 +
 Holds may be cancelled at any time by staff or patrons. Before holds are captured, staff or patrons may:
 - •Suspend or set holds as inactive for a period of time without losing the hold queue position;•Activate suspended holds;•Edit the hold notification method, pick-up location, expiration date, or activation date;
 + •Suspend or set holds as inactive for a period of time without losing the hold queue position;•Activate suspended holds;•Edit the hold notification method, pick-up location, expiration date, or activation date;
 Staff can edit holds from patron records or title records. Patrons may edit holds from their OPAC account.
 - Managing holds in patron records:1.
 - Retrieve the patron record.2.
 - Select Holds.3.
 - Highlight the appropriate hold record.4.
 - Select Actions for Selected Items.5.
 + Managing holds in patron records:1.
 + Retrieve the patron record.2.
 + Select Holds.3.
 + Highlight the appropriate hold record.4.
 + Select Actions for Selected Items.5.
 Manage the hold by choosing an action on the list.
 - Captured holds with statuses of On Hold Self or Ready for Pickup can be cancelled by staff or patrons. The status
 + Captured holds with statuses of On Hold Self or Ready for Pickup can be cancelled by staff or patrons. The status
 of these items will not change until they are checked in.
 - Managing holds in title records:1.
 - Retrieve and display the appropriate title record through the catalog.2.
 - Choose Actions for this Record → View Holds.3.
 - By default, only holds with the pickup location of your library are displayed.4.
 - Highlight the hold(s) to edit.
5. + Managing holds in title records:1. + Retrieve and display the appropriate title record through the catalog.2. + Choose Actions for this Reord → View Holds.3. + By default, only holds with the pickup location of your library are displayed.4. + Highlight the hold(s) to edit. 5. Click Actions for Selected Holds and the appropriate action. - Holds may be sorted on the View Holds screen. Click Request Date to find the position of a patron in the hold queue. Use the column + Holds may be sorted on the View Holds screen. Click Request Date to find the position of a patron in the hold queue. Use the column picker to display patron barcodes and names. Columns may be saved for a login using the Save Columns button. Transferring HoldsTransferring Holds - - 1. - Open the record you need to transfer the hold from in one tab and the record you need to transfer the hold to in another tab.2. - View the holds on the record where the hold is currently.3. + + 1. + Open the record you need to transfer the hold from in one tab and the record you need to transfer the hold to in another tab.2. + View the holds on the record where the hold is currently.3. You will copy the patron barcode of the hold you need to move. Select Patron barcode in the column picker. Then right click on the - line you need, and select Copy to Clipboard4. - Click on the patron barcode. Make sure you do not click on the item barcode if it is in your box.5. - A box will open up telling you what has been copied to the clipboard.6. - Click OK or press Enter.7. - You can now use this patron barcode to place a hold.8. - Go to the tab where you have opened the record where you need to move the hold.9. + line you need, and select Copy to Clipboard4. + Click on the patron barcode. Make sure you do not click on the item barcode if it is in your box.5. + A box will open up telling you what has been copied to the clipboard.6. + Click OK or press Enter.7. + You can now use this patron barcode to place a hold.8. 
+ Go to the tab where you have opened the record where you need to move the hold.9. Then you will need to cancel the hold on the first record. Canceling HoldsCanceling Holds - - 1. - View the holds for the item.2. - Highlight the hold you need to cancel.3. - Click Actions for Selected Holds (Alt+S)4. - Select Cancel Hold (Alt+C)5. - A Window will open asking if you are sure you wish to cancel the hold.6. - If it is the correct hold, click Yes (Alt+Y).7. + + 1. + View the holds for the item.2. + Highlight the hold you need to cancel.3. + Click Actions for Selected Holds (Alt+S)4. + Select Cancel Hold (Alt+C)5. + A Window will open asking if you are sure you wish to cancel the hold.6. + If it is the correct hold, click Yes (Alt+Y).7. The window will close, and the hold will disappear from the list. Retargeting HoldsRetargeting Holds - + Holds need to be retargeted whenever a new item is added to a record, or after some types of item status changes, for instance when an item is changed from On Order to In Process. The system does not automatically recognize the newly added items as available to fill holds. This also needs to be done if items marked as Damaged or Missing or set to other non-circulating statuses are once again made available for circulation. - 1. - View the holds for the item.2. + 1. + View the holds for the item.2. highlight all the holds for the record, which have a status of Waiting for Copy. If there are a lot of holds, it may be helpful to sort the - holds by Status.3. - Click on the head of the status column.4. + holds by Status.3. + Click on the head of the status column.4. Under Actions for Selected Holds (Alt+S) select Find Another - Target (Alt+T)5. - A window will open asking if you are sure you would like to reset the holds for these items.6. + Target (Alt+T)5. + A window will open asking if you are sure you would like to reset the holds for these items.6. Click Yes (Alt+Y). 
Nothing may appear to happen, or if you are retargeting
 - a lot of holds at once, your screen may go blank or seem to freeze for a moment while the holds are retargeted.7.
 + a lot of holds at once, your screen may go blank or seem to freeze for a moment while the holds are retargeted.7.
 When the screen refreshes, the holds will be retargeted; the system will now recognize the new items and items
 with a new status as available for holds.
 Holds Pull ListHolds Pull List
 -
 +
 Holds may have one of three statuses: Waiting for Copy, Waiting for Capture, or Ready for Pickup.
 Waiting-for-copy: all copies are checked out or otherwise unavailable.
 Waiting-for-capture: an available copy is assigned to the hold. The item displays on the Holds Pull List. Staff
 must retrieve and capture the hold.
 Ready-for-pickup: the hold has been captured and is waiting for patron pickup.
 - To retrieve the holds pull list:1.
 - Select Circulation → Pull List for Hold Requests.2.
 - The Holds Pull List displays. 3.
 - Sort by clicking the column labels (e.g. Call Number).4.
 + To retrieve the holds pull list:1.
 + Select Circulation → Pull List for Hold Requests.2.
 + The Holds Pull List displays. 3.
 + Sort by clicking the column labels (e.g. Call Number).4.
 To print, click Print Page on the top right of the screen.
 - The Holds Pull List is updated constantly. Once an item on the list is no longer available or a hold on the list is captured, the items will disappear
 + The Holds Pull List is updated constantly. Once an item on the list is no longer available or a hold on the list is captured, the items will disappear
 from the list.
 Capturing HoldsCapturing Holds
 -
 +
 Holds may be captured when a checked-out item is returned (checked in) or when an item on the Holds Pull List is retrieved and
 captured. When a hold is captured, a hold slip may be printed and an email notification will be sent out, if enabled
 for the hold.
 - 1.
 - Select Circulation → Capture Holds.2.
- Scan or type barcode and click Submit.3. + 1. + Select Circulation → Capture Holds.2. + Scan or type barcode and click Submit.3. A hold slip prints automatically. - Holds can also be captured on the Circulation → Check In Items screen. + Holds can also be captured on the Circulation → Check In Items screen. If the Auto-Print Hold and Transit Slips checkboxes are selected, hold slips will print automatically. Holds Shelf ListHolds Shelf List - + Items with Ready-for-pickup status are displayed on the Hold Shelf List. Hold Shelf List can help manage items on the hold shelf. To view the holds shelf list: - 1. - Select Circulation → Browse Holds Shelf2. - Actions for Selected Holds are available, as in the patron record. 3. + 1. + Select Circulation → Browse Holds Shelf2. + Actions for Selected Holds are available, as in the patron record. 3. Expired holds may be deleted from this screen. - If you cancel a ready-for-pickup hold, you must check in the item to make it available for circulation. + If you cancel a ready-for-pickup hold, you must check in the item to make it available for circulation. Transit ItemsTransit Items - + Evergreen’s In Transit feature tracks items transferring among branches. It allows patrons to return items at any branch and for holds to be placed on items at other branches. When will an item go In Transit? - 1.When an item is checked in at a non-owning branch the status changes to In Transit. A transit slip may be printed.2.When a hold is captured for an item with a pickup branch other than location at which the hold is captured, the item’s status will be changed to + 1.When an item is checked in at a non-owning branch the status changes to In Transit. A transit slip may be printed.2.When a hold is captured for an item with a pickup branch other than location at which the hold is captured, the item’s status will be changed to In Transit. If the hold is captured from the Check In screen, a prompt to print the Transit/Hold slip will display. 
If the hold is captured from the Capture Holds screen, a Transit/Hold slip will be printed automatically.
 Receiving In Transit ItemsReceiving In Transit Items
- 
+ 
 All items received through transit must be checked in by the receiving branch. This changes the items' statuses from In Transit to Reshelving or
 Ready for Pickup.
 Transit ListTransit List
- 
+ 
 The Transit List report may be used as a tool to help manage your incoming and outgoing transits.
 To access and use the Transit List report:
- 1.
+ 1.
 Select Admin → Local System Administration → Transit List.
- 2.
- Specify ransit to or Transit from library from the dropdown menu. 3.
- Pick a date range in Transit Date falls between fields.4.
- Click Transits Retrieve.5.
+ 2.
+ Specify Transit to or Transit from library from the dropdown menu. 3.
+ Pick a date range in Transit Date falls between fields.4.
+ Click Retrieve Transits.5.
 Items with an In Transit status for the selected time period are listed.
 Aborting TransitsAborting Transits
- 
+ 
 Transits may be aborted (cancelled) from multiple locations within Evergreen. Use it when processing missing in transit items or when a patron requests an
 item that has just been returned and is in transit to its home library for reshelving.
 This procedure can be performed from the Transit List or from the Item Status screen.
- 1.
- Select the transit(s) to cancel.2.
- Select Actions for Selected Transits → Abort Transits.3.
- The transit is cancelled, but will still display in the list. 4.
+ 1.
+ Select the transit(s) to cancel.2.
+ Select Actions for Selected Transits → Abort Transits.3.
+ The transit is cancelled, but will still display in the list. 4.
 Click Retrieve Transits. The screen will refresh and the cancelled item(s) will no longer display as transits.
 Cancelling Transits at CheckoutCancelling Transits at Checkout
- 
+ 
 Items with a status of in transit trigger a notification when an attempt is made to check them out.
To allow in transit items to be checked out, override the block by clicking Abort Transit on the alert screen.
 Proceed by clicking Checkout.
@@ -2058,18 +2058,18 @@
 Items with a status of in transit trigger a notification when an attempt is made to check them out.
 To allow in transit items to be checked out, override the block by clicking Abort Transit on the alert screen.
 Proceed by clicking Checkout.
- 1.
- Click Item Status or hit F52.
- Scan Item barcode3.
- Right click on the item and select Abort Transit4.
- At Aborting Transits pop-up, click Yes5.
+ 1.
+ Click Item Status or hit F52.
+ Scan Item barcode3.
+ Right click on the item and select Abort Transit4.
+ At Aborting Transits pop-up, click Yes5.
 The item now has the status Reshelving.
 Offline TransactionsOffline Transactions
- 
+ 
 Evergreen's Standalone Interface/Offline Interface is designed to log transactions during a network outage, which can be uploaded and processed
 once network operations are restored.
@@ -2077,25 +2077,25 @@
 The terms Offline Interface and Standalone Interface mean the same thing - a separate program to handle simple
 circulation tasks while the network is down.
- 
+ 
 To access the Offline Interface, go to the Staff Client login screen and click the Standalone Interface button.
- 
+ 
- 
+ 
 The Evergreen Standalone Interface will open.
- 
+ 
- 
 Patron RegistrationPatron Registration
+ Patron RegistrationPatron Registration
 Patron registration on the Evergreen Offline Interface records the minimum patron information necessary to register a new patron.
- 
+ 
 All fields on the Patron Registration screen, except Line 2 of Billing Address, are required. If your library does not
 record information for any field, you need to work out a standard fake value for it, e.g.



- 1.
+ 1.
- 
+ 
 Click Register Patron on the top menu bar.
- 
+ 
- 2.
+ 2.
- 
+ 
 The Patron Registration screen is displayed.
- 
+ 
- 3.
+ 3.
- 
+ 
 Fill in the form with patron information. Use the drop down list if available.
Click the Save patron registration button. Click OK on the confirmation pop-up window.
- 
+ 
 Check OutCheck Out
- 1.
+ 1.
- 
+ 
 Click the Check Out button to access the check out screen.
- 
+ 
- 2.
+ 2.
- 
+ 
 The Standalone Check Out screen will open.
- 
+ 
- 3.
+ 3.
 Make sure the date (on the left end of the menu bar) is correct.
- 4.
+ 4.
- 
+ 
 Scan the patron's library card barcode in the Enter the patron's barcode box.
- 
+ 
- 5.
+ 5.
- 
+ 
 Check that the due date is correct. You may delete it, then type in a due date in the Enter the item due date box. You may also click the choose one of these
 dropdown list to select a relative due date based on the loan period.
- 
+ 
- 6.
+ 6.
- 
+ 
 Scan the item's barcode in the Enter the item barcode box. It will appear on the right side of the screen.
- 
+ 
- 7.
+ 7.
- 
+ 
 For non-catalogued items, you may also click the choose a non-barcode
- option dropdown list to select a non-catalogued category.
+ option dropdown list to select a non-catalogued category.
- 
+ 
 Enter the number of items you want to check out, then click OK on the prompt window.
- 
+ 
- 8.
+ 8.
 Scan all items, changing the due date if necessary.
- 9.
+ 9.
- 
+ 
 If you want to print a receipt, make sure the Print receipt? checkbox is selected.
- 
+ 
- 10.
+ 10.
- 
+ 
 Click Save these transactions.
- 
+ 
- 
+ 
 The default dates are based on your computer settings. Pre-catalogued item circulation is not available on the Offline Interface. If an existing pre-cat
 barcode happens to be used, it will be checked out with the previous author and title. If a new pre-cat barcode is attempted, an error of ASSET NOT
 FOUND (item not found) will be returned upon processing offline transactions.
- 
 RenewRenew
+ RenewRenew
 To renew, you must know the item's barcode number. The patron's barcode is optional.
- 1.
+ 1.
- 
+ 
 To access the renew function, click the Renew button on the top menu bar.
- 
+ 
- 2.
+ 2.
- 
+ 
 The Renew screen looks very similar to the Check Out screen.
The differences are that the patron's barcode is optional on the Renew screen, and the non-barcoded option is not available, as non-barcoded items cannot be renewed.
- 
+ 
- 3.
+ 3.
 Follow the same procedure as checking out described above. Skip the patron barcode if you do not have it.
- 
 In House UseIn House Use
+ In House UseIn House Use
- 1.
+ 1.
- 
+ 
 To access In House Use, click the In House Use button on the top menu bar.
- 
+ 
- 2.
+ 2.
- 
+ 
 Make sure the date is correct.
- 
+ 
- 3.
+ 3.
- 
+ 
 Type the number in the Enter the number of uses of the item box.
- 
+ 
- 4.
+ 4.
- 
+ 
 Scan or type in the item barcode number in the Enter the item barcode box.
- 
+ 
- 5.
+ 5.
 Repeat the above 2 steps until all items have been scanned.
- 6.
+ 6.
- 
+ 
 Click Save these transactions. Make sure the Print receipt? checkbox is selected if you want to print a receipt.
- 
+ 
- 
 Check InCheck In
+ Check InCheck In
- 1.
+ 1.
- 
+ 
 Click the Check In button on the top menu bar.
- 
+ 
- 2.
+ 2.
- 
+ 
 The Check In screen will open.
- 
+ 
- 3.
+ 3.
- 
+ 
 Make sure the date is correct.
- 
+ 
- 4.
+ 4.
- 
+ 
 Scan the item's barcode in the Enter item barcode box. The number will be displayed on the right side of the screen.
- 
+ 
- 5.
+ 5.
 Scan all items you want to check in.
- 6.
+ 6.
- 
+ 
 Click Save these transactions. If you need to print a receipt, make sure the Print receipt? checkbox is selected before you save the
 transactions.
- 
+ 
- 
+ 
 Without access to the Evergreen database, items on hold or with a special status will not be captured in offline mode. The Sitka Support Team
 recommends that libraries avoid using the check in function on the Standalone Interface if possible.
- 
 Uploading offline transactionsUploading offline transactions
+ Uploading offline transactionsUploading offline transactions
 Once you are able to connect to the server, you need to upload the offline transactions.  It is good practice
 to do this as soon as possible, but if the local system administrator
@@ -2328,13 +2328,13 @@
 circulation tasks while the network is down.
Once you can connect to the server, there are 3 steps to uploading offline transactions: - 1. + 1. Create a session: to be done by local system administrators at an administration workstation. - 2. + 2. Upload transactions to a session: to be done by circulation staff at circulation workstations. - 3. + 3. Process the uploaded transactions: to be done by local system administrators at an administration workstation. @@ -2343,7 +2343,7 @@ workstations used in offline circ to that session.  Once all of the branch workstations have uploaded their transactions to the session, the manager will process all the transactions from all the workstations at once. - + Circulation Staff uploading transactions to the session does not put the transactions into the Evergreen database. The transactions will not be sent to the Evergreen database until the manager processes @@ -2351,53 +2351,53 @@ Create a SessionCreate a Session - 1. + 1. Log into Evergreen with a local system administrator username and password. - 2. + 2. - + From the menu bar, select Admin (-) → Offline Transaction Management. - + - 3. + 3. The Offline Transactions screen will open. Previously created sessions will be listed in the Offline Sessions section. Otherwise, the Offline Sessions section will be blank. - 4. + 4. - + In the upper Offline Sessions section, click on the Create button to create a new session. - + - 5. + 5. - + Enter a name for the session, like “Internet Down 2009-12-02”.  Click OK. - + - 6. + 6. - + In the Offline Sessions section, highlight the session you just created. An Uploaded Transactions section will appear in the bottom of the screen. Initially, this section will be empty.   - + - 7. + 7. Inform library staff that the session has been created and what the session name is. @@ -2410,44 +2410,44 @@ it's ready for your upload. 
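Conceptually, the offline workflow described in this section is a replayable transaction log: each workstation records circulation actions locally while the network is down, and the upload and process steps replay them against the server in order. The following is a minimal illustrative sketch in Python; the field names and JSON-lines layout are assumptions for illustration, not the Standalone Interface's actual storage format.

```python
import json

def log_offline(log, action, item_barcode, patron_barcode=None):
    """Append one circulation action to the local offline log.

    The keys used here are illustrative only, not Evergreen's format.
    """
    log.append(json.dumps({
        "action": action,          # e.g. "checkout", "checkin", "renew"
        "item": item_barcode,
        "patron": patron_barcode,  # optional, e.g. for check in
    }))

def replay(log):
    """Decode the log in order, as the upload/process steps would."""
    return [json.loads(entry) for entry in log]

# While offline, transactions accumulate locally:
log = []
log_offline(log, "checkout", "31234000123456", "21234000654321")
log_offline(log, "checkin", "31234000123456")
# Once connectivity returns, replay(log) yields them in order.
```

The key property this models is ordering: transactions must be processed in the sequence they occurred, which is why all workstations upload into one session before the manager processes it.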
There may be several sessions shown on the Offline Transaction Management screen, so you will need the
 name of the correct session from your local system administrator.
- 1.
+ 1.
 Log into Evergreen with your regular username and password.
- 2.
+ 2.
 From the menu bar, select Admin (-) → Offline Transaction Management.
- 3.
+ 3.
 The Offline Transactions screen will open. You should see at least one session in the Offline Sessions
 section. You may see old sessions listed there, as well.
- 4.
+ 4.
- 
+ 
 In the upper Offline Sessions section, highlight the correct session, then click Upload.  
- 
+ 
- 5.
+ 5.
- 
+ 
 When the uploading is finished, select the session in the Offline Sessions section. The value in the
 Upload Count column should now have increased by 1. Your workstation should now be listed in the
 Uploaded Transactions section.
- 
+ 
- 6.
+ 6.
 Inform your local system administrator that your transaction has been uploaded to the session.
- 
+ 
 You will need to do this for each workstation you have used for offline circulation.  If your library has more
 than one workstation that has been used for offline transactions, you will see the other workstation sessions that have already
- 
+ 
- 
+ 
 The number in the Transactions Processed column is equal to the number of items checked out or checked
 in.  For example, if there are 5 transactions processed, this could be 5 items checked out, or 3 items checked in and 2
 items checked out, or 5 items checked in.  
 ExceptionsExceptions
 Exceptions are problems that were encountered during processing.  For example, a mis-scanned patron
 barcode, an open circulation, or an item that wasn't checked in before it was checked out to another patron would
 be listed as an exception. Transactions causing exceptions may not be loaded into the Evergreen database. Staff
 should examine the exceptions and take necessary action.
- 
+ 
 The example below shows several exceptions:
- 
+ 
 These are a few notes about possible exceptions. It is not an all-inclusive list.
- 1.
+ 1.
 Checking out a DVD with the wrong date (leaving due date set at +2 weeks instead of +1 week) doesn't cause
 an exception.
- 2.
+ 2.
 Overdue books are not flagged as exceptions.
- 3.
+ 3.
 Checking out a reference book doesn't cause an exception.
- 4.
+ 4.
 Checking out an item belonging to another library doesn't cause an exception.
- 5.
+ 5.
 The Standalone Interface doesn't recognize books on hold, so no exceptions will be generated for them.
- 6.
+ 6.
 The Standalone Interface will recognize blocked, barred, and expired patrons as well as lost cards, IF
 you have recently done an Admin (-) → Download Offline Patron List on the workstation on
 which you're using the Standalone Interface. You will get an error message indicating the patron
@@ -2543,8 +2543,8 @@
 time.
 Common error messages:
-1.ROUTE-ITEM - Indicates the book should be routed to another branch or library system.
- You'll need to find the book and re-check it in (online) to get the Transit Slip to print.2.COPY_STATUS_LOST - Indicates a book previously marked as lost was found and checked in.
3.CIRC_CLAIMS_RETURNED - Indicates a book previously marked as claimed-returned was found and checked in.4.
 ASSET_COPY_NOT_ FOUND - Indicates the item barcode was mis-scanned/mis-typed.5.ACTOR_CARD_NOT_ FOUND - Indicates the patron's library barcode was mis-scanned/
 mis-typed.6.OPEN_CIRCULATION_ EXISTS - Indicates a book was checked out that had never been checked in.7.MAX_RENEWALS_ REACHED - Indicates the item has already
 been renewed the maximum times allowed (or it's a video/DVD).
+1.ROUTE-ITEM - Indicates the book should be routed to another branch or library system.
+ You'll need to find the book and re-check it in (online) to get the Transit Slip to print.2.COPY_STATUS_LOST - Indicates a book previously marked as lost was found and checked in. 3.CIRC_CLAIMS_RETURNED - Indicates a book previously marked as claimed-returned was found and checked in.4. ASSET_COPY_NOT_FOUND - Indicates the item barcode was mis-scanned/mis-typed.5.ACTOR_CARD_NOT_FOUND - Indicates the patron's library barcode was mis-scanned/mis-typed.6.OPEN_CIRCULATION_EXISTS - Indicates a book was checked out that had never been checked in.7.MAX_RENEWALS_REACHED - Indicates the item has already been renewed the maximum times allowed (or it's a video/DVD).
@@ -2559,18 +2559,18 @@
 SearchSearch
- 
+ 
 Search functionality may be accessed through:
- •Cataloging → Search the Catalog •Search → the Catalog•Presss F3
+ •Cataloging → Search the Catalog •Search → the Catalog•Press F3
 Specialized search functionality for catalogers is located on the left-hand side of the search screen (Quick
 Search).
- 1.
- Enter search criteria.2.
- Click Submit.3.
- Click on the title link for the desired record.4.
- The complete record will display in the OPAC view.5.
+ 1.
+ Enter search criteria.2.
+ Click Submit.3.
+ Click on the title link for the desired record.4.
+ The complete record will display in the OPAC view.5.
 Use the Actions for this Record dropdown menu to manipulate the record.
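Catalog searches can also be scripted over SRU, the server side of the same Z39.50/SRU services this documentation's server chapters configure. The sketch below only constructs a request URL; the /opac/extras/sru mount point is the commonly documented Evergreen default, and the hostname is hypothetical.

```python
from urllib.parse import urlencode

def sru_search_url(host, cql_query, max_records=10):
    """Build an SRU searchRetrieve URL for an Evergreen catalog.

    Assumes the default /opac/extras/sru mount point; adjust for your
    installation. The query is expressed in CQL.
    """
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": cql_query,
        "maximumRecords": max_records,
    }
    return "http://%s/opac/extras/sru?%s" % (host, urlencode(params))

# Hypothetical hostname, for illustration only:
url = sru_search_url("evergreen.example.org", 'title="harry potter"')
```

Fetching the resulting URL returns MARCXML records wrapped in an SRU response, which scripts can parse instead of driving the staff client search screens.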
- Use ocn as a prefix for nine digit OCLC numbers. (e.g. ocn123456789)
+ Use ocn as a prefix for nine digit OCLC numbers. (e.g. ocn123456789)
 Use ocm as a prefix for OCLC numbers that are eight digits or shorter. Evergreen will automatically prefix the number with zeros so that it is nine
 digits. (e.g. ocm01234567, or ocm00123456)
 Do not use hyphens when searching by LCCN. Substitute a 0 in place of the hyphen. (e.g. 2001001234)
@@ -2578,34 +2578,34 @@
 MARC Expert SearchMARC Expert Search
 Located beneath the “Quick Search” box on the catalog search screen.
- 1.
- Enter tag definitions and search criteria.2.
- Click Submit.3.
- Search multiple tags by clicking Add Row.4.
+ 1.
+ Enter tag definitions and search criteria.2.
+ Click Submit.3.
+ Search multiple tags by clicking Add Row.4.
 Click the title link to display the full record.
- To set default record views for a username, select Actions for this Record → Set bottom interface as
+ To set default record views for a username, select Actions for this Record → Set bottom interface as
 Default.
 Adding and Editing ItemsAdding and Editing Items
- 
+ 
 Adding Holdings to Title RecordsAdding Holdings to Title Records
- 
- 
 New boxes will display after Enter or Tab is selected. If a call number exists in the MARC record, use Apply to bring it in to the volume
- record.1.
- Retrieve an existing bibliographic record.2.
- Actions for this Record->Holdings Maintenance.3.
+ 
+ New boxes will display after Enter or Tab is selected. If a call number exists in the MARC record, use Apply to bring it into the volume
+ record.1.
+ Retrieve an existing bibliographic record.2.
+ Actions for this Record → Holdings Maintenance.3.
 The record opens in record summary view. To display existing volume and copy records, check the boxes for Show Volumes and Show Items.
- These boxes are sticky and will remain checked for the login until manually de-selected.4.
- Highlight the appropriate library from the display.5.
- Select Actions for Selected Rows → Add Volumes.6. - Use Tab or Enter to move through the displayed fields (# of volumes, call number, copies, and barcodes).7. - After entering the barcode number(s), click Edit then Create. 8. - The Copy Editor opens in a new window. Move through the fields to edit information as necessary. Click Apply on every edit. 9. + These boxes are sticky and will remain checked for the login until manually de-selected.4. + Highlight the appropriate library from the display.5. + Select Actions for Selected Rows → Add Volumes.6. + Use Tab or Enter to move through the displayed fields (# of volumes, call number, copies, and barcodes).7. + After entering the barcode number(s), click Edit then Create. 8. + The Copy Editor opens in a new window. Move through the fields to edit information as necessary. Click Apply on every edit. 9. When finished, click Create Copies. - New items are assigned a status of In Process. Items must be checked in to become Available. Alternatively, use Edit Item Attributes from the Actions for Selected + New items are assigned a status of In Process. Items must be checked in to become Available. Alternatively, use Edit Item Attributes from the Actions for Selected Rows to change statuses to Available once records have been created and saved to database. The creation and use of item record templates is recommended. @@ -2614,22 +2614,22 @@ Copy AlertsCopy Alerts - + Copy alerts are useful alerts for physical item copies. Staff must be granted permission to override alerts at checkout or checkin. - Creating copy alerts:1. - Search → for copies by Barcode.2. - Enter an item barcode.3. - Select the row.4. - Actions for Selected Items → Edit Item Attributes.5. - Click in the alert message box and enter text.6. - Click Apply.7. + Creating copy alerts:1. + Search → for copies by Barcode.2. + Enter an item barcode.3. + Select the row.4. + Actions for Selected Items → Edit Item Attributes.5. 
+ Click in the alert message box and enter text.6. + Click Apply.7. Click Modify Copies. - Copy alerts must be manually removed. Follow the same process, but delete the text in the Alert Message box, to remove copy alerts. - Adding or removing copy alerts to or from multiple items:1. - Retrieve items to the Item Status screen.2. - Select all items to be changed by highlighting the first item in the list, holding down the Shift key, and clicking on the last item. Select several, non-sequential items, by holding down the Ctrl key and clicking on the required items.3. + Copy alerts must be manually removed. Follow the same process, but delete the text in the Alert Message box, to remove copy alerts. + Adding or removing copy alerts to or from multiple items:1. + Retrieve items to the Item Status screen.2. + Select all items to be changed by highlighting the first item in the list, holding down the Shift key, and clicking on the last item. Select several, non-sequential items, by holding down the Ctrl key and clicking on the required items.3. Continue to Edit Item Attributes, as above. Viewing Copy AlertsViewing Copy Alerts @@ -2642,205 +2642,205 @@ Copy NotesCopy Notes - + Copy notes are informational only. They may be internal or made available to the public in the OPAC. - Accessing copy notes from the copy editor:1. - Click Copy Notes.2. - If a note exists, it will display with a yellow background.3. + Accessing copy notes from the copy editor:1. + Click Copy Notes.2. + If a note exists, it will display with a yellow background.3. Click Add Note to create a new copy note. Select the Public checkbox to make the note visible in - the OPAC.4. - Click Add Note to display the new note.5. + the OPAC.4. + Click Add Note to display the new note.5. Use the Delete This Note button to remove a note from a copy record. 
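The public/private behavior of copy notes can be pictured as a simple filter applied at OPAC display time. The sketch below uses a hypothetical note structure for illustration; Evergreen's internal representation of copy notes differs.

```python
def opac_visible_notes(notes):
    """Return only the notes flagged public.

    Only public notes appear in the OPAC; internal notes stay
    staff-side. The "public" key here is illustrative.
    """
    return [n for n in notes if n.get("public")]

notes = [
    {"title": "Condition", "value": "Water damage on cover", "public": True},
    {"title": "Order info", "value": "Replacement copy", "public": False},
]
```

With this data, only the first note would be shown to patrons; the second remains visible to staff alone.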
Adding New Bibliographic RecordsAdding New Bibliographic Records
- 
+ 
 Evergreen allows new bibliographic records to be added to the database through Z39.50 searching, MARC record file uploads, and original cataloging.
 Importing MARC Records via the Z39.50 InterfaceImporting MARC Records via the Z39.50 Interface
- 
- Active search fields will adjust to the selected targets. Keyword and Subject will only be active if the local catalog is selected. If multiple
- targets are selected for a search, an active box may apply to only one target.The Service column indicates where the record was found. If nativeevergreen-catalog is listed in the service
- column, the record is already in the Evergreen database.1.
- Cataloging → Import Record from Z39.502.
+ 
+ Active search fields will adjust to the selected targets. Keyword and Subject will only be active if the local catalog is selected. If multiple
+ targets are selected for a search, an active box may apply to only one target. The Service column indicates where the record was found. If native-evergreen-catalog is listed in the service
+ column, the record is already in the Evergreen database.1.
+ Cataloging → Import Record from Z39.502.
 Select single or multiple preconfigured Z39.50 targets from the list. Only subscription services require logins.
 Once databases have been selected, click Save as Default to save the services to be
 searched and any usernames/passwords. These will be automatically selected the next
 time the Z39.50 screen is opened.3.
 Fill in search criteria for the item and click Search.4.
 Search results display in the bottom pane. To view long lists of results, use Hide Top Pane. Information about
 each record retrieved appears on a separate summary line, with various columns of information. 
5. From this screen users may: - •Retrieve further results, if applicable•View MARC records•Export MARC records•Import or overlay MARC records + •Retrieve further results, if applicable•View MARC records•Export MARC records•Import or overlay MARC records Importing RecordsImporting Records - - If you did not utilize Fast Item Add as described above, you may now attach holdings as described in Adding holdings to title records.1. - Highlight the record and click MARC Editor for Import.2. - The record opens in the MARC Editor. Edit fixed and bibliographic fields. 3. - When finished, click Import Record.4. - Click OK.5. - If Fast Item Add was used, the copy editor will display. Make necessary adjustments and click Create Copies.6. + + If you did not utilize Fast Item Add as described above, you may now attach holdings as described in Adding holdings to title records.1. + Highlight the record and click MARC Editor for Import.2. + The record opens in the MARC Editor. Edit fixed and bibliographic fields. 3. + When finished, click Import Record.4. + Click OK.5. + If Fast Item Add was used, the copy editor will display. Make necessary adjustments and click Create Copies.6. The record will display in the catalog view. - Select Fast Item Add to input the call number and barcode data from this screen. This box is + Select Fast Item Add to input the call number and barcode data from this screen. This box is sticky for the login. Uploading MARC FilesUploading MARC Files - + Title records that do not already exist in the Evergreen database may be uploaded directly to the catalog through vendor-supplied MARC files. Multiple title records can be uploaded and added at the same time. - The Import Attached Holdings option requires additional server configuration.1. - Cataloging → MARC Batch Import/Export.2. - The MARC File Upload page opens and displays the Import Records form.3. + The Import Attached Holdings option requires additional server configuration.1. 
+ Cataloging → MARC Batch Import/Export.2. + The MARC File Upload page opens and displays the Import Records form.3. Complete the form, creating a new Upload Queue. - a. + a. Select Auto-Import Non-Colliding Records to automatically import MARC records from the file, if they are not already in the Evergreen database. - b. + b. Leave Select a Record Source defaulted to the OCLC setting. - c. + c. Click Browse... to choose the source MARC file. - d. + d. Click Upload. - 4. + 4. Details from the file upload will appear. By default Limit to Non-Imported Records is selected and the table only displays MARC records that conflict with others already in Evergreen. You may click Matches to view the conflicting Evergreen records. If the matched records are not true matches, it is still possible to upload the selected records using the Actions drop-down menu. Creating New MARC RecordsCreating New MARC Records - + New MARC records may be created in Evergreen using MARC templates. For detailed information on MARC standards, visit the Library of Congress website: http://www.loc.gov/marc/ - If the Fast Item Add box was selected, the copy editor will open after Create Record is clicked.1. - Cataloging → Create New Marc Record.2. - The MARC Template screen will open.3. - Select the appropriate template and click Load.4. - A blank MARC record will load. 5. + If the Fast Item Add box was selected, the copy editor will open after Create Record is clicked.1. + Cataloging → Create New Marc Record.2. + The MARC Template screen will open.3. + Select the appropriate template and click Load.4. + A blank MARC record will load. 5. Complete the MARC record according to library policy. Tags and subfields may be added or deleted as required (right click on a field to view available options). - If the Fast Item Add box is selected, enter a call number and barcode. - 6. - Click Create Record.7. + If the Fast Item Add box is selected, enter a call number and barcode. + 6. + Click Create Record.7. 
The record is created and will open in the current default view. Holdings may now be added. Working with the MARC EditorWorking with the MARC Editor - + The MARC Editor allows MARC tags, sub-fields, and indicators to be edited. OPAC icons for text, moving pictures and sound rely on correct MARC coding in the leader and the 008, as do OPAC search filters such as publication date, item type, or target audience. Bibliographic matching and de-duplicating also rely on correct MARC coding and consistency in use and content in particular MARC tags. Editing MARC RecordsEditing MARC Records - - 1. - Retrieve the record.2. - Actions for this Record → MARC Edit. 3. - The MARC record will display. 4. - Select Stack subfields to alter subfields display.5. - Right click into a tag field to add/remove rows or replace tags.6. + + 1. + Retrieve the record.2. + Actions for this Record → MARC Edit. 3. + The MARC record will display. 4. + Select Stack subfields to alter subfields display.5. + Right click into a tag field to add/remove rows or replace tags.6. To work with the data in a tag or indicator, click or Tab into the required field. Right click to - view acceptable tags or indicators.7. - When finished, click Save Record.8. + view acceptable tags or indicators.7. + When finished, click Save Record.8. Click OK. - The MARC Editor may be navigated using keyboard shortcuts. Click Help to see the shortcut menu from within the MARC Editor. + The MARC Editor may be navigated using keyboard shortcuts. Click Help to see the shortcut menu from within the MARC Editor. MARC Record Leader and MARC fixed field 008MARC Record Leader and MARC fixed field 008 - - + + Parts of the leader and the 008 field can be edited in the MARC Editor via the fixed field editor box displayed above the MARC record. Information about the leader and the 008 can be found on the Library of Congress’s MARC Standards page at http://www.loc.gov/marc/. - To edit the MARC record leader1. 
- Retrieve and display the appropriate record in MARC Edit view.2. - Click into any box displayed in the fixed field editor.3. - Press Tab or use the mouse to move between fields. 4. - Click Save Record.5. - Click OK to save record edits.6. + To edit the MARC record leader1. + Retrieve and display the appropriate record in MARC Edit view.2. + Click into any box displayed in the fixed field editor.3. + Press Tab or use the mouse to move between fields. 4. + Click Save Record.5. + Click OK to save record edits.6. The OPAC icon for the appropriate material type will display. - The MARC Editor may be navigated using keyboard shortcuts. Click Help to see the shortcut menu from within the MARC Editor. + The MARC Editor may be navigated using keyboard shortcuts. Click Help to see the shortcut menu from within the MARC Editor. Overlaying MARC RecordsOverlaying MARC Records - + Overlaying a MARC record replaces an existing MARC record while leaving all holdings, holds, active circulations, bills, and fines intact. In Evergreen, a record must be marked for overlay. The mark for overlay is by login. Only one record at a time may be marked for overlay. When another record is marked for overlay, the previously marked item is de-marked. Once a record is marked, it remains marked until overlaid or until the user logs out of Evergreen. - Marking a record for overlay1. - Search for and retrieve a record for overlay.2. + Marking a record for overlay1. + Search for and retrieve a record for overlay.2. Select Actions for this Record → Mark for Overlay. Record is now marked. - Overlaying the marked record1. - Once the record is marked for overlay, proceed to search for and import the new record from a Z39.50 target.2. - Select Cataloging → Import Record from Z39.50.3. - Choose targets and enter search terms.4. - Click MARC Editor for Overlay. The TCN of the Evergreen record marked for overlay is displayed.5. - The record displays in MARC Edit view. Edit the record as necessary. 6. 
Click Overlay Record.7.
 The existing record will display along with a prompt to confirm the overlay. Panes may be moved to view the record in entirety, if required.8.
 Click Overlay.9.
 Confirm the overlay. The record in Evergreen is overlaid with the new MARC record. All preexisting holdings remain intact.
 Cataloging TemplatesCataloging Templates
- 
+ 
 This section explains creating, using, exporting, and importing item record templates for cataloging. Use of templates enhances item creation
 and helps ensure consistency in record format in the database.
 Creating item templates1.
 Search for and retrieve a record.2.
 Select Actions for this Record → Holdings Maintenance.3.
 Select an item record in the list and click Actions for Selected Rows → Edit Item Attributes.
- 4.
+ 4.
 The Copy Editor will open. Select the required template attributes by moving through fields and clicking Apply
 for every edit. 5.
 Click Save when edits are complete.6.
 Enter a template name at the prompt7.
 Click OK.8.
 The template is now saved.
Click OK.9. 
+ This template may now be selected from the drop down menu.10. 
 Click Close to exit the Copy Editor.
 Once item templates have been created, they may be employed when items are added to the database.
- Using item templates:1. 
- Retrieve a record and display volumes.2. 
- Select the appropriate volume.3. 
- Actions for this Row → Add Items.4. 
- Enter the number of copies and barcode(s).5. 
- Click Edit then Create to open the Copy Editor.6. 
- Choose the appropriate template from the drop down menu.7. 
- Click Apply.8. 
- Make edits as necessary. When finished, click Create Copies.9. 
- Items are created.10. 
+ Using item templates:1. 
+ Retrieve a record and display volumes.2. 
+ Select the appropriate volume.3. 
+ Actions for this Row → Add Items.4. 
+ Enter the number of copies and barcode(s).5. 
+ Click Edit then Create to open the Copy Editor.6. 
+ Choose the appropriate template from the drop down menu.7. 
+ Click Apply.8. 
+ Make edits as necessary. When finished, click Create Copies.9. 
+ Items are created.10. 
 Click OK.
 Saved templates are only viewable by the login that created them. Templates must be exported in order to share templates amongst staff 
 members.
 Exporting Item Templates
- 1. 
- Click Export in the top left hand corner of the Copy Editor. This will export all templates for the user.2. 
- Select where the template should be saved on the workstation, name the file, and click Save. 3. 
+ 1. 
+ Click Export in the top left hand corner of the Copy Editor. This will export all templates for the user.2. 
+ Select where the template should be saved on the workstation, name the file, and click Save. 3. 
 Click OK.
- Importing templates:1. 
- Click on Import in the top left hand corner of the Copy Editor.2. 
- Navigate to the file’s location, select the file and click Open.3. 
+ Importing templates:1. 
+ Click on Import in the top left hand corner of the Copy Editor.2. 
+ Navigate to the file’s location, select the file and click Open.3. 
 Click OK.
BucketsBuckets
-  
+  
 The Buckets function in Evergreen groups records together and allows for batch changes and the creation of pull lists. Batch changes allow 
 many records to be grouped together for changes to be enacted on them all at once, instead of performing individual edits. Buckets allow 
 materials to be tracked and worked on by multiple staff members.
 @@ -2849,148 +2849,148 @@
 Buckets are useful to group records together over a period of time. Evergreen’s bucket functionality allows records to be added to new or 
 existing buckets where they remain until they are manually ungrouped. An item’s location in a bucket does not affect normal library functions 
 such as circulation.
 Being in a bucket is not an item status. Buckets may be shared or private and are associated with a login.
- When working with buckets, it is important to ensure that record type corresponds with bucket type. Copy records may not be added to bibliographic 
+ When working with buckets, it is important to ensure that record type corresponds with bucket type. Copy records may not be added to bibliographic 
 record buckets and vice versa.
 Buckets may be created independently of accessing records or they may be created from a record view.
- 1. 
- Cataloging → Manage Record Buckets.2. 
- Bucket Actions → New Bucket.3. 
- Name the bucket and click OK.4. 
- Confirm the action.5. 
+ 1. 
+ Cataloging → Manage Record Buckets.2. 
+ Bucket Actions → New Bucket.3. 
+ Name the bucket and click OK.4. 
+ Confirm the action.5. 
 The Bucket View changes to display the new bucket as the active bucket. The bucket is numbered and the creating owner is
- identified.6. 
+ identified.6. 
 All buckets created by this login are available in the drop down menu.
- Creating record buckets from within a record1. 
+ Search for, retrieve, and display the desired bibliographic record.2. + Choose Actions for this Record → Add to Bucket.3. + Select Add to New Bucket.4. Name the bucket and click OK. The results are the same as creating a bucket using the steps above. Once a bucket has been added, records may be added to it. - 1. - Search for, retrieve, and display the desired bibliographic record.2. - Choose Actions for this Record → Add to Bucket.3. - Select the appropriate bucket and click Add to Selected Bucket.4. - To confirm this action, go back to the Record Bucket tab. The bucket now contains the record.5. + 1. + Search for, retrieve, and display the desired bibliographic record.2. + Choose Actions for this Record → Add to Bucket.3. + Select the appropriate bucket and click Add to Selected Bucket.4. + To confirm this action, go back to the Record Bucket tab. The bucket now contains the record.5. Continue to add records, if required. - To work from within the buckets module1. - Choose Cataloging → Manage Record Buckets.2. - Select the Record Query tab on the left side of the screen.3. - Select the appropriate bucket and click Add to Selected Bucket.4. - Use Add All to Pending Records or select individual records and Add Selected to Pending Records.5. - Select the Pending Records tab.6. + To work from within the buckets module1. + Choose Cataloging → Manage Record Buckets.2. + Select the Record Query tab on the left side of the screen.3. + Select the appropriate bucket and click Add to Selected Bucket.4. + Use Add All to Pending Records or select individual records and Add Selected to Pending Records.5. + Select the Pending Records tab.6. Click Add All to current Bucket or Add Selected to current Bucket. 
- - •The column picker allows the data display to be manipulated within the bucket.•Clicking on List Actions → Save List CSV to File exports all column headers and + + •The column picker allows the data display to be manipulated within the bucket.•Clicking on List Actions → Save List CSV to File exports all column headers and displayed data to the workstation in a text file format. - This feature may be used to create bibliographies or similar lists.•Clicking on List Actions → Print List CSV prints column headers and + This feature may be used to create bibliographies or similar lists.•Clicking on List Actions → Print List CSV prints column headers and displayed data. Adding Copy Records to Copy BucketsAdding Copy Records to Copy Buckets - + While creating copy buckets is similar to creating record buckets (simply choose Copy Buckets in the menu choice), there are significant differences in adding copy records to a bucket. Records must be added to copy buckets from the copy record level. This may be done from several locations within the Evergreen client. - Adding copy records from the holdings maintenance record summary screen:1. - Select the required record and choose Actions for Selected Rows → Add Items to Buckets.2. - Add the record to an existing bucket or create a new bucket on the fly.3. + Adding copy records from the holdings maintenance record summary screen:1. + Select the required record and choose Actions for Selected Rows → Add Items to Buckets.2. + Add the record to an existing bucket or create a new bucket on the fly.3. The copy record is now in the selected bucket. The displayed data differs slightly from the Record Bucket view. - Adding copy records from the item status screen:1. - Select the required record(s) and choose Actions for Catalogers → Add Items to Buckets or choose Actions for Selected Items → Add to Item Bucket.2. + Adding copy records from the item status screen:1. 
+ Select the required record(s) and choose Actions for Catalogers → Add Items to Buckets or choose Actions for Selected Items → Add to Item Bucket.2. 
 Select the desired bucket and click Add to Selected Bucket or Create a New Bucket.
- Adding copy records from within the copy buckets module:1. 
- Enter item barcode(s) into the Pending Copies barcode box.2. 
- Click Submit.3. 
- Item(s) will display. 4. 
+ Adding copy records from within the copy buckets module:1. 
+ Enter item barcode(s) into the Pending Copies barcode box.2. 
+ Click Submit.3. 
+ Item(s) will display. 4. 
 Use Add All or select the appropriate items and Add Selected to move items to the bucket displayed in the bottom pane.
 Working with Records in a BucketWorking with Records in a Bucket
-  
+  
 Once records have been placed in a bucket, a variety of functions may be performed.
- To batch edit records:1. 
- Access the Copy Bucket view by choosing Edit → Copy Buckets.2. 
- Select the appropriate bucket from the drop down menu.3. 
- When the bucket is displayed click Edit Item Attributes.4. 
+ To batch edit records:1. 
+ Access the Copy Bucket view by choosing Edit → Copy Buckets.2. 
+ Select the appropriate bucket from the drop down menu.3. 
+ When the bucket is displayed click Edit Item Attributes.4. 
 The Copy Editor window opens. Note that all the barcodes, call numbers,
- and shelving locations display.5. 
- Make the desired edits.6. 
- Apply each change.7. 
- Click Modify Copies to save all changes.8. 
- Click OK.9. 
+ and shelving locations display.5. 
+ Make the desired edits.6. 
+ Apply each change.7. 
+ Click Modify Copies to save all changes.8. 
+ Click OK.9. 
 The desired changes are made for all selected items.
- Use caution when using the Transfer to Specific Volume action. 
- Removing records from buckets1. 
- Select the desired record.2. 
+ Use caution when using the Transfer to Specific Volume action. 
+ Removing records from buckets1. 
+ Select the desired record.2. 
Click Remove Selected from Bucket.
 The same procedure is used for both Record and Copy Buckets.
- Retrieving shared buckets1. 
- Access the copy or record bucket management screen as described above.2. 
- In the drop down menu beside Choose a bucket… select Retrieve shared bucket.3. 
- Enter the desired bucket number and click OK.4. 
+ Retrieving shared buckets1. 
+ Access the copy or record bucket management screen as described above.2. 
+ In the drop down menu beside Choose a bucket… select Retrieve shared bucket.3. 
+ Enter the desired bucket number and click OK.4. 
 The requested bucket now displays. The bucket number (assigned by Evergreen) and owner display.
 Merging Bibliographic RecordsMerging Bibliographic Records
-  
+  
 A common application for the merge function in Evergreen is to replace brief records with full records. This is only necessary when a full record 
 cannot be located in a Z39.50 target. Any volume and copy records or holds associated with the brief record will be transferred to the full record 
 upon merging.
-  1. 
- Create a bucket for the records you wish to merge.2. 
- Identify records to be merged and add them to the bucket.3. 
- Retrieve the bucket by selecting Edit → Record Buckets.4. 
- Click Merge All Records.5. 
- Select one record as the Lead Record. (Generally, the better quality, full record.)6. 
- Click Merge. 7. 
+  1. 
+ Create a bucket for the records you wish to merge.2. 
+ Identify records to be merged and add them to the bucket.3. 
+ Retrieve the bucket by selecting Edit → Record Buckets.4. 
+ Click Merge All Records.5. 
+ Select one record as the Lead Record. (Generally, the better quality, full record.)6. 
+ Click Merge. 7. 
 The brief record is subsumed by the full record. All of the volumes, copies and holds associated with the brief record are now attached to the 
 full record.
 Adding holdings to title recordsAdding holdings to title records
-  
+  
 This lesson demonstrates adding your library’s volume and copy records to a title record.
-  1. 
Search the catalogue for a record that matches the item in hand, as described in the section called “Locating Records”. - 2. + 2. - + When record is displayed, select Actions for this Record → Holdings Maintenance. - + - 3. + 3. - + Record opens in record summary view. Select your library from the list and click Actions for Selected Rows → Add Volumes. - + - 4. + 4. - + Enter amount in # of volumes field, type in a call number, or if the call number pulled from MARC record is acceptable click Apply to bring call number down to call number field, enter number of copies, scan barcode, and click Edit then Create. Use Tab or Enter to move through fields. - + - 5. + 5. - + The Copy Editor opens. Make all necessary edits by moving through fields and clicking Apply on every edit, and click Create Copies @@ -2999,16 +2999,16 @@ - 6. + 6. - + Click OK. - + - + Once an item is created it is assigned a status of In Process. Item must be checked in to become Available or cataloguer can choose to Edit Item Attributes and change status to @@ -3023,15 +3023,15 @@ pre-existing holds correctly until 24 hours after cataloguing. If your cataloguing turn-around time is shorter than 24 hours you can ensure the new copy is captured correctly at check in with the steps below. - 1. + 1. After adding the item select Actions for this Record → View Holds - 2. + 2. If there are outstanding hold requests, select the hold that is next in line then choose Actions for Selected Holds → Find Another Target. This forces Evergreen to re-target the hold and recognize the newly catalogued item. - 3. + 3. Check in the new item to capture it for the selected hold. @@ -3039,20 +3039,20 @@ Cataloguing Electronic Resources -- Finding Them in OPAC searchesCataloguing Electronic Resources -- Finding Them in OPAC searches - + For electronic resources to be visible in the catalog, you should add the 9 subfield in the 856 data field to indicate which organizational units will be able to find the resource. - 1. 
+ 1. 
 Open the record in the cataloging module.
- 2. 
+ 2. 
 Add the 9 subfield to the record and enter the short name of the organizational unit for the value. For example: 
-856 40 $u http://lwn.net $y Linux Weekly News $9 BR1 
+856 40 $u http://lwn.net $y Linux Weekly News $9 BR1 
 would make this item visible to people searching in a library scope that contains BR1.
- You can enter more than one 9 subfield or you can enter the parent organizational unit to make this item visible in more than 
+ You can enter more than one 9 subfield or you can enter the parent organizational unit to make this item visible in more than 
 one organizational unit under the same parent organizational unit.
- 3. 
+ 3. 
 Save the record.
 After a short time the electronic resource should appear in OPAC searches.
 @@ -3060,50 +3060,50 @@
 Printing Spine and Pocket LabelsPrinting Spine and Pocket Labels
-  
-  
+  
+  
 Copy buckets may be used to group items requiring labels.
- 1. 
- Locate the correct copy bucket.2. 
- Select Show Status.3. 
- Items display in the Item Status screen.4. 
+ 1. 
+ Locate the correct copy bucket.2. 
+ Select Show Status.3. 
+ Items display in the Item Status screen.4. 
 Select items requiring labels (hold the Ctrl key down and click the required 
 items to select multiple items; if all items require labels, hold the
- Shift key down while clicking the first and last items in the list).5. 
- Choose Actions for Selected Items → Print Spine Labels.6. 
- The Spine Labels screen will display.7. 
- Use the form on the left of the screen to modify spine and pocket label display. 8. 
+ Shift key down while clicking the first and last items in the list).5. 
+ Choose Actions for Selected Items → Print Spine Labels.6. 
+ The Spine Labels screen will display.7. 
+ Use the form on the left of the screen to modify spine and pocket label display. 8. 
 Select Re-Generate to view changes. (Checkbox selections are saved for a login, but Re-Generate must be clicked to view these changes.
- On line: selections are not saved.)9. 
- Click Available Macros to view auto-fill options for custom lines.10. 
- When finished, click Preview and Print.11. 
+ On line: selections are not saved.)9. 
+ Click Available Macros to view auto-fill options for custom lines.10. 
+ When finished, click Preview and Print.11. 
 From the Print Preview screen, select Print Page.
 Deleting RecordsDeleting Records
-  
-  Batch deletions:1. 
- Create a copy bucket for the items to be deleted (Cataloging → Manage Copy Buckets; create a New Bucket).2. 
- Enter the barcodes for the to-be-deleted items into the Pending Copies section of the Copy Buckets screen.3. 
- Add All of the items to the selected bucket.4. 
- Delete All from Catalog.5. 
+  
+  Batch deletions:1. 
+ Create a copy bucket for the items to be deleted (Cataloging → Manage Copy Buckets; create a New Bucket).2. 
+ Enter the barcodes for the to-be-deleted items into the Pending Copies section of the Copy Buckets screen.3. 
+ Add All of the items to the selected bucket.4. 
+ Delete All from Catalog.5. 
 The Deleted? status for each item will change from No to Yes. When all items have been deleted from a bibliographic record, 
 the bibliographic record is also deleted from the system. The record may still be retrieved through the client, but will display as 
 Deleted. These records will not display in the OPAC.
 Individual item records may be deleted from the Holdings Maintenance screen.
- To delete individual records:1. 
- Highlight the item (barcode) to be deleted.2. 
- Select Actions for Selected Rows → Delete Items.3. 
+ To delete individual records:1. 
+ Highlight the item (barcode) to be deleted.2. 
+ Select Actions for Selected Rows → Delete Items.3. 
 Confirm.
 If the deleted item was the last item attached to the MARC record, the MARC record will be automatically deleted.
 Occasionally, a bibliographic record may need to be deleted (e.g. an incorrect record was imported to the system).
- 1. 
- Retrieve the record.2. 
Choose Actions for this Record → Delete Record. - To restore records:1. - Retrieve the record through the staff client.2. - Actions for this record → Undelete Record.3. + To restore records:1. + Retrieve the record through the staff client.2. + Actions for this record → Undelete Record.3. Confirm the action by selecting the checkbox and Undelete in the resulting popup box. @@ -3119,117 +3119,117 @@ Creating a Booking ReservationCreating a Booking Reservation - + Only staff members can create reservations. To initiate a reservation, staff can - •search the catalog,•enter a patron record,•or use the booking module. + •search the catalog,•enter a patron record,•or use the booking module. - Search the catalog to create a reservationSearch the catalog to create a reservation + Search the catalog to create a reservationSearch the catalog to create a reservation - 1. - In the staff client, select Search → Search the Catalog2. - Search for the item to be booked.3. - Click Submit Search.4. - A list of results will appear. Select the title of the item to be reserved.5. + 1. + In the staff client, select Search → Search the Catalog2. + Search for the item to be booked.3. + Click Submit Search.4. + A list of results will appear. Select the title of the item to be reserved.5. After clicking the title, the record summary appears. Beneath the record summary, - the copy summary will appear. In the Actions column, select Copy Details.6. + the copy summary will appear. In the Actions column, select Copy Details.6. The Copy Details will appear in a new row. In the barcode column, click the book now - link.7. - A screen showing the title and barcodes of available copies will appear.8. + link.7. + A screen showing the title and barcodes of available copies will appear.8. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. 
After entering the patron’s barcode, the user’s existing reservations will appear at the bottom of the - screen.9. + screen.9. To the right, a section titled, I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set is incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to be accomplished. If the item has already been reserved at the time for which you are trying to reserve the item, then you will receive - an error message.10. + an error message.10. Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode, and you click Reserve Selected, you will receive an error message. If you do not have a preference, you do not have to select a barcode, and you may click Reserve Any. One of the barcodes will be pulled from the list. - An item must have a status of available or reshelving in order to - be targeted for a reservation. If the item is in another status, the reservation will fail.11. - After you have made the reservation, a message will confirm that the action succeeded. Click OK.12. + An item must have a status of available or reshelving in order to + be targeted for a reservation. If the item is in another status, the reservation will fail.11. + After you have made the reservation, a message will confirm that the action succeeded. Click OK.12. The screen will refresh, and the reservation will appear below the user’s name. - Enter a patron’s record to create a reservationEnter a patron’s record to create a reservation + Enter a patron’s record to create a reservationEnter a patron’s record to create a reservation - 1. 
- Enter the barcode or patron information, and click Search to retrieve the patron’s record.2. + 1. + Enter the barcode or patron information, and click Search to retrieve the patron’s record.2. The match(es) should appear in the right pane. Click the desired patron’s name. In the left panel, a summary of the patron’s information will appear. Click the Retrieve - Patron button in the right corner to access more options in the patron’s record.3. - Eight buttons will appear in the top right corner. Select Other → Booking to create, cancel, pick up, and return reservations.4. + Patron button in the right corner to access more options in the patron’s record.3. + Eight buttons will appear in the top right corner. Select Other → Booking to create, cancel, pick up, and return reservations.4. The Copy Details will appear in a new row. In the barcode column, click the book now - link.5. - A screen showing the title and barcodes of available copies will appear.6. + link.5. + A screen showing the title and barcodes of available copies will appear.6. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear at the bottom of the - screen.7. + screen.7. To the right, a section titled, I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set is incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to be accomplished. If the item has already been reserved at the time for which you are trying to reserve the item, then you will receive - an error message.8. + an error message.8. Finally, select the barcode of the item that you want to reserve. 
If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode, and you click Reserve Selected, you will receive an error message. If you do not have a preference, you do not have to select a barcode, and you may click Reserve Any. One of the barcodes will be pulled from the list. - An item must have a status of available or reshelving in order to - be targeted for a reservation. If the item is in another status, the reservation will fail.9. - After you have made the reservation, a message will confirm that the action succeeded. Click OK.10. + An item must have a status of available or reshelving in order to + be targeted for a reservation. If the item is in another status, the reservation will fail.9. + After you have made the reservation, a message will confirm that the action succeeded. Click OK.10. The screen will refresh, and the reservation will appear below the user’s name. - Use the booking module to create a reservationUse the booking module to create a reservation + Use the booking module to create a reservationUse the booking module to create a reservation - 1. - Select Booking → Create or Edit Reservations2. - Enter the barcode of the item and click Next.3. - A screen showing the name of the available resource will appear.4. + 1. + Select Booking → Create or Edit Reservations2. + Enter the barcode of the item and click Next.3. + A screen showing the name of the available resource will appear.4. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. After entering the - patron’s barcode, the user’s existing reservations will appear.5. + patron’s barcode, the user’s existing reservations will appear.5. To the right, a section titled, I need this resource... will allow you to set the dates and times for which the item should be reserved. 
If the date/time boxes appear in red, then the date and time set is incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to be accomplished. If the resource has already been reserved at the time for which you want to reserve the item, then the item will - disappear.6. + disappear.6. Finally, select the resource that you want to reserve. If multiple items or rooms exist, choose the resource that you want to reserve, and click Reserve Selected. If you do not select a resource, and you click Reserve Selected, you will receive an error message. If you do not have a preference, you may click Reserve Any, and one of the - resources will be pulled from the list.7. + resources will be pulled from the list.7. After you have made the reservation, a message will confirm that the action - succeeded. Click OK.8. + succeeded. Click OK.8. The screen will refresh, and the reservation will appear below the user’s name. Cancelling a ReservationCancelling a Reservation - + Staff members can cancel a patron’s reservation through the Create or Cancel Reservations tab available in a patron’s record. Staff members can also cancel a reservation immediately after it has been made. - Enter the patron’s record to cancel a reservationEnter the patron’s record to cancel a reservation - - 1. - Search for and retrieve a patron’s record.2. - Select Other → Booking → Create or Cancel Reservations.3. - The existing reservations will appear at the bottom of the screen.4. - To cancel a reservation, highlight the reservation that you want to cancel. Click Cancel Selected.5. - A pop-up window will confirm that you cancelled the reservation. Click OK.6. - The screen will refresh, and the cancelled reservation will disappear.7. + Enter the patron’s record to cancel a reservationEnter the patron’s record to cancel a reservation + + 1. + Search for and retrieve a patron’s record.2. 
+ Select Other → Booking → Create or Cancel Reservations.3. + The existing reservations will appear at the bottom of the screen.4. + To cancel a reservation, highlight the reservation that you want to cancel. Click Cancel Selected.5. + A pop-up window will confirm that you cancelled the reservation. Click OK.6. + The screen will refresh, and the cancelled reservation will disappear.7. To the right, a section titled, I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set is incorrect. For example, if the time for which the @@ -3238,55 +3238,55 @@ reserved at the time for which you are trying to reserve the item, then you will receive an error message. - Cancel a reservation immediately after it has been madeCancel a reservation immediately after it has been made + Cancel a reservation immediately after it has been madeCancel a reservation immediately after it has been made - 1. - Create the reservation.2. - Follow steps four through six in the section, Enter the patron’s record to cancel a reservation, to cancel the reservation.3. + 1. + Create the reservation.2. + Follow steps four through six in the section, Enter the patron’s record to cancel a reservation, to cancel the reservation.3. The existing reservations will appear at the bottom of the screen. Creating a Pull ListCreating a Pull List - + Staff members can create a pull list to retrieve items from the stacks. - 1. - To create a pull list, select Booking → Pull List.2. - To find a pull list for your library, select a library from the dropdown box adjacent to See pull list for library.3. + 1. + To create a pull list, select Booking → Pull List.2. + To find a pull list for your library, select a library from the dropdown box adjacent to See pull list for library.3. You can decide how many days in advance you would like to select reserved items. 
Enter the number of days in the box adjacent to Generate - list for this many days hence. For example, if you would like to pull items that are needed today, you can enter 1 in the box, and you will retrieve items that need to be pulled today.4. - Click Fetch to retrieve the pull list.5. + list for this many days hence. For example, if you would like to pull items that are needed today, you can enter 1 in the box, and you will retrieve items that need to be pulled today.4. + Click Fetch to retrieve the pull list.5. The pull list will appear. Click Print to print the pull list. Capturing Items for ReservationsCapturing Items for Reservations - + Staff members can capture items for reservations. - 1. - In the staff client, select Booking → Capture Resources.2. - Enter the barcode of the items to be captured. Click Capture.3. + 1. + In the staff client, select Booking → Capture Resources.2. + Enter the barcode of the items to be captured. Click Capture.3. A Capture Succeeded message will appear to the right. Information about the item will appear below the message. You can print this information as a receipt and add it to the item if desired. Picking Up ReservationsPicking Up Reservations - + Staff members can help users pick up their reservations. - 1. - In the staff client, select Booking → Pick Up Reservations2. - Enter the user’s barcode. Click Go.3. - The title available for pickup will appear. Highlight the title of the item to pick up, and click Pick Up.4. + 1. + In the staff client, select Booking → Pick Up Reservations2. + Enter the user’s barcode. Click Go.3. + The title available for pickup will appear. Highlight the title of the item to pick up, and click Pick Up.4. The screen will refresh to show that the patron has picked up the reservation. Returning ReservationsReturning Reservations - + Staff members can help users return their reservations. - 1. - In the staff client, select Booking → Return Reservations.2. + 1. 
+ In the staff client, select Booking → Return Reservations.2. 
 You can return the item by patron or item barcode. Choose Resource or Patron, enter the
- barcode, and click Go.3. 
- A pop up box will tell you that the item was returned. Click OK.4. 
+ barcode, and click Go.3. 
+ A pop up box will tell you that the item was returned. Click OK.4. 
 The screen will refresh to show the reservations that remain out and the resources that 
 have been returned.
 @@ -3302,12 +3302,12 @@
 staff client. It deals with maintaining servers, installation, upgrading, and configuring both system wide and local library 
 settings. Some sections require understanding of Linux system administration while others require an understanding of 
 your system hierarchy of locations and users. Many procedures explained in the following
- chapters are accomplished with Linux commands run from the 
+ chapters are accomplished with Linux commands run from the 
 terminal without a Graphical User Interface (GUI). In order to accomplish some of the tasks, prerequisite knowledge or experience will 
 be required and you may need to consult system administration documentation for your specific 
 Linux distribution if you have limited Linux system experience. A vast amount of free resources can be found on the web for various 
 experience levels. You might also consider consulting 
- PostgreSQL and 
- Apache documentation for a greater understanding 
+ PostgreSQL and 
+ Apache documentation for a greater understanding 
 of the software stack on which Evergreen is built.
 Chapter 15. System Requirements and Hardware ConfigurationsChapter 15. System Requirements and Hardware Configurations
 Report errors in this documentation using Launchpad.
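The terminal procedures referred to above generally follow the shape of the init-script fragment at the top of this patch: run `osrf_ctl.sh` as the `opensrf` user with `PERL5LIB` and `PATH` extended to the install prefix. A minimal sketch, assuming the default `/openils` prefix (the command is echoed rather than executed, since the real invocation only works on a configured Evergreen host):

```shell
#!/bin/sh
# Sketch of the service-control command used by the init script in this patch.
# /openils is the default Evergreen install prefix; adjust for your system.
OPENILS_BASE=/openils
ACTION=restart_all   # other actions include start_all and stop_all

# Echo the command instead of running it; the live invocation needs root
# privileges and a configured Evergreen host.
echo sudo -u opensrf /bin/bash -c \
  "PERL5LIB=\$PERL5LIB:${OPENILS_BASE}/lib/perl5 PATH=\$PATH:${OPENILS_BASE}/bin osrf_ctl.sh -l -a ${ACTION}"
```

Dropping the `echo` gives the live command; `-l` restricts the action to the local host, matching the init script shown earlier in this patch.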
@@ -3320,11 +3320,11 @@ 
 Server Minimum RequirementsServer Minimum Requirements 
 The following are the base requirements for setting up Evergreen on a test server: 
- •An available desktop, server or virtual image•1GB RAM, or more if your server also runs a graphical desktop•Linux Operating System•Ports 80 and 443 should be opened in your firewall for TCP connections to allow OPAC and staff 
+ •An available desktop, server or virtual image•1GB RAM, or more if your server also runs a graphical desktop•Linux Operating System•Ports 80 and 443 should be opened in your firewall for TCP connections to allow OPAC and staff 
 client connections to the Evergreen server. 
- 
- Debian and 
- Ubuntu are the most widely used 
+ 
+ Debian and 
+ Ubuntu are the most widely used 
 Linux distributions for installing Evergreen and most development takes place on Debian-based systems. 
 If you are new to Linux, it is strongly recommended that you install Evergreen on the latest stable server edition of Debian (http://www.debian.org/) 
@@ -3341,22 +3341,22 @@ 
 consortia to share one Evergreen system with hundreds of libraries with millions of records and millions of users, making the scalability of 
 Evergreen almost infinite. Here are some example scenarios for networked server configurations: 
- •A small library library with 1 location, under 25,000 items and a few thousand users could easily run Evergreen on a single server 
- (1 machine).•A college or university with 1 million items and 20,000 users could run an Evergreen system using several servers balancing the 
+ •A small library with 1 location, under 25,000 items and a few thousand users could easily run Evergreen on a single server 
+ (1 machine).•A college or university with 1 million items and 20,000 users could run an Evergreen system using several servers balancing the 
 load on their 
- system by spreading services over multiple servers. It should host their PostgreSQL 
+ system by spreading services over multiple servers. 
They should host their PostgreSQL 
 database on a separate server. They could also cluster the Evergreen services strategically to minimize or eliminate 
 any necessary downtime when upgrading Evergreen or other server software. Moreover, system redundancy will reduce the chance of 
- unplanned catastrophic downtime caused by system failure since Evergreen will be running over several machines.•A large library consortium with several public library systems and/or academic libraries with millions of users and items could run an 
+ unplanned catastrophic downtime caused by system failure since Evergreen will be running over several machines.•A large library consortium with several public library systems and/or academic libraries with millions of users and items could run an 
 Evergreen system over many servers with clusters for Evergreen services as well as a cluster for the PostgreSQL database. 
 The key to Evergreen scalability is in the OpenSRF configuration files 
- /openils/conf/opensrf.xml and 
- /openils/conf/opensrf_core.xml. 
+ /openils/conf/opensrf.xml and 
+ /openils/conf/opensrf_core.xml. 
 By configuring these files, an administrator could cluster Evergreen services over multiple hosts, change the host running a specific 
 service or change the host of the PostgreSQL database. 
- The default configuration of Evergreen in the installation instructions assumes a single localhost 
+ The default configuration of Evergreen in the installation instructions assumes a single localhost 
 server setup. For more complex multi-server clustered configurations, some server administration and database administration experience or knowledge 
 will be required. 
@@ -3366,17 +3366,17 @@ 
 Staff terminals connect to the central database using the Evergreen staff client, available for 
 download from The Evergreen download page. 
The staff client must be installed on each staff workstation and requires at minimum: 
- •Windows (XP, Vista, or 7), Mac OS X, 
- or Linux operating system•a reliable high speed Internet connection•512Mb of RAM•The staff client uses the TCP protocal on ports 80 and 443 to 
+ •Windows (XP, Vista, or 7), Mac OS X, 
+ or Linux operating system•a reliable high-speed Internet connection•512MB of RAM•The staff client uses the TCP protocol on ports 80 and 443 to 
 communicate with the Evergreen server. 
- Barcode ScannersBarcode Scanners 
+ Barcode ScannersBarcode Scanners 
 
- Evergreen will work with virtually any barcode scanner – 
+ Evergreen will work with virtually any barcode scanner – 
 if it worked with your legacy system it should work on Evergreen. 
- PrintersPrinters 
+ PrintersPrinters 
 
- Evergreen can use any printer configured for your terminal to print receipts, check-out slips, holds 
+ Evergreen can use any printer configured for your terminal to print receipts, check-out slips, holds 
 lists, etc. The single exception is spine label printing, which is still under development. Evergreen 
 currently formats spine labels for output to a label roll printer. If you do not have a roll printer, manual formatting may be required. 
 For more on configuring receipt printers, see Printer Settings. 
@@ -3413,7 +3413,7 @@ 
 The Evergreen server-side software has dependencies on particular versions of certain major software 
 sub-components. Successful installation of Evergreen software requires that software versions agree with those listed here: 
- Table 16.1. Evergreen Software DependenciesEvergreenOpenSRFPostgreSQL1.6.1.x1.4.08.2 / 8.31.6.0.x1.28.2 / 8.31.4.x1.08.1 / 8.21.2.x0.98.1 / 8.2 
+ Table 16.1. Evergreen Software Dependencies: Evergreen 1.6.1.x requires OpenSRF 1.4.0 and PostgreSQL 8.2 / 8.3; Evergreen 1.6.0.x requires OpenSRF 1.2 and PostgreSQL 8.2 / 8.3; Evergreen 1.4.x requires OpenSRF 1.0 and PostgreSQL 8.1 / 8.2; Evergreen 1.2.x requires OpenSRF 0.9 and PostgreSQL 8.1 / 8.2 
 Installing Server-Side SoftwareInstalling Server-Side Software 
 This section describes the installation of the major components of Evergreen server-side software. 
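The version pairings in Table 16.1 can be checked mechanically before installing. The shell sketch below is a hypothetical helper, not part of Evergreen: it tests a PostgreSQL version string against the 8.2 / 8.3 range the table lists for Evergreen 1.6.1.x. In practice you would obtain the string from `psql --version` or from `SELECT version();` as described later; here it is hard-coded for illustration.

```shell
# Hypothetical dependency check for Evergreen 1.6.1.x, per Table 16.1
# (OpenSRF 1.4.0, PostgreSQL 8.2 / 8.3). The version string is supplied
# directly for illustration; obtain it from `psql --version` in practice.
pg_version="8.3.9"
case "$pg_version" in
    8.2|8.2.*|8.3|8.3.*)
        echo "PostgreSQL $pg_version is listed for Evergreen 1.6.1.x" ;;
    *)
        echo "PostgreSQL $pg_version is NOT listed for Evergreen 1.6.1.x" ;;
esac
```

A `case` pattern match on the major.minor prefix is enough here because Table 16.1 only constrains major.minor versions, not patch releases.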
@@ -3428,10 +3428,10 @@ Installing OpenSRF 1.4.x On Ubuntu or DebianInstalling OpenSRF 1.4.x On Ubuntu or Debian - + - - + + This section describes the installation of the latest version of the Open Service Request Framework (OpenSRF), a major component of the Evergreen server-side software, on Ubuntu or Debian @@ -3441,7 +3441,7 @@ properly installed and configured. Do not continue with any further Evergreen installation steps until you have verified that OpenSRF has been successfully installed and tested. - + The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) platforms. OpenSRF 1.4.0 has been tested on Debian Etch (4.0), Debian Lenny (5.0) and @@ -3450,12 +3450,12 @@ either the root user, the opensrf user, or the postgres user. - • + • Debian -- To become the root user, issue the command su - and enter the password of the root user. - • + • Ubuntu -- To become the root user, issue the command sudo su - and enter the password of the @@ -3469,7 +3469,7 @@ the root user again, simply issue the command exit. - 1. + 1. Add New opensrf User As the root user, add the @@ -3477,30 +3477,30 @@ In the following example, the default shell for the opensrf user is automatically set to /bin/bash to inherit a reasonable environment: - + # as the root user: useradd -m -s /bin/bash opensrf passwd opensrf - 2. + 2. Download and Unpack Latest OpenSRF Version - + The latest version of OpenSRF can be found here: http://evergreen-ils.org/downloads/OpenSRF-1.4.0.tar.gz . As the opensrf user, change to the directory /home/opensrf then download and extract OpenSRF. The new subdirectory /home/opensrf/OpenSRF-1.4.0 will be created: - + # as the opensrf user: cd /home/opensrf wget http://evergreen-ils.org/downloads/OpenSRF-1.4.0.tar.gz tar zxf OpenSRF-1.4.0.tar.gz - 3. + 3. 
Install Prerequisites to Build OpenSRF In this section you will install and configure a set of prerequisites that will be @@ -3514,7 +3514,7 @@ distribution keywords table Table 16.2, “Keyword Targets for OpenSRF "make" Command” . For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would enter this command: make -f src/extras/Makefile.install ubuntu-lucid . - + # as the root user: cd /home/opensrf/OpenSRF-1.4.0 @@ -3526,16 +3526,16 @@ CPAN configuration prompt to allow it to automatically configure itself to download and install Perl modules from CPAN. The CPAN installer will ask you a number of times whether it should install prerequisite modules - say Yes. - 4. + 4. Build OpenSRF In this section you will configure, build and install the OpenSRF components that support other Evergreen services. - a. + a. Configure OpenSRF - + As the opensrf user, return to the new OpenSRF build directory and use the configure utility to prepare for the next @@ -3543,7 +3543,7 @@ include support for Python and Java, add the configuration options --enable-python and --enable-java, respectively: - + # as the opensrf user: cd /home/opensrf/OpenSRF-1.4.0 @@ -3552,14 +3552,14 @@ This step will take several minutes to complete. - b. + b. Compile, Link and Install OpenSRF As the root user, return to the new OpenSRF build directory and use the make utility to compile, link and install OpenSRF: - + # as the root user: cd /home/opensrf/OpenSRF-1.4.0 @@ -3567,7 +3567,7 @@ This step will take several minutes to complete. - c. + c. Update the System Dynamic Library Path You must update the system dynamic library path to force @@ -3578,7 +3578,7 @@ new library path, then run the command ldconfig to automatically read the file and modify the system dynamic library path: - + # as the root user: echo "/openils/lib" > /etc/ld.so.conf.d/osrf.conf @@ -3600,22 +3600,22 @@ to handle communications for these two domains. 
As the root user, edit the file /etc/hosts and add the following example domains: - - + + # as the root user: 127.0.1.2 public.localhost public 127.0.1.3 private.localhost private - e. + e. Change File Ownerships Finally, as the root user, change the ownership of all files installed in the directory /openils to the user opensrf: - + # as the root user: chown -R opensrf:opensrf /openils @@ -3625,11 +3625,11 @@ 5. Stop the ejabberd Service - + Before continuing with configuration of ejabberd you must stop that service. As the root user, execute the following command to stop the service: - + # as the root user: /etc/init.d/ejabberd stop @@ -3640,7 +3640,7 @@ beam or epmd you may need to perform the following commands to kill them: - + # as the root user: epmd -kill @@ -3648,7 +3648,7 @@ rm /var/lib/ejabberd/* echo 'ERLANG_NODE=ejabberd@localhost' >> /etc/default/ejabberd - 6. + 6. Edit the ejabberd configuration You must make several configuration changes for the @@ -3657,7 +3657,7 @@ As the root user, edit the file /etc/ejabberd/ejabberd.cfg and make the following changes: - a. + a. Change the line: {hosts, ["localhost"]}. @@ -3665,7 +3665,7 @@ {hosts, ["localhost", "private.localhost", "public.localhost"]}. - b. + b. Change the line: {max_user_sessions, 10} @@ -3677,21 +3677,21 @@ then change it to instead read: {access, max_user_sessions, [{10000, all}]} - c. + c. Change all three occurrences of: max_stanza_size to instead read: 2000000 - d. + d. Change both occurrences of: maxrate to instead read: 500000 - e. + e. Comment out the line: {mod_offline, []} @@ -3706,12 +3706,12 @@ As the root user, restart the ejabberd service to test the configuration changes and to register your users: - + # as the root user: /etc/init.d/ejabberd start - 8. + 8. Register router and opensrf as @@ -3725,11 +3725,11 @@ /etc/hosts in a previous step (see Step 4.d). 
The users include: - • + • the router user, to whom all requests to connect to an OpenSRF service will be routed; - • + • the opensrf user, which clients use to connect to OpenSRF services (you may name the user anything you like, but we use @@ -3740,7 +3740,7 @@ for the users router and opensrf on each domain (remember to replace NEWPASSWORD with the appropriate password): - + # as the root user: # Note: the syntax for registering a user with ejabberdctl is: @@ -3761,7 +3761,7 @@ execute the following commands to create the new configuration files /openils/conf/opensrf_core.xml and /openils/conf/opensrf.xml from the example templates: - + # as the opensrf user: cd /openils/conf @@ -3811,7 +3811,7 @@ public.localhost domains, these will already be set to the correct values. Otherwise, search and replace to match values for your own systems. - 11. + 11. Set location of the persistent database As the opensrf user, edit the @@ -3822,7 +3822,7 @@ to instead read: /tmp/persist.db Following is a sample modification of that portion of the file: - + <!-- Example of an app-specific setting override --> <opensrf.persist> <app_settings> @@ -3836,7 +3836,7 @@ In this section you will set up a special configuration file for each user who will need to run the srfsh (pronounced surf shell) utility. - + The software installation will automatically create the utility srfsh (surf shell), a command line diagnostic tool for testing and interacting with OpenSRF. It will be used @@ -3847,14 +3847,14 @@ to the home directory of each user who will use srfsh. For instance, do the following for the opensrf user: - + # as the root user: cp /openils/conf/srfsh.xml.example /home/opensrf/.srfsh.xml Edit each user's file ~/.srfsh.xml and make the following changes: - • + • Modify domain to be the router hostname (following our domain examples, private.localhost will give @@ -3862,21 +3862,21 @@ public.localhost will only allow access to those OpenSRF services that are publicly exposed). 
- • + • Modify username and password to match the opensrf Jabber user for the chosen domain - • + • Modify logfile to be the full path for a log file to which the user has write access - • + • Modify loglevel as needed for testing - • + • Change the owner of the file to match the owner of the home directory Following is a sample of the file: - + <?xml version="1.0"?> <!-- This file follows the standard bootstrap config file layout --> <!-- found in opensrf_core.xml --> @@ -3891,7 +3891,7 @@ <loglevel>4</loglevel> </srfsh> - 13. + 13. Modify the environmental variable PATH for the opensrf user @@ -3899,18 +3899,18 @@ environmental variable PATH by adding a new file path to the opensrf user's shell configuration file ~/.bashrc: - + # as the opensrf user: echo "export PATH=/openils/bin:\$PATH" >> ~/.bashrc - 14. + 14. Start OpenSRF As the root user, start the ejabberd and memcached services: - + # as the root user: /etc/init.d/ejabberd start @@ -3918,7 +3918,7 @@ As the opensrf user, start OpenSRF as follows: - + # as the opensrf user: osrf_ctl.sh -l -a start_all @@ -3929,14 +3929,14 @@ OpenSRF router , Perl , and C services. - • + • You can also start Evergreen without the -l flag, but the osrf_ctl.sh utility must know the fully qualified domain name for the system on which it will execute. That hostname was probably specified in the configuration file opensrf.xml which you configured in a previous step. - • + • If you receive an error message similar to osrf_ctl.sh: command not found, then your environment variable PATH does not include the @@ -3946,7 +3946,7 @@ add the following line: export PATH=$PATH:/openils/bin - 15. + 15. Test connections to OpenSRF Once you have installed and started OpenSRF, as the @@ -3954,7 +3954,7 @@ OpenSRF using the srfsh utility and trying to call the add method on the OpenSRF math service: - + # as the root user: /openils/bin/srfsh @@ -3969,14 +3969,14 @@ For other srfsh commands, type in help at the prompt. - 16. + 16. 
Stop OpenSRF After OpenSRF has started, you can stop it at any time by using the osrf_ctl.sh again. As the opensrf user, stop OpenSRF as follows: - + # as the opensrf user: osrf_ctl.sh -l -a stop_all @@ -3987,15 +3987,15 @@ DebianInstalling Evergreen 1.6.1.x On Ubuntu or Debian - - + + This section outlines the installation process for the latest stable version of Evergreen. In this section you will download, unpack, install, configure and test the Evergreen system, including the Evergreen server and the PostgreSQL database system. You will make several configuration changes and adjustments to the software, including updates to configure the system for your own locale, and some updates needed to work around a few known issues. - + The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) architectures. There may be differences between the Desktop and Server editions of Ubuntu. These instructions assume the Server @@ -4004,12 +4004,12 @@ either the root user, the opensrf user, or the postgres user. - • + • Debian -- To become the root user, issue the command su - and enter the password of the root user. - • + • Ubuntu -- To become the root user, issue the command sudo su - and enter the password of the @@ -4023,7 +4023,7 @@ root user again, simply issue the command exit. - 1. + 1. Install OpenSRF Evergreen software is integrated with and depends on the Open Service @@ -4036,7 +4036,7 @@ not continue with any further Evergreen installation steps until you have verified that OpenSRF has been successfully installed and tested. - 2. + 2. Download and Unpack Latest Evergreen Version The latest version of Evergreen can be found here: @@ -4045,7 +4045,7 @@ the directory /home/opensrf then download and extract Evergreen. 
The new subdirectory /home/opensrf/Evergreen-ILS-1.6.1.6 will be created: 
- 
+ 
 # as the opensrf user: 
 cd /home/opensrf 
@@ -4068,7 +4068,7 @@ 
 For example, to install the prerequisites for Ubuntu version 9.10 
 (Karmic Koala) you would enter this command: 
 make -f Open-ILS/src/extras/Makefile.install ubuntu-karmic. 
- 
+ 
 # as the root user: 
 cd /home/opensrf/Evergreen-ILS-1.6.1.6 
@@ -4078,7 +4078,7 @@ 
 4. 
 (OPTIONAL) Install the PostgreSQL Server 
- 
+ 
 Since the PostgreSQL server is usually a standalone server in multi-server 
 production systems, the prerequisite installer Makefile in the previous section 
 (see Step 3) 
@@ -4090,7 +4090,7 @@ 
 in the section called “Installing PostgreSQL from Source”, or you can visit the official web site 
 http://www.postgresql.org for more information. 
- 
+ 
 PostgreSQL versions 8.3 or 8.4 are the recommended versions to work with Evergreen version 1.6.1.6 
. If you have an older version of PostgreSQL, you should upgrade before installing Evergreen. To find your current version 
@@ -4099,7 +4099,7 @@ 
 SELECT version(); 
 to get detailed information about your version of PostgreSQL. 
- 5. 
+ 5. 
 Install Perl Modules on PostgreSQL Server 
 If PostgreSQL is running on the same system as your Evergreen software, 
@@ -4108,7 +4108,7 @@ 
 You will need to install several Perl modules on the other system. As the 
 root user, install the following Perl modules: 
- 
+ 
 # as the root user: 
 # first, ensure the gcc compiler is installed: 
@@ -4123,8 +4123,8 @@ 
 For more information on installing Perl modules visit the official 
 CPAN site. 
- 
- 6. 
+ 
+ 6. 
Update the System Dynamic Library Path You must update the system dynamic library path to force your system to recognize @@ -4132,14 +4132,14 @@ do this by creating the new file /etc/ld.so.conf.d/osrf.conf containing a new library path, then run the command ldconfig to automatically read the file and modify the system dynamic library path: - + # as the root user: echo "/usr/local/lib" >> /etc/ld.so.conf.d/osrf.conf echo "/usr/local/lib/dbd" >> /etc/ld.so.conf.d/osrf.conf ldconfig - 7. + 7. Restart the PostgreSQL Server If PostgreSQL is running on the same system as the rest of Evergreen, as @@ -4150,7 +4150,7 @@ execute the following command (remember to replace PGSQL_VERSION with your installed PostgreSQL version, for example 8.3): - + # as the opensrf user: /etc/init.d/postgresql-PGSQL_VERSION restart @@ -4163,7 +4163,7 @@ and linked later in Step 9. As the opensrf user, return to the Evergreen build directory and execute these commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 @@ -4177,7 +4177,7 @@ default Evergreen Staff Client. As the root user, return to the Evergreen build directory and use the make utility as shown below: - + # as the root user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 @@ -4195,13 +4195,13 @@ directory /openils/var/web/xul that points to the subdirectory /server of the new Staff Client build: - + # as the root user: cd /openils/var/web/xul ln -sf rel_1_6_1_6/server server - 10. + 10. Copy the OpenSRF Configuration Files In this step you will replace some OpenSRF configuration files that you set up in @@ -4212,7 +4212,7 @@ to opensrf. As the root user, execute the following commands: - + # as the root user: cd /openils/conf @@ -4223,10 +4223,10 @@ cp oils_web.xml.example oils_web.xml chown -R opensrf:opensrf /openils/ - 11. + 11. Create and Configure PostgreSQL Database - + In this step you will create the Evergreen database. 
In the commands below, remember to adjust the path of the contrib repository to match your PostgreSQL server @@ -4236,7 +4236,7 @@ the path would be /usr/share/postgresql/8.3/contrib/ . - a. + a. Create and configure the database @@ -4244,7 +4244,7 @@ As the postgres user on the PostgreSQL system create the PostgreSQL database, then set some internal paths: - + # as the postgres user: createdb evergreen -E UTF8 -T template0 @@ -4256,7 +4256,7 @@ and execute the SQL scripts as shown below (remember to adjust the paths as needed, where PGSQL_VERSION is your installed PostgreSQL version, for example 8.3). - + # as the postgres user: psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tablefunc.sql evergreen @@ -4272,7 +4272,7 @@ named evergreen and assign a password (remember to replace NEWPASSWORD with an appropriate new password): - + # as the postgres user: createuser -P -s evergreen @@ -4281,7 +4281,7 @@ Enter it again: NEWPASSWORD - c. + c. Create database schema In this step you will create the database schema and configure your @@ -4292,7 +4292,7 @@ the following commands and replace HOSTNAME, PORT, PASSWORD and DATABASENAME with appropriate values: - + # as the root user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 @@ -4310,17 +4310,17 @@ As the command executes, you may see warnings similar to: ERROR: schema SOMENAME does not exist (in fact, you may see one warning per schema) but they can be safely ignored. - If you are entering the above command on a single line, do not + If you are entering the above command on a single line, do not include the \ (backslash) characters. If you are using the bash shell, these should only be used at the end of a line at a bash prompt to indicate that the command is continued on the next line. - 12. + 12. Configure the Apache web server - + In this step you will configure the Apache web server to support Evergreen software. First, you must enable some built-in Apache modules and install some @@ -4328,14 +4328,14 @@ Certificate. 
Finally, you must make several changes to the Apache configuration file. - a. + a. Enable the required Apache Modules As the root user, enable some modules in the Apache server, then copy the new configuration files to the Apache server directories: - - + + # as the root user: a2enmod ssl # enable mod_ssl @@ -4346,14 +4346,14 @@ Module SOMEMODULE already enabled but you can safely ignore them. - b. + b. Copy Apache configuration files You must copy the Apache configuration files from the Evergreen installation directory to the Apache directory. As the root user, perform the following commands: - + # as the root user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 @@ -4372,7 +4372,7 @@ accept the warnings in the Staff Client and browser during testing and development. As the root user, perform the following commands: - + # as the root user: mkdir /etc/apache2/ssl @@ -4384,7 +4384,7 @@ server.crt and server.key will be created in the directory /etc/apache2/ssl . - This step generates a self-signed SSL certificate. You must install + This step generates a self-signed SSL certificate. You must install a proper SSL certificate for a public production system to avoid warning messages when users login to their account through the OPAC or when staff login through the Staff Client. For further information on @@ -4399,14 +4399,14 @@ /etc/apache2/sites-available/eg.conf . As the root user, edit the file and make the following changes: - • + • In the section <Directory "/openils/var/cgi-bin"> replace the line: Allow from 10.0.0.0/8 with the line: Allow from all - This change allows access to your configuration + This change allows access to your configuration CGI scripts from any workstation on any network. This is only a temporary change to expedite testing and should be removed after you have finished and successfully tested @@ -4415,7 +4415,7 @@ for further details on removing this change after the Evergreen installation is complete. 
- • + • Comment out the line: Listen 443 since it conflicts with the same declaration in @@ -4424,7 +4424,7 @@ Debian users should not do this since the conflict does not apply to that operating system. - • + • The following updates are needed to allow the logs to function properly, but it may break other Apache applications on your server: @@ -4444,18 +4444,18 @@ Debian Lenny, as the root user, edit the Apache configuration file and change the lines: - + export APACHE_RUN_USER=www-data export APACHE_RUN_GROUP=www-data to instead read: - + export APACHE_RUN_USER=opensrf export APACHE_RUN_GROUP=opensrf - • + • As the root user, edit the Apache configuration file @@ -4463,12 +4463,12 @@ modify the value for KeepAliveTimeout and MaxKeepAliveRequests to match the following: - + KeepAliveTimeout 1 MaxKeepAliveRequests 100 - • + • Further configuration changes to Apache may be necessary for busy systems. These changes increase the number of Apache server processes that are started to @@ -4480,7 +4480,7 @@ and modify the section related to prefork configuration to suit the load on your system: - + <IfModule mpm_prefork_module> StartServers 20 MinSpareServers 5 @@ -4491,7 +4491,7 @@ - e. + e. Enable the Evergreen web site Finally, you must enable the Evergreen web site. As the @@ -4499,7 +4499,7 @@ following Apache configuration commands to disable the default It Works web page and enable the Evergreen web site, and then restart the Apache server: - + # as the root user: # disable/enable web sites @@ -4551,7 +4551,7 @@ password for router user - 14. + 14. 
(OPTIONAL) Create Configuration Files for Users Needing srfsh When OpenSRF was installed in the section called “Installing OpenSRF 1.4.x On Ubuntu or @@ -4566,34 +4566,34 @@ Modify the OpenSRF Environment In this step you will make some minor modifications to the OpenSRF environment: - • + • Modify the permissions in the directory /openils/var/cgi-bin to make the files executable: - + # as the opensrf user: chmod 755 /openils/var/cgi-bin/*.cgi - • + • As the opensrf user, modify the shell configuration file ~/.bashrc for user opensrf by adding a Perl environmental variable, then execute the shell configuration file to load the new variables into your current environment. - In a multi-server environment, you must add any + In a multi-server environment, you must add any modifications to ~/.bashrc to the top of the file before the line [ -z "$PS1" ] && return . This will allow headless (scripted) logins to load the correct environment. - + # as the opensrf user: echo "export PERL5LIB=/openils/lib/perl5:\$PERL5LIB" >> ~/.bashrc . ~/.bashrc - 16. + 16. (OPTIONAL) Enable and Disable Language Localizations You can load translations such as Armenian (hy-AM), Canadian French @@ -4607,22 +4607,22 @@ In this section you will learn how to start the Evergreen services. For completeness, instructions for stopping Evergreen can be found later in the section called “Stopping Evergreen”. - 1. + 1. As the root user, start the ejabberd and memcached services as follows: - + # as the root user: /etc/init.d/ejabberd start /etc/init.d/memcached start - 2. + 2. As the opensrf user, start Evergreen as follows: - + # as the opensrf user: osrf_ctl.sh -l -a start_all @@ -4633,14 +4633,14 @@ OpenSRF router , Perl , and C services. - • + • You can also start Evergreen without the -l flag, but the osrf_ctl.sh utility must know the fully qualified domain name for the system on which it will execute. 
That hostname was probably specified in the configuration file opensrf.xml which you configured in a previous step. - • + • If you receive an error message similar to osrf_ctl.sh: command not found, then your environment variable PATH does not include the @@ -4649,7 +4649,7 @@ edit the configuration file ~/.bashrc and add the following line: export PATH=$PATH:/openils/bin - • + • If you receive an error message similar to Can't locate OpenSRF/System.pm in @INC ... BEGIN failed--compilation aborted, then your environment variable @@ -4660,7 +4660,7 @@ add the following line: export PERL5LIB=$PERL5LIB:/openils/lib/perl5 - 3. + 3. In this step you will generate the Web files needed by the Staff Client and catalog, and update the proximity of locations in the Organizational Unit @@ -4669,7 +4669,7 @@ changes to the library hierarchy. As the opensrf user, execute the following command and review the results: - + # as the opensrf user: cd /openils/bin @@ -4692,16 +4692,16 @@ Successfully updated the organization proximity Done - 4. + 4. As the root user, restart the Apache Web server: - + # as the root user: /etc/init.d/apache2 restart - If the Apache Web server was running when you started the OpenSRF + If the Apache Web server was running when you started the OpenSRF services, you might not be able to successfully log into the OPAC or Staff Client until the Apache Web server has been restarted. @@ -4720,7 +4720,7 @@ srfsh commands, type help at the prompt. 
As the opensrf user, execute the following commands to test your Evergreen connection: - + # as the opensrf user: /openils/bin/srfsh @@ -4748,11 +4748,11 @@ ------------------------------------ If this does not work, try the following: - • + • As the opensrf user, run the settings-tester.pl utility to review your Evergreen installation for any system configuration problems: - + # as the opensrf user: cd /home/opensrf @@ -4761,10 +4761,10 @@ If the output of settings-tester.pl does not help you find the problem, please do not make any significant changes to your configuration. - • + • Follow the steps in the troubleshooting guide in Chapter 21, Troubleshooting System Errors. - • + • If you have followed the entire set of installation steps listed here closely, you are probably extremely close to a working system. Gather your configuration files and log files and contact the @@ -4782,13 +4782,13 @@ version 3.0 and later on Ubuntu and Debian distributions). As the root user, start the Staff Client as shown: - + # as the root user: xulrunner /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client/build/application.ini A login screen for the Staff Client similar to this should appear: - + First, add the name of your Evergreen server to the field Hostname in the Server section. You will probably want to use 127.0.0.1. After adding the server name, click Re-Test @@ -4801,10 +4801,10 @@ Try to log into the Staff Client with the username admin and the password open-ils. If the login is successful, you will see the following screen: - + Otherwise, you may need to click 'Add SSL Exception' in the main window. You should see a popup window titled Add Security Exception: - + Click 'Get Certificate', then click 'Confirm Security Exception', then click 'Re-Test Server' in the main window and try to log in again. @@ -4817,7 +4817,7 @@ commands. Note the use of restart to force the new Evergreen modules to be reloaded even if the Apache server is already running. 
Any problems found with your configuration files should be displayed: - + # as the root user: apache2ctl configtest && /etc/init.d/apache2 restart @@ -4830,13 +4830,13 @@ Evergreen services. As the opensrf user, stop all Evergreen services by using the following command: - + # as the opensrf user # stop the server; use "-l" to force hostname to be "localhost" osrf_ctl.sh -l -a stop_all - You can also stop Evergreen services without the + You can also stop Evergreen services without the -l flag, but the osrf_ctl.sh utility must know the fully qualified domain name for the system on which it will execute. That hostname may have been specified in the configuration file opensrf.xml, which @@ -4848,7 +4848,7 @@ There are several additional steps you may need to complete after Evergreen has been successfully installed and tested. Some steps may not be needed (e.g., setting up support for Reports). - Remove temporary Apache configuration changes + Remove temporary Apache configuration changes You modified the Apache configuration file /etc/apache2/sites-available/eg.conf in an earlier step as a @@ -4856,7 +4856,7 @@ Step 12.d for further information). Those changes must now be reversed in order to deny unwanted access to your CGI scripts from users on other public networks. - + This temporary network update was done to expedite testing. You must correct @@ -4882,14 +4882,14 @@ system. Some popular projects are listed below; please review them for background information on why you need such a system and how you can provide it: - • + • http://www.openca.org/projects/openca/ - • + • http://sourceforge.net/projects/ejbca/ - • + • http://pki.fedoraproject.org - + The temporary SSL key was only created to expedite testing. 
You should install a proper SSL certificate for a public @@ -4897,7 +4897,7 @@ - (OPTIONAL) IP-Redirection + (OPTIONAL) IP-Redirection By default, Evergreen is configured so searching the OPAC always starts in the top-level (regional) library rather than in a second-level (branch) library. Instead, @@ -4917,7 +4917,7 @@ Finally, modify the Apache startup script /etc/apache2/startup.pl by uncommenting two lines as shown, then restarting the Apache server: - + # - Uncomment the following 2 lines to make use of the IP redirection code # - The IP file should contain a map with the following format: # - actor.org_unit.shortname <start_ip> <end_ip> @@ -4926,7 +4926,7 @@ use OpenILS::WWW::Redirect qw(/openils/conf/opensrf_core.xml); OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); - (OPTIONAL) Set Up Support For Reports + (OPTIONAL) Set Up Support For Reports Evergreen reports are extremely powerful but require some simple configuration. See Chapter 29, Starting and Stopping the Reporter Daemon for information on starting and @@ -4960,7 +4960,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Evergreen environment will execute more slowly than if it were a standalone system. However, it is still a reasonable architecture for smaller experimental systems or as a proof of concept. - Installing Virtualization Software + Installing Virtualization Software As described above, Evergreen can be installed on top of an emulated Linux environment which, in turn, @@ -4985,25 +4985,25 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); next five figures until the software has been successfully installed. The following example shows the installation of VirtualBox version 3.8.2 . - Figure 16.1. Starting the Windows installation of VirtualBox + Figure 16.1. Starting the Windows installation of VirtualBox - + - Figure 16.2. Welcome to VirtualBox setup wizard + Figure 16.2. Welcome to VirtualBox setup wizard - + - Figure 16.3. 
Accept the license agreement + Figure 16.3. Accept the license agreement - + - Figure 16.4. Waiting for installation to complete + Figure 16.4. Waiting for installation to complete - + - Figure 16.5. Installation is complete; start VirtualBox + Figure 16.5. Installation is complete; start VirtualBox - + At this point, VirtualBox has been installed and started for the first time. Please continue with @@ -5012,7 +5012,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); for further instructions on the next step: installing the Linux / Evergreen distribution. - Installing "VMware" Virtualization Software + Installing "VMware" Virtualization Software For instructions on installing VMware, visit the official website @@ -5030,13 +5030,13 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); two ways to continue with installing Linux and Evergreen software in the new virtualized environment: - 1. + 1. Manually install a Linux guest system, then manually install Evergreen on it (see the section called “Manually install Linux and Evergreen” for details) - 2. + 2. Download and install a prebuilt software image. The following example shows installation of a working Debian "Lenny" (5.0) Linux / Evergreen 1.6.1.4 system @@ -5081,10 +5081,10 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Table 16.6, “Linux / Evergreen Virtual Images” for a list of prebuilt software images that are currently available to download and install. - Evergreen servers and staff clients must match. For example, if + Evergreen servers and staff clients must match. For example, if you are running server version 1.4.0.1, you should use version 1.4.0.1 of the staff client. - DISCLAIMER: The following virtual images have been contributed + DISCLAIMER: The following virtual images have been contributed by members of the Evergreen community for the purposes of testing, evaluation, training, and development. Table 16.6. 
Linux / Evergreen Virtual ImagesLinux VersionEvergreen VersionImageCommentsDebian "Lenny" (5.0)1.6.1.4 @@ -5120,7 +5120,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); the section called “Installing "VirtualBox" Virtualization Software” for details). Continue with the following steps; refer to the accompanying figures for more information: - 1. + 1. Download software Download the prebuilt software image for Debian @@ -5129,7 +5129,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Create a temporary directory C:\temp, then extract the contents of the .ZIP file there. - 2. + 2. Add new virtual disk You must configure VirtualBox to recognized the new disk @@ -5142,26 +5142,26 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Figure 16.7, “Selecting the software image in Virtual Media Manager” and Figure 16.8, “New software image added to VirtualBox” for details. - 3. + 3. Start virtual machine wizard Click New to start the "Virtual Machine Wizard", then click Next to create a new virtual machine (VM) Figure 16.9, “Creating a new VM”). - 4. + 4. Define new virtual machine Define a name for the new virtual machine, set the operating system type, then click Next (see Figure 16.10, “Setting the VM name and OS type”). - 5. + 5. Set memory size Set the memory size (we chose a default value of 512Mb), then click Next (see Figure 16.11, “Setting memory size”). - 6. + 6. Attach virtual disk Attach the virtual hard disk image by setting the radio boxes @@ -5172,11 +5172,11 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Figure 16.13, “Finishing definition of new VM” and and Figure 16.14, “Summary of the new VM” for details. - 7. + 7. Start new virtual machine Click Start to boot the new VM. - 8. + 8. 
Manually start Evergreen After the new virtual machine boots up for the first time, @@ -5184,7 +5184,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); as the root user (see the section called “Starting Evergreen” for more information): - + su - # become the root user - enter "evergreen" for the password su - opensrf # as the opensrf user @@ -5201,7 +5201,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); the section called “Installing Evergreen 1.6.1.x On Ubuntu or Debian” in the standard Evergreen installation instructions. - 9. + 9. Start staff client The virtual machine just installed has been configured @@ -5218,7 +5218,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); start the desktop and Evergreen, you can connect to Evergreen using the built-in staff client with the following commands: - + # as the opensrf user cd /home/opensrf/Evergreen-ILS-1.6.1.4/Open-ILS @@ -5230,7 +5230,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); admin / open-ils and begin populating the data in your image. - 10. + 10. (OPTIONAL) Modify network connections This machine was configured with a NAT connection on the @@ -5239,7 +5239,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); tend to map virtual devices to real MAC addresses on their host, you might need to clear that mapping before making a connection. As root, run: - + # as the root user: rm /etc/udev/rules.d/70-persistent-net.rules @@ -5263,7 +5263,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); ifconfig command: /sbin/ifconfig eth1. The IP address will be listed in the inet_addr stanza as something like: inet addr: 192.168.56.101. - 12. + 12. 
Network connections for external staff clients While you can use the IP address to access the OPAC, the @@ -5279,7 +5279,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Windows, the hosts file can be found in C:\WINDOWS\System32\drivers\etc\hosts. - 13. + 13. External staff clients You can connect a staff client to the virtual Evergreen system @@ -5295,39 +5295,39 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Figure 16.6. Starting VirtualBox for the first time - + Figure 16.7. Selecting the software image in Virtual Media Manager - + Figure 16.8. New software image added to VirtualBox - + Figure 16.9. Creating a new VM - + Figure 16.10. Setting the VM name and OS type - + Figure 16.11. Setting memory size - + Figure 16.12. Setting up the Virtual Hard Disk - + Figure 16.13. Finishing definition of new VM - + Figure 16.14. Summary of the new VM - + @@ -5342,7 +5342,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Installing the Staff ClientInstalling the Staff Client - + Installing a Pre-Built Staff ClientInstalling a Pre-Built Staff Client A pre-built Staff Client is available for Windows, @@ -5350,7 +5350,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); each of these environments is described in the following sections. Installing on Windows - + In this section we describe the process of installing the Staff Client on the Microsoft Windows operating system. Visit the downloads section of the Evergreen website at @@ -5358,14 +5358,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); and find the standard Microsoft Windows Installer that contains the current version of the Staff Client. Download the Installer, then run it. A screen that looks similar to this should appear: - + Click 'Next' to continue through the guided install process. 
The Install Wizard will ask you to agree to the end-user license, ask you where to install the software, ask about where to place icons, and then will automatically install the software on your workstation. When you run the Staff Client for the first time, a screen similar to this should appear: - + First, add the name of your Evergreen server to the field Hostname in the Server section. For example, the PINES demo system is @@ -5376,18 +5376,18 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); server. The first thing you must do to the Staff Client on every workstation is to assign it a workstation name. This is covered in the section called “Assigning Workstation Names”. - Users must have the REGISTER_WORKSTATION permission and be assigned the appropriate working location(s) in order to + Users must have the REGISTER_WORKSTATION permission and be assigned the appropriate working location(s) in order to register a workstation. To add working locations to a user’s account: - 1. + 1. Retrieve the user through a patron search and select Other → User Permission Editor and select the boxes for the locations necessary. Save the user record. - + Making modifications to Working Locations while changing permission settings does not work – when this workflow is performed, permission changes will not be applied to the database. - 2. + 2. Alternately, from the Admin menu, select User Permission Editor and retrieve the user by barcode. Make changes to working locations as described above. @@ -5395,14 +5395,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Installing on Mac OS - + This section describes Mac OS packages and related versions of XULrunner that can be used to run the Staff Client in a Mac OS environment. - Evergreen Version 1.2.3.0 + Evergreen Version 1.2.3.0 - 1. + 1. A Mac OS package that contains an early version of the Staff Client (version 1.2.3.0) for use with XULrunner is available. 
You can find @@ -5417,7 +5417,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); (e.g., version 1.9.2.13) have replaced version 1.8.0.4, which has known security holes and is not recommended for applications that deal with public web content. - 2. + 2. A Mac OS Installation package for Staff Client version 1.2.3.0 is available from Evergreen Indiana. Download and install it from @@ -5425,7 +5425,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); evergreen_osx_staff_client_1_2_3.zip . - 3. + 3. To upgrade to a more recent version of the Staff Client, you can copy the build directory from a working Windows installation of @@ -5438,25 +5438,25 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); folder within the Open-ILS package in your Applications directory on the Mac, overwriting files with the same names. - 4.Drag the application's icon to your toolbar for easier + 4.Drag the application's icon to your toolbar for easier access. When you run the Staff Client installer, a screen will appear that looks similar to this: - + Click 'Continue', accept the license, then finish the installation. The application will be located at the destination you selected during installation. You will then be able to drag the application into your toolbar for easier access. - + - Running directly using XULrunner + Running directly using XULrunner - + You must install an appropriate version of XULrunner to match the Evergreen version. See the following table for the recommended version of XULrunner: Table 17.1. 
Evergreen / XULrunner DependenciesEvergreen VersionXULRunner VersionEvergreen 1.6.x.xXULrunner 1.9.x.xEvergreen 1.4.x.xXULrunner 1.8.0.4 or XULrunner 1.8.0.3Evergreen 1.2.x.xXULrunner 1.8.0.4 or XULrunner 1.8.0.3 - If you have issues removing previously installed + If you have issues removing previously installed XULrunner versions see the section called “(OPTIONAL) Removing previously installed XULRunner versions” for further information. @@ -5466,7 +5466,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); ~/Desktop/Evergreen_Staff_Client). Remember to call XULrunner with the full path to the binary, followed by the install command and the path to the client data: - + /Library/Frameworks/XUL.framework/xulrunner-bin --install-app ~/Desktop/Evergreen_Staff_Client @@ -5476,7 +5476,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); (OPTIONAL) Removing previously installed XULRunner versions - + If you already have a newer version of XULrunner installed, per the release notes, you will need to remove the entire directory @@ -5487,30 +5487,30 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); If file /Library/Receipts/xulrunner-ver-mak.pkg does not exist (possibly in newer Mac OS releases), you need to flush the file receiptdb. - If you install a newer version of XULrunner + If you install a newer version of XULrunner over a previous (older) install, the older install is not removed but the symlinks are changed to the newer one. - (OPTIONAL) Flush Receiptdb file: + (OPTIONAL) Flush Receiptdb file: First, get the package identifier, then purge/forget the build that was initially installed: - + sudo pkgutil --pkgs > /tmp/pkgs.txt sudo pkgutil --forget org.mozilla.xulrunner - It may not be necessary to edit the file + It may not be necessary to edit the file /Library/Receipts/InstallHistory.plist after deleting the folder XUL.framework. 
See http://lists.apple.com/archives/Installer-dev/2009/Jul/msg00008.html for more information. - Creating an APP file: Staff Client and XULrunner Bundled + Creating an APP file: Staff Client and XULrunner Bundled An APP file is basically a folder. Start with a folder stucture like this: - + Evergreen.app __Contents @@ -5519,18 +5519,18 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); ____MacOS Create an APP folder structure with the following commands: - + mkdir -p Evergreen.app/Contents/Frameworks mkdir -p Evergreen.app/Contents/Resources mkdir -p Evergreen.app/Contents/MacOS - 1. + 1. Create a new file in the folder Evergreen.app/Contents/Info.plist containing the following data (adjust for your version of Evergreen): - + <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> @@ -5556,16 +5556,16 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); </dict> </plist> - 2.Download and install an appropriate + 2.Download and install an appropriate Mac OS package of XULrunner from the Mozilla website https://developer.mozilla.org/en/xulrunner (see - Table 17.1, “Evergreen / XULrunner Dependencies” for recommendations).3. + Table 17.1, “Evergreen / XULrunner Dependencies” for recommendations).3. Make a copy of the folder /Library/Frameworks/XUL.Framework inside your APP file. It should look something like this: - + Evergreen.app/ __Contents/ @@ -5578,13 +5578,13 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); ______libxpcom.dylib -> Versions/Current/libxpcom.dylib ______xulrunner-bin -> Versions/Current/xulrunner-bin - 4.Copy + 4.Copy XUL.Framework/Versions/Current/xulrunner into the folder Evergreen.app/MacOS - (do not symlink; copy the file).5. + (do not symlink; copy the file).5. 
Make Evergreen.app/Resources the root of your Evergreen application files like this: - + Evergreen.app/ __Contents/ @@ -5595,14 +5595,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); ______components/ ______etc. - 6.Put a Mac + 6.Put a Mac format icon file named Evergreen.icns in Resources. Installing on Linux - Quick Upgrade of the Staff Client + Quick Upgrade of the Staff Client A Linux Staff Client is automatically built on the server as part of the normal make install process for Evergreen server-side @@ -5612,21 +5612,21 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Execute the following commands, replacing USER, WORKSTATION, and SOME_PATH with appropriate values: - + cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client scp -r ./build USER@WORKSTATION:/SOME_PATH/ You should test the newly copied Staff Client on the remote workstation. Log into the workstation and execute the following command: - + xulrunner /SOME_PATH/build/application.ini - Building the Staff Client on the Server + Building the Staff Client on the Server - + A Linux Staff Client is automatically built on the server as part of the normal make install process for Evergreen server-side software. See Step 9 for details of @@ -5650,14 +5650,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); directory containing the server components of the Staff Client, or the name of a symbolic link to that directory. 
As the root user, make the changes as follows: - + # as the root user: cd /openils/var/web/xul ln -s SERVER_BUILD_ID/ CLIENT_BUILD_ID - Building the Staff Client on a Client Machine + Building the Staff Client on a Client Machine This section is directed toward end-users who wish to use Linux rather than Windows for client machines, but have limited @@ -5669,7 +5669,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); other distributions (the instructions should work as-is for Ubuntu or Ubuntu derivatives). - 1. + 1. Prerequisites Both subversion and @@ -5682,16 +5682,16 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); apt-get. For subversion, select the latest version; for XULrunner, select version 1.8.1.4-2ubuntu5. - + # as the root user: sudo apt-get install subversion sudo apt-get install xulrunner - 2. + 2. Download the Source Code - • + • Determine which version is needed For most end-users, a specific version is required to communicate properly with the Evergreen server. Check @@ -5716,7 +5716,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); appropriate. For example, as you look through the tag list, notice the tag named 'rel_1_6_1_6'. This is the tag you need; make a note of it for the next step. - • + • Download the Code As the opensrf @@ -5724,7 +5724,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); to the directory in which you wish to download the Staff Client. Use the following commands to download the proper version of the source code by tag name: - + # as the opensrf user: cd /DOWNLOAD/DIRECTORY @@ -5733,7 +5733,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Remember to change "rel_1_6_1_6" to the appropriate tag for your installation. - 3. + 3. 
Build the Staff Client In the following example, navigate to the directory in @@ -5749,7 +5749,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); commands accordingly. As the opensrf user, run the following commands to build the Staff Client: - + # as the opensrf user: wget http://evergreen-ils.org/downloads/Evergreen-ILS-1.6.1.6.tar.gz @@ -5759,19 +5759,19 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); cd ./Open-ILS/xul/staff_client/ make STAFF_CLIENT_BUILD_ID='rel_1_6_1_6' install - 4. + 4. Run the Staff Client As the opensrf user, navigate to the build/ subdirectory and run the following command: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client/build xulrunner application.ini - 5. + 5. (OPTIONAL) Clean Up / Create Shortcuts The source code download included many files that are @@ -5783,7 +5783,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); to replace DOWNLOAD_DIRECTORY and STAGING_DIRECTORY with the appropriate paths): - + # as the opensrf user: mkdir ~/STAGING_DIRECTORY @@ -5792,7 +5792,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Test the Staff Client to verify that all necessary files were copied to the staging directory: - + # as the opensrf user: cd ~/STAGING_DIRECTORY/staff_client/build @@ -5800,13 +5800,13 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); If there were no problems, then finish the cleanup by removing the original download directory as shown: - + # as the opensrf user: rm -r -f ~/DOWNLOAD_DIRECTORY Finally, the command: - + # as the opensrf user: xulrunner ~/STAGING_DIRECTORY/staff_client/build/application.ini @@ -5817,9 +5817,9 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Desktop → StartMenu → K-Menu - Using Wine to Install on Linux + Using Wine to Install on Linux - + The Linux application Wine is another alternative if you wish to install the 
packaged Windows versions rather than manually @@ -5833,28 +5833,28 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); As the root user, use apt-get to install the package for Wine. You can also use synaptic, the graphical user interface. - 1. + 1. Install wine: - + # as the root user: sudo apt-get install wine - 2. + 2. Visit the downloads section of the Evergreen website at http://www.evergreen-ils.org/downloads.php and find the Microsoft Windows Installer that contains the desired version of the Staff Client. Download the installer and place it in a temporary directory. - 3. + 3. As the opensrf user, navigate to the temporary directory where you downloaded the Windows installer file, then execute it with the wine application (remember to replace VERSION with the release number of the Staff Client you downloaded): - + # as the opensrf user: cd /TEMP_DIRECTORY @@ -5868,7 +5868,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); select Default Settings and choose Windows XP from the drop-down menu, then click 'Apply'. - 4. + 4. Launch the Staff Client A new entry for the Staff Client should now appear somewhere in the All Applications menu of @@ -5884,9 +5884,9 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); - Building the Staff ClientBuilding the Staff Client + Building the Staff ClientBuilding the Staff Client - + You can also manually build the Staff Client by using the make utility in the Staff Client source directory (e.g., the directory /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -5894,14 +5894,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); build special versions of the Staff Client on a Linux system. 
Following is a list of variables that you can pass to make to influence the manual build process: - + Build Variable STAFF_CLIENT_BUILD_ID During the normal make install Evergreen server-side software build process, the variable defaults to an automatically generated date/time string, but you can also override the value of BUILD_ID. You could use the following commands during the normal install process: - + # as the root user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 @@ -5912,14 +5912,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); As the opensrf user, execute the following commands to build the Staff Client (remember to replace NEW_VERSION with an appropriate value): - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client make STAFF_CLIENT_BUILD_ID=NEW_VERSION build - + Build Variable STAFF_CLIENT_VERSION During the normal make install Evergreen server-side @@ -5929,7 +5929,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); automatically generated. You can override the value of VERSION similarly to the BUILD_ID. You could use the following commands during the normal install process: - + # as the root user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 @@ -5944,14 +5944,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); versions. As the opensrf user, execute the following commands to build the Staff Client: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client make STAFF_CLIENT_VERSION=0mytest.200 build - + Build Variable STAFF_CLIENT_STAMP_ID During the normal make install Evergreen @@ -5961,7 +5961,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); client-side customizations. You can override the value of STAMP_ID similarly to the BUILD_ID. 
You could use the following commands during the normal install process: - + # as the root user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 @@ -5971,7 +5971,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); source directory with a different STAMP_ID. As the opensrf user, execute the following commands to build the Staff Client: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -5979,15 +5979,15 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); - Advanced Build OptionsAdvanced Build Options + Advanced Build OptionsAdvanced Build Options - + In addition to the basic options listed above, there are a number of advanced options for building the Staff Client. Most are target names for the make utility and require that you build the Staff Client from the staff_client directory. See the following table for a list of possible make target keywords: - Table 17.2. Keywords For Advanced Build OptionsKeywordDescriptionclientsRuns "make win-client", "make linux-client", and "make + Table 17.2. Keywords For Advanced Build OptionsKeywordDescriptionclientsRuns "make win-client", "make linux-client", and "make generic-client" individuallyclient_dirBuilds a client directory from the build directory, without doing a rebuild. The same as "copy everything but server/".client_appPrerequisite "client_dir"; removes "install.rdf" from @@ -6006,7 +6006,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); updates directory as well; extension-updates-client doesn't exist. Following are descriptions of other special build options: - Developer Build + Developer Build You can create a so-called developer build of the Staff Client by substituting devbuild for @@ -6015,14 +6015,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); extra configuration file that enables some developer options. 
As the opensrf user, run the following commands from the Staff Client source directory: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client make devbuild - Compressed Javascript + Compressed Javascript You can execute the Google Closure Compiler utility to automatically review and compress Javascript code after the build @@ -6033,7 +6033,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); http://code.google.com/closure/compiler. As the opensrf user, run the following commands from the Staff Client source directory: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6044,7 +6044,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); As the opensrf user, run the following make command from the Staff Client source directory (the order of options is important): - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6056,7 +6056,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); You can override the host used to check for automatic Staff Client updates by specifying the AUTOUPDATE_HOST option. You could use the following commands during the normal install process: - + # as the root user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 @@ -6067,7 +6067,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); update checking. As the opensrf user, execute the following commands to build the Staff Client: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6077,7 +6077,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); the section called “Staff Client Automatic Updates”. 
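The Google Closure Compiler used by the compressed-Javascript build is a Java application, so checking for a Java runtime first can save an aborted make run. A preflight sketch; have_cmd is our helper name, not part of the Evergreen build system:

```shell
# Sketch: confirm a command exists on PATH before starting a long build.
have_cmd() {
    command -v "$1" >/dev/null 2>&1
}
# usage, before running the compressed-Javascript make step:
#   have_cmd java || echo "install a Java runtime first" >&2
```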
- Installing and Activating a Manually Built Staff ClientInstalling and Activating a Manually Built Staff Client + Installing and Activating a Manually Built Staff ClientInstalling and Activating a Manually Built Staff Client The Staff Client is automatically built, installed and activated as part of the normal makeinstall process for Evergreen @@ -6089,7 +6089,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); in the directory /openils/var/web/xul, as the opensrf user execute the following commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6097,12 +6097,12 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); cp -R build/server "/openils/var/web/xul/$(cat build/BUILD_ID)" - Packaging the Staff ClientPackaging the Staff Client + Packaging the Staff ClientPackaging the Staff Client Once you have built the Staff Client, you can create several forms of special client packages by using a modified make command in the staff_client directory. - Packaging a Generic Client + Packaging a Generic Client This build creates a Staff Client packaged as an XPI file suitable for use with the --install-app parameter of XULrunner. @@ -6110,7 +6110,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); installed on your system. As the opensrf user, execute the following commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6118,7 +6118,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); The output file evergreen_staff_client.xpi will be created. - Packaging a Windows Client + Packaging a Windows Client This build creates a Staff Client packaged as a Windows executable. It requires that @@ -6143,10 +6143,10 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); utility used to modify resources within 32-bit Windows executables. 
Some useful icon ID strings include the following: - Table 17.3. Icon IDs for Packaging a Windows ClientIDI_APPICONTray icon32512Default window icon + Table 17.3. Icon IDs for Packaging a Windows ClientIDI_APPICONTray icon32512Default window icon As the opensrf user execute the following commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6154,7 +6154,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); The output file evergreen_staff_client_setup.exe will be created. - Packaging a Linux Client + Packaging a Linux Client This build creates a Staff Client packaged as a compressed tar archive file with XULrunner @@ -6162,7 +6162,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); bzip2 utility installed on your system. As the opensrf user, execute the following commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6170,14 +6170,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); The output file evergreen_staff_client.tar.bz2 will be created. - Packaging a Firefox Extension + Packaging a Firefox Extension This build creates a Staff Client packaged as a Firefox extension. It requires that you already have the zip utility installed on your system. As the opensrf user, execute the following commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6188,11 +6188,11 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Staff Client Automatic UpdatesStaff Client Automatic Updates - + It is possible to set up support for automatic Staff Client updates, either during the normal Evergreen server-side build process, or by manually building the Staff Client with certain special options. - + Automatic update server certificate requirements are more strict than normal server requirements. 
Firefox and XULrunner will both ignore any automatic update @@ -6201,7 +6201,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); WORK. In addition, automatic updates have special requirements for the file update.rdf: - 1.It must be served from an SSL server, or2.It must be signed with the + 1.It must be served from an SSL server, or2.It must be signed with the https://developer.mozilla.org/en/McCoy tool. You can pre-install the signing key into the file install.rdf directly, or install it into a copy as @@ -6209,27 +6209,27 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); into the build instead of the original file install.rdf. - Autoupdate Host + Autoupdate Host You can manually set the name of the automatic update host. If you do not set the name then, by default, the Staff Client will not include an automatic update preference. You can set the autoupdate host name as follows: - • + • At configuration time during the normal make install process for Evergreen server-side software. You can do this when you first configure the Evergreen server-side software (see Step 8). As the opensrf user, execute the following commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6 ./configure --prefix=/openils --sysconfdir=/openils/conf --with-updateshost=hostname make - • + • During a manual Staff Client build process. 
You can override the variable AUTOUPDATE_HOST=hostname and manually build the @@ -6245,7 +6245,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); As the opensrf user, execute the following commands to build the Staff Client (remember to replace SOME_URL with an appropriate value): - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6253,9 +6253,9 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); - Building Updates + Building Updates - + Similar to building clients, you can use the targets generic-updates, win-updates, linux-updates, and extension-updates @@ -6273,19 +6273,19 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); As the opensrf user, change directory to the Staff Client source directory, then execute the following commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client Command to build all updates at once: - + # as the opensrf user: make updates commands to build updates individually: - + # as the opensrf user: make generic-updates @@ -6294,9 +6294,9 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); make extension-updates - Building updates with clients + Building updates with clients - + To save time and effort you can build updates and manual download clients at the same time by adding the phrase "-client" to each target name (for example, you could specify updates-client to build @@ -6309,14 +6309,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); As the opensrf user, execute one of the following commands: To build all updates at once: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client make updates-client To build updates individually: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client @@ -6325,9 +6325,9 @@ 
OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); make linux-updates-client - Activating the Update Server + Activating the Update Server - + This section reviews scripts associated with the update server, and requires some final adjustments to file permissions. The Apache example configuration creates an updates @@ -6348,7 +6348,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); manualupdate.html, and update.rdf, as the root user execute the following commands: - + # as the root user: cd /openils/var/updates/pub @@ -6356,7 +6356,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); - Other tipsOther tips + Other tipsOther tips Multiple workstations on one install @@ -6376,14 +6376,14 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); start menu. As the opensrf user, execute the following commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client make NSIS_EXTRAOPTS=-DPROFILES win-client - Multiple Staff Clients + Multiple Staff Clients It may be confusing if you are not careful, but you can log in to multiple Evergreen servers at the same time, or a single Evergreen server @@ -6401,7 +6401,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); Running the Staff ClientRunning the Staff Client - + You can run the Staff Client on a Linux system by using the XULrunner application (installed automatically and by default with Firefox Version 3.0 and later on @@ -6410,7 +6410,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); For example, if the source files for the Evergreen installation are in the directory /home/opensrf/Evergreen-ILS-1.6.1.6/ you can start the Staff Client as shown in the following example: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/xul/staff_client/build @@ -6418,21 +6418,21 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); 
Assigning Workstation NamesAssigning Workstation Names - + The Staff Client must be assigned to a library and given a unique name before it will connect fully to the Evergreen server. The only restriction is that the workstation's name must be unique within the assigned library. Make sure to select a workstation name that you will remember later, one that reflects the role, purpose, and/or location of a particular computer. These names will come up later in statistical reporting, and can also be handy when troubleshooting. - + In order to assign a workstation a name, a user with appropriate permissions must login to the Staff Client. In PINES, the local system administrator (OPSM) has the ability to assign workstation names in their library system. Library managers (LIBM's) have the ability within their branch. To assign a workstation a name, login to the system. You will be prompted to assign the workstation a library and a name: - + Select the library this workstation physically operates in from the drop down menu. In this example, we have selected "MGRL-MA". Type in a friendly name for the workstation. In this example, we are installing the Staff Client on the @@ -6440,30 +6440,30 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); 'Register'. Once you have registered your workstation with the server, your screen will look like this: - + You are now ready to log into the Staff Client for the first time. Type in your password again, and click 'Login'. Running the Staff Client Over An SSH TunnelRunning the Staff Client Over An SSH Tunnel - + You can configure the Staff Client to communicate with the Evergreen server over an SSH tunnel using a SOCKS 5 proxy server. There are several reasons for sending network traffic for the Staff Client through an SSH proxy: - • + • Firewalls may prevent you from reaching the Evergreen server. 
This may happen when you are connecting the Staff Client to a test server that should not be available generally, or it may be the result of network design priorities other than ease of use. - • + • You may wish to improve security in situations where Staff Client traffic may be susceptible to network eavesdropping. This is especially true when staff machines connect via wireless links to the network. - Setting Up an SSH Tunnel + Setting Up an SSH Tunnel You will need a server that allows you to log in via SSH and has network access to the @@ -6476,34 +6476,34 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); a free telnet/SSH client. Following are instructions for setting up an SSH session using the PuTTY utility: - - 1. + + 1. Using the menu on the left, find the section: Connection → SSH → Tunnels - 2. + 2. In the section on the right labeled "Source port", enter 9999. - 3. + 3. Set the checkbox "Dynamic". Do not enter anything in the "Destination" text entry box. - 4. + 4. Click 'Add' and notice that "D9999" now appears in the section labeled "Forwarded ports". - 5. + 5. Use the menu on the left, find the "Session" section, then enter the host name of the SSH server. - 6. + 6. A pop-up window will open to allow you to enter your username and password. Once you are logged in, the tunnel is @@ -6514,7 +6514,7 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); setting up SSH for other client operating systems. - Configuring the Staff Client to Use the SSH Tunnel + Configuring the Staff Client to Use the SSH Tunnel In order to tell the Staff Client that all traffic should be sent through the SSH tunnel just configured, @@ -6523,41 +6523,41 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); on a Windows system. Search this file for the word socks to find the appropriate section for the following changes. 
- + Make the following changes: - • + • Change the value of network.proxy.socks from "" to localhost. - • + • Change the value of network.proxy.socks_port from 0 to 9999. - + If everything is working correctly, you should now be able to run the Staff Client and all its data will be sent encrypted through the SSH tunnel you have just configured. - Navigating a Tabbed InterfaceNavigating a Tabbed Interface + Navigating a Tabbed InterfaceNavigating a Tabbed Interface Like many popular current web browsers and other applications, the Staff Client uses a "tabbed" interface. Tabs allow you to have several pages open at the same time in a single window. This is easier to manage on your computer screen than multiple windows, since you can easily switch between tabs in the same window. - + The "tabs" appear below the menu bar in the Staff Client with a descriptive title. Simply select a tab to bring it to the front and view the page displayed in the tab. You can use tabs to have access to multiple things all at the same time: patron records and searches, bibliographic records and searches, circulation or cataloging interfaces - anything at all in the Staff Client. - •Create a new tab by pressing + •Create a new tab by pressing Ctrl+T on the keyboard or selecting: File → New Tab - from the menu.•Close a tab by pressing + from the menu.•Close a tab by pressing Ctrl+W on the keyboard or selecting File → Close Tab - from the menu.•Switch tabs by pressing + from the menu.•Switch tabs by pressing Ctrl+Tab on the keyboard or selecting the tab in the tab bar. @@ -6573,123 +6573,123 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt'); service interruptions. All of the steps in this chapter are to be completed from the command line. In the following instructions, you are asked to perform certain steps as either the root or opensrf user. 
- •Debian: To become the root user, issue the su command and enter the password of the - root user.•Ubuntu: To become the root user, issue the sudo su command and enter the password of your current user. + •Debian: To become the root user, issue the su command and enter the password of the + root user.•Ubuntu: To become the root user, issue the sudo su command and enter the password of your current user. To switch from the root user to a different user, issue the su - [user] command; for example, su - opensrf. Once you have become a non-root user, to become the root user again simply issue the exit command. In the following instructions, /path/to/OpenSRF/ represents the path to the OpenSRF source directory. Backing Up DataBacking Up Data - 1. + 1. As root, stop the Apache - web server. - 2. + web server. + 2. As the opensrf user, stop all Evergreen and OpenSRF services: - osrf_ctl.sh -l -a stop_all - 3. + osrf_ctl.sh -l -a stop_all + 3. Back up of the /openils directory. - 4. + 4. Back up the evergreen database. Upgrading OpenSRF to 1.6Upgrading OpenSRF to 1.6 - - 1. + + 1. As the opensrf user, download and extract the source files for OpenSRF 1.6: - + wget http://open-ils.org/downloads/OpenSRF-1.6.2.tar.gz tar xzf OpenSRF-1.6.2.tar.gz A new directory OpenSRF-1.6.2 is created. - For the latest edition of OpenSRF, check the Evergreen download page at + For the latest edition of OpenSRF, check the Evergreen download page at http://www.open-ils.org/downloads.php. - 2. + 2. As the root user, install the software prerequisites using the automatic prerequisite installer. 
- + aptitude install make cd /home/opensrf/OpenSRF-1.6.2 Replace [distribution] below with the following value for your distribution: - • + • debian-etch for Debian Etch (4.0) - - • + + • debian-lenny for Debian Lenny (5.0) - • + • ubuntu-hardy for Ubuntu Hardy Heron (8.04) - - • + + • ubuntu-intrepid for Ubuntu Intrepid Ibex (8.10) - • + • ubuntu-jaunty for Ubuntu Jaunty Jackalope (9.04) - • + • ubuntu-karmic for Ubuntu Karmic Koala (9.10) - • + • ubuntu-lucid for Ubuntu Lucid Lynx (10.04) - • + • centos for CentOS 5 - + cd /path/to/OpenSRF make -f src/extras/Makefile.install [distribution] This will install a number of packages required by OpenSRF on your system, - including some Perl modules from CPAN. You can type no to the initial CPAN + including some Perl modules from CPAN. You can type no to the initial CPAN configuration prompt to allow it to automatically configure itself to download and install Perl modules from CPAN. The CPAN installer will ask you a number of times whether it should install prerequisite modules - type yes. - 3. + 3. As the opensrf user, configure and compile OpenSRF: - You can include the –enable-python and –enable-java configure options if - you want to include support for Python and Java - , respectively. - + You can include the –enable-python and –enable-java configure options if + you want to include support for Python and Java + , respectively. + cd /home/opensrf/OpenSRF-1.6.2 ./configure --prefix=/openils --sysconfdir=/openils/conf make - 4. + 4. As the root user, return to your OpenSRF build directory and install OpenSRF: - + cd /home/opensrf/OpenSRF-1.6.2 make install - 5. + 5. As the root user, change the ownership of the installed files to the opensrf user: - chown -R opensrf:opensrf /openils - 6. + chown -R opensrf:opensrf /openils + 6. 
Restart and Test OpenSRF - + osrf_ctl.sh -l -a start_all /openils/bin/srfsh srfsh# request opensrf.math add 2 2 You should see output such as: - + Received Data: 4 ------------------------------------ @@ -6706,86 +6706,86 @@ srfsh# Upgrade Evergreen from 1.4 to 1.6.1Upgrade Evergreen from 1.4 to 1.6.1 - 1. + 1. As the opensrf user, download and extract Evergreen 1.6.2.0 - + wget http://open-ils.org/downloads/Evergreen-ILS-1.6.1.5.tar.gz tar xzf Evergreen-ILS-1.6.1.5.tar.gz - For the latest edition of Evergreen check the Evergreen download page at + For the latest edition of Evergreen check the Evergreen download page at http://www.open-ils.org/downloads.php and adjust upgrading instructions accordingly. - 2. + 2. As the root user, install the prerequisites: - cd /home/opensrf/Evergreen-ILS-1.6.1.5 + cd /home/opensrf/Evergreen-ILS-1.6.1.5 On the next command, replace [distribution] with one of these values for your distribution of Debian or Ubuntu: - • - debian-etch for Debian Etch (4.0) - • + • + debian-etch for Debian Etch (4.0) + • debian-lenny for Debian Lenny (5.0) - • + • ubuntu-hardy for Ubuntu Hardy Heron - (8.04) - • + (8.04) + • ubuntu-intrepid for Ubuntu Intrepid Ibex (8.10) - • + • ubuntu-jaunty for Ubuntu Jaunty Jackalope (9.04) - • + • ubuntu-karmic for Ubuntu Karmic Koala (9.10) or Ubuntu Lucid Lynx (10.04) - make -f Open-ILS/src/extras/Makefile.install [distribution] - 3. + make -f Open-ILS/src/extras/Makefile.install [distribution] + 3. As the opensrf user, configure and compile Evergreen: - cd /home/opensrf/Evergreen-ILS-1.6.1.5 - ./configure --prefix=/openils --sysconfdir=/openils/conf - make - 4. + cd /home/opensrf/Evergreen-ILS-1.6.1.5 + ./configure --prefix=/openils --sysconfdir=/openils/conf + make + 4. As the root user, install Evergreen: - make STAFF_CLIENT_BUILD_ID=rel_1_6_1_5 install - 5. + make STAFF_CLIENT_BUILD_ID=rel_1_6_1_5 install + 5. Change to the Evergreen installation directory: - cd /home/opensrf/Evergreen-ILS-1.6.1.5 - 6. 
+ cd /home/opensrf/Evergreen-ILS-1.6.1.5 + 6. As the root user, change all files to be owned by the opensrf user and group: - chown -R opensrf:opensrf /openils - 7. + chown -R opensrf:opensrf /openils + 7. As the root user, build live-db-setup.pl for the cgi-bin bootstrapping scripts and offline-config.pl for the offline staff client data uploader: - + cd /home/opensrf/Evergreen-ILS-1.6.1.5 perl Open-ILS/src/support-scripts/eg_db_config.pl --create-bootstrap --create-offline \ --user evergreen --password evergreen --hostname localhost --port 5432 \ --database evergreen - 8. + 8. As the opensrf user, update server symlink in /openils/var/web/xul/: - + cd /openils/var/web/xul/ rm server ln -s rel_1_6_1_5/server - 9. + 9. Update the Evergreen database: - it is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong. - + it is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong. + psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.4.0.5-1.6.0.0-upgrade-db.sql evergreen psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.0.0-1.6.0.1-upgrade-db.sql evergreen @@ -6800,59 +6800,59 @@ psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.3-1.6.1.4-upgrade-db psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.4-1.6.1.5-upgrade-db.sql evergreen - 10. + 10. As the opensrf user, copy /openils/conf/oils_web.xml.example to /openils/conf/oils_web.xml - + (needed for acquisitions templates). - cp /openils/conf/oils_web.xml.example /openils/conf/oils_web.xml - 11. + cp /openils/conf/oils_web.xml.example /openils/conf/oils_web.xml + 11. Update opensrf_core.xml and opensrf.xml by copying the new example files (/openils/conf/opensrf_core.xml.example and /openils/conf/opensrf.xml). - - cp /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml - - cp /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml - 12. 
+ + cp /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml + + cp /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml + 12. Update opensrf.xml with the database connection info: - + perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config --service all --user evergreen \ --password evergreen --hostname localhost --port 5432 --database evergreen - 13. + 13. Update /etc/apache2/startup.pl by copying the example from - Open-ILS/examples/apache/startup.pl. - 14. + Open-ILS/examples/apache/startup.pl. + 14. Update /etc/apache2/eg_vhost.conf by copying the example from - Open-ILS/examples/apache/eg_vhost.conf. - 15. + Open-ILS/examples/apache/eg_vhost.conf. + 15. Update /etc/apache2/sites-available/eg.conf by copying the example from Open-ILS/ - examples/apache/eg.conf. - 16. + examples/apache/eg.conf. + 16. - Recover customizations you have made to the Apache + Recover customizations you have made to the Apache configuration files. For example, if you purchased an SSL certificate, you will need to edit eg.conf to point to the appropriate SSL certificate files. - + Upgrade Evergreen from 1.6.0 to 1.6.1Upgrade Evergreen from 1.6.0 to 1.6.1 - 1. + 1. Follow steps 1-8 of the instructions for upgrading Evergreen from 1.4 - 2. + 2. Update the Evergreen database: - It is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong. - + It is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong. + psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.0.4-1.6.1.0-upgrade-db.sql evergreen psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.0-1.6.1.1-upgrade-db.sql evergreen @@ -6862,7 +6862,7 @@ psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.3-1.6.1.4-upgrade-db psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.4-1.6.1.5-upgrade-db.sql evergreen - 3. + 3. 
Follow steps 10-16 of the instructions for upgrading Evergreen from 1.4 @@ -6871,34 +6871,34 @@ psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1.4-1.6.1.5-upgrade-db Restart Evergreen and TestRestart Evergreen and Test - 1. + 1. As the opensrf user, start all Evergreen and OpenSRF services: - osrf_ctl.sh -l -a start_all - 2. + osrf_ctl.sh -l -a start_all + 2. As the opensrf user, run autogen to refresh the static - organizational data files: - + organizational data files: + cd /openils/bin ./autogen.sh -c /openils/conf/opensrf_core.xml -u - 3. + 3. Start srfsh and try logging in using your Evergreen username and password: - + /openils/bin/srfsh srfsh% login username password - 4. + 4. Start the Apache web server. - + If you encounter errors, refer to the troubleshooting section of this documentation for tips on finding solutions and seeking further assistance from the Evergreen community. @@ -6907,21 +6907,21 @@ cd /openils/bin Upgrading PostgreSQL from 8.2 to 8.4Upgrading PostgreSQL from 8.2 to 8.4 Evergreen 1.6.1 supports PostgreSQL version 8.4 and it is recommended that you upgrade PostgreSQL when you upgrade Evergreen to 1.6. - The order of the following steps is very important. - 1. + The order of the following steps is very important. + 1. As opensrf, stop the evergreen and opensrf services: - osrf_ctl.sh -l -a stop_all - 2. + osrf_ctl.sh -l -a stop_all + 2. Backup the Evergreen database data - 3. + 3. Upgrade to Postgresql 8.4 by removing old version and installing Postgresql 8.4 - 4. + 4. Create an empty Evergreen database in postgresql 8.4 by issuing the following commands as the postgres user: - + createdb -E UNICODE evergreen createlang plperl evergreen @@ -6932,24 +6932,24 @@ psql -f /usr/share/postgresql/8.4/contrib/tsearch2.sql evergreen psql -f /usr/share/postgresql/8.4/contrib/pgxml.sql evergreen - 5. + 5. 
As the postgres user on the PostgreSQL server, create a PostgreSQL user named evergreen for the database cluster: - createuser -P -s evergreen + createuser -P -s evergreen Enter the password for the new PostgreSQL superuser (evergreen) - 6. + 6. Restore data from backup created in step 1. - 7. + 7. To point tsearch2 to proper function names in 8.4, run the SQL script /home/opensrf/Evergreen-ILS*/Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql using the psql command. - cd /home/opensrf/Evergreen-ILS* - psql -f /Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql evergreen - 8. + cd /home/opensrf/Evergreen-ILS* + psql -f /Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql evergreen + 8. - Restart Evergreen and OpenSRF services - 9. + Restart Evergreen and OpenSRF services + 9. For additional information regarding upgrading PostgreSQL, see the following documentation in PostgreSQL: http://www.postgresql.org/docs/8.4/static/install-upgrading.html @@ -6972,66 +6972,66 @@ psql -f /usr/share/postgresql/8.4/contrib/pgxml.sql evergreen commands to stop and start the Evergreen server. You can start and stop Evergreen from the command line of the server using the osrf_ctl.sh script located in the openils/bin directory. - The osrf_ctl.sh command must be run as the opensrf user. + The osrf_ctl.sh command must be run as the opensrf user. To view help on osrf_ctl.sh and get all of its options, run: - osrf_ctl.sh -h + osrf_ctl.sh -h To start Evergreen, run: - osrf_ctl.sh -l -a start_all + osrf_ctl.sh -l -a start_all The -l flag is used to indicate that Evergreen is configured to use localhost as the host. If you have configured opensrf.xml to use your real hostname, do not use the -l flag. The -a option is required and indicates the action of the command. In this case start_all. - + If you receive the error message: osrf_ctl.sh: command not found, then your environment variable - PATH does not include the + PATH does not include the /openils/bin directory. 
You can set it using the following command: - export PATH=$PATH:/openils/bin + export PATH=$PATH:/openils/bin If you receive the error message Can't locate OpenSRF/System.pm in @INC … BEGIN - failed–compilation aborted, then your environment variable PERL5LIB does not include the /openils/lib/perl5 directory. You can set it using the following command: - export PERL5LIB=$PERL5LIB:/openils/lib/perl5 + export PERL5LIB=$PERL5LIB:/openils/lib/perl5 It is also possible to start a specific service. For example: - osrf_ctl.sh -l -a start_router + osrf_ctl.sh -l -a start_router will only start the router service. - + If you decide to start each service individually, you need to start them in a specific order for Evergreen to start correctly. Run the commands in this exact order: -osrf_ctl.sh -l -a start_router -osrf_ctl.sh -l -a start_perl -osrf_ctl.sh -l -a start_c +osrf_ctl.sh -l -a start_router +osrf_ctl.sh -l -a start_perl +osrf_ctl.sh -l -a start_c After starting or restarting Evergreen, it is also necessary to restart the Apache web server - for the OPAC to work correctly. + for the OPAC to work correctly. To stop Evergreen, run: - osrf_ctl.sh -l -a stop_all + osrf_ctl.sh -l -a stop_all As with starting, you can choose to stop services individually. To restart Evergreen, run: - osrf_ctl.sh -l -a restart_all - Starting Specific Perl ServicesStarting Specific Perl Services + osrf_ctl.sh -l -a restart_all + Starting Specific Perl ServicesStarting Specific Perl Services It is also possible to start and stop a specific perl service using opensrf-perl.pl. 
Here is the syntax for starting a perl service with this command: -opensrf-perl.pl --service <service-name> -a start -p <PID-directory> +opensrf-perl.pl --service <service-name> -a start -p <PID-directory> Example (starting the booking module): -opensrf-perl.pl --service open-ils.booking -a start -p /openils/var/run/opensrf +opensrf-perl.pl --service open-ils.booking -a start -p /openils/var/run/opensrf This is the syntax for stopping a perl service with this command: -opensrf-perl.pl --service <service-name> -a stop -p <PID-directory> +opensrf-perl.pl --service <service-name> -a stop -p <PID-directory> Example (stopping the booking module): -opensrf-perl.pl --service open-ils.booking -a stop -p /openils/var/run/opensrf +opensrf-perl.pl --service open-ils.booking -a stop -p /openils/var/run/opensrf These commands can be very useful when you edit Perl modules and only need to restart the specific service for changes to take effect. - + The default for the PID-directory: /openils/var/run/opensrf For a clustered server instance of Evergreen, you must store the PIDs on a directory that is local to each server, or else one of your cluster servers may try killing processes on itself that actually have PIDs on other servers. For services running on the local server use the --localhost to force the hostname to be localhost, instead of the fully qualified domain name for the machine. To see other options run the command with the -h option: -opensrf-perl.pl -h +opensrf-perl.pl -h For a list of Evergreen/OpenSRF perl services see: the section called “Evergreen-specific OpenSRF services”. 
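 Both osrf_ctl.sh and opensrf-perl.pl depend on the PATH and PERL5LIB settings whose troubleshooting is described earlier in this section. A minimal sketch of checking those settings in the current shell, assuming the default /openils prefix used throughout this chapter (to make them permanent, the usual approach is to append the two export lines to the opensrf user's shell profile, e.g. ~/.bashrc, which is an assumption to adapt to your shell):
 
```shell
# Make sure osrf_ctl.sh, opensrf-perl.pl and the OpenSRF Perl modules
# are findable in this session; /openils is the default install prefix.
export PATH=$PATH:/openils/bin
export PERL5LIB=$PERL5LIB:/openils/lib/perl5

# Verify both directories are now on the search paths.
echo "$PATH" | grep -q '/openils/bin' && echo 'PATH ok'           # prints: PATH ok
echo "$PERL5LIB" | grep -q '/openils/lib/perl5' && echo 'PERL5LIB ok'  # prints: PERL5LIB ok
```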
@@ -7039,13 +7039,13 @@ psql -f /usr/share/postgresql/8.4/contrib/pgxml.sql evergreen Automating Evergreen Startup and ShutdownAutomating Evergreen Startup and Shutdown Once you understand starting and stopping Evergreen, you will want to create a start up script for two purposes: - •Allow you to start, restart and stop Evergreen, SIP, reporter and z39.50 services with one command.•Allow Evergreen to stop and start properly during a system restart. + •Allow you to start, restart and stop Evergreen, SIP, reporter and z39.50 services with one command.•Allow Evergreen to stop and start properly during a system restart. The following procedure is for Debian or Ubuntu distributions of Linux. - 1. + 1. Create a bash script for starting Evergreen and all associated services. Here is an example script: - + #!/bin/bash OPENILS_BASE="/openils" @@ -7067,39 +7067,50 @@ fi; start() { sleep 3 echo "Starting Evergreen" - sudo -u opensrf /bin/bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh -l -a start_all" + sudo -u opensrf /bin/bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh \ + -l -a start_all" } stop() { echo "Stopping Evergreen" - sudo -u opensrf /bin/bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh -l -a stop_all" + sudo -u opensrf /bin/bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh \ + -l -a stop_all" } autogen() { echo "Running Autogen Update" - sudo -u opensrf /bin/bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin autogen.sh -u -c ${OPENILS_CORE}" + sudo -u opensrf /bin/bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin autogen.sh \ + -u -c ${OPENILS_CORE}" } sip_start() { - sudo -u opensrf /bin/bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 
PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ - -d ${SIP_PID} \ - -s ${SIP_CONF} \ - -a start_sip" + sudo -u opensrf /bin/bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ + -d ${SIP_PID} \ + -s ${SIP_CONF} \ + -a start_sip" } sip_stop() { - sudo -u opensrf /bin/bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ - -d ${SIP_PID} \ - -s ${SIP_CONF} \ - -a stop_sip" + sudo -u opensrf /bin/bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ + -d ${SIP_PID} \ + -s ${SIP_CONF} \ + -a stop_sip" } sip_restart() { - sudo -u opensrf /bin/bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ - -d ${SIP_PID} \ - -s ${SIP_CONF} \ - -a restart_sip" + sudo -u opensrf /bin/bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin oils_ctl.sh \ + -d ${SIP_PID} \ + -s ${SIP_CONF} \ + -a restart_sip" } + + start_rep() { pids="$(pidof "$REP_NAME")" @@ -7108,8 +7119,9 @@ start_rep() { return 1 fi rm -f $REP_LOCK - sudo -u opensrf bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin clark-kent.pl \ - --lockfile=${REP_LOCK} --boostrap=${OPENILS_CORE} --concurrency=1 --sleep=30 --daemon" ; + sudo -u opensrf bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin clark-kent.pl \ + --lockfile=${REP_LOCK} --boostrap=${OPENILS_CORE} --concurrency=1 --sleep=30 --daemon" ; pids="$(pidof "$REP_NAME")" if [ x"$pids" = x ] ; then echo FAILURE @@ -7143,7 +7155,8 @@ z39_50_start() { return 1 fi sudo -u opensrf bash -c "touch ${SRU_LOG}" - sudo bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin z39_50.sh >> ${SRU_LOG} 2>&1" & + sudo bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin z39_50.sh >> ${SRU_LOG} 
2>&1" & sleep 1 pids=`ps -eo pid,args | grep $sru_name | grep -v grep | cut -c1-6` if [ x"$pids" = x ] ; then @@ -7169,7 +7182,8 @@ z39_50_stop() { fi return $RETVAL } - + + case "$1" in start) start @@ -7228,85 +7242,92 @@ case "$1" in ;; start_router|stop_router|restart_router|start_perl|stop_perl|restart_perl| \ start_c|stop_c|restart_c|start_osrf|stop_osrf|restart_osrf|stop_all|start_all|restart_all) - sudo -u opensrf /bin/bash -c "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh -l -a $1" + sudo -u opensrf /bin/bash -c \ + "PERL5LIB=${PERL5LIB}:${OPENILS_BASE}/lib/perl5 PATH=${PATH}:${OPENILS_BASE}/bin osrf_ctl.sh -l -a $1" ;; *) echo " * Usage: /etc/init.d/evergreen {start|stop|restart|autogen" - echo " |sip_start|sip_stop|sip_restart" - echo " |z39_50_start|z39_50_stop|z39_50_restart" - echo " |start_reporter|stop_reporter|restart_reporter" - echo " |start_router|stop_router|restart_router|start_perl|stop_perl|restart_perl" - echo " |start_c|stop_c|restart_c|start_osrf|stop_osrf|restart_osrf|stop_all|start_all|restart_all}" + echo " |sip_start|sip_stop|sip_restart" + echo " |z39_50_start|z39_50_stop|z39_50_restart" + echo " |start_reporter|stop_reporter|restart_reporter" + echo " |start_router|stop_router|restart_router|start_perl|stop_perl|restart_perl" + echo " |start_c|stop_c|restart_c|start_osrf|stop_osrf|restart_osrf|stop_all|start_all|restart_all}" exit 1 ;; esac; - 2. + 2. - Save file in /etc/bin folder as evergreenstart if you would like this as a manual script for starting Evergreen services. + Save file in /etc/bin folder as evergreenstart + if you would like this as a manual script for starting Evergreen services. - Save file in /etc/init.d folder as evergreenstart if you would like to run this script automatically - during your server's boot process as explained in later steps. - 3. 
+ Save the file in the /etc/init.d folder as evergreenstart + if you would like to run this script automatically during your server's boot process as explained in later steps. + 3. Ensure that the script is executable. -sudo chmod 755 evergreenstart + 4. Test the script by running it from the command line as the root user. -/etc/init.d/evergreenstart restart +/etc/init.d/evergreenstart restart You will also need to restart Apache as the root user. -/etc/init.d/apache2 restart +/etc/init.d/apache2 restart - 5. - + The next steps are optional if you want to automate Evergreen so it starts during your server's boot process. Update runlevel defaults of the new evergreenstart service as the root user: -update-rc.d evergreenstart defaults 80 20 - - For Evergreen to start properly during a reboot, you will want to ensure that the first number (80) is lower than the assigned - starting priority for Apache, so it starts before Apache. It should also have a larger stopping priority number (20) than Apache so it stops +update-rc.d evergreenstart defaults 80 20 + + For Evergreen to start properly during a reboot, you will want to ensure that the first number + (80) is lower than the assigned + starting priority for Apache, so it starts before Apache. It should also have a larger stopping priority number + (20) than Apache so it stops after Apache during a boot cycle. - 6. + 6. Test the startup script by rebooting the Evergreen server and checking to ensure that all Evergreen services started properly. - This has not yet been tested in an Evergreen multi-server, “brick” configuration.
+ For more information on update-rc.d you should review the documentation on this topic for + + Debian or Ubuntu + depending on your distribution of Linux. Backing UpBacking Up - - + Backing up your system files and data is a critical task for server and database administrators. Having a strategy for backing up and recovery could be the difference between a minor annoyance for users and a complete catastrophe. - Backing up the Evergreen DatabaseBacking up the Evergreen Database - + Backing up the Evergreen DatabaseBacking up the Evergreen Database + Most of the critical data for an Evergreen system – patrons, bibliographic records, holdings, - transactions, bills – is stored in the PostgreSQL database. You can therefore use normal + transactions, bills – is stored in the PostgreSQL database. You can therefore use normal PostgreSQL backup procedures to backup this data. For example, the simplest method of backing up the Evergreen database is to use the pg_dump command to create a live backup of the database without having to - interrupt any Evergreen services. Here is an example pg_dump command which will dump a local Evergreen database into a the file evergreen_db.backup: - pg_dump -U evergreen -h localhost -f evergreen_db.backup evergreen + interrupt any Evergreen services. 
Here is an example pg_dump command which will dump a local Evergreen database into the file + evergreen_db.backup: + pg_dump -U evergreen -h localhost -f evergreen_db.backup evergreen To restore the backed up database into a new database, create a new database using the template0 database template and the UTF8 encoding, and run the psql command, specifying the new database as your target: - createdb -T template0 -E UTF8 -U evergreen -h localhost new_evergreen - psql -U evergreen -h localhost -f evergreen_db.backup new_evergreen - + createdb -T template0 -E UTF8 -U evergreen -h localhost new_evergreen + psql -U evergreen -h localhost -f evergreen_db.backup new_evergreen + This method of backup is only suitable for small Evergreen instances. Larger sites should consider implementing continuous archiving (also known as “log shipping”) to provide more granular backups with lower system overhead. More information on backing up PostgreSQL - databases can be found in the official PostgreSQL + documentation. - Backing up Evergreen FilesBacking up Evergreen Files + Backing up Evergreen FilesBacking up Evergreen Files - + When you deploy Evergreen, you will probably customize many aspects of your system including the system configuration files, Apache configuration files, OPAC and Staff Client. In order to protect your investment of time, you should carefully consider the best approach to backing up these files. There are a number of ways of tackling this problem. You could create a script that regularly creates a time-stamped tarball of all of these files and copies it to a remote server - but that would build up over time to hundreds of files.
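The time-stamped tarball idea mentioned above can be sketched as a small shell function. This is only an illustration: the destination directory, archive name, and file list are assumptions, not Evergreen defaults.

```shell
# Sketch of the time-stamped tarball approach described above.
# The archive name and the paths in the example call are assumptions;
# adjust them for your own site.
make_config_tarball() {
    # $1 = destination directory; remaining arguments = files/dirs to archive
    dest=$1; shift
    stamp=$(date +%Y-%m-%d)
    archive="$dest/evergreen-config-$stamp.tar.gz"
    tar czf "$archive" "$@" && echo "$archive"
}

# Example invocation (illustrative paths):
# make_config_tarball /var/backups /openils/conf /etc/apache2
```

A cron job could call this nightly and a second step could rsync the resulting archive off the server; as the surrounding text notes, a version control system is usually a better long-term answer.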
You could use rsync - to ensure that the files of + to ensure that the files of interest are regularly updated on a remote server - but then you would lose track of the changes to the files, should you make a change that introduces a problem down the road. Perhaps one of the best options is to use a version control system like - Bazaar, - git - or Subversion to regularly push updates of the files you care about to a repository on a + Bazaar, + git + or Subversion to regularly push updates of the files you care about to a repository on a remote server. This gives you the advantage of quickly being able to run through the history of the changes you made, with a commenting system that reminds you why each change was made, combined with remote storage of the pertinent files in case of disaster on site. In addition, your team can create @@ -7328,58 +7349,59 @@ esac; system. Using a version control system also helps to recover system customizations after an upgrade. - Full System BackupFull System Backup + Full System BackupFull System Backup A full system backup archives every file on the file system. Some basic methods require you - to shut down most system processes; other methods can use mirrored RAID setups or - SAN storage to + to shut down most system processes; other methods can use mirrored RAID setups or + SAN storage to take “snapshot” backups of your full system while the system continues to run. The subject of how to implement full system backups is beyond the scope of this documentation. SecuritySecurity - + As with any ILS or resource accessible from the World Wide Web, careful consideration needs to be given to the security of your Evergreen servers and database. While it is impossible to cover all aspects of security, it is important to take several precautions when setting up a production Evergreen site. - 1. + 1. Change the Evergreen admin password and keep it secure. The default admin password is known by anyone who has installed Evergreen.
It is not a secret and needs to be changed by the Administrator. It should also only be shared by those who need the highest level of access to your system. - 2. + 2. Create strong passwords using a combination of numerical and alphabetical characters for all of the Administrative passwords including the postgres and opensrf users - 3. - Open ports in the firewall with caution - It is only necessary to open ports + 3. + Open ports in the firewall with caution - It is only necessary to open ports 80 and 443 - for TCP connections to the Evergreen server from the OPAC and the staff client. It is critical for administrators to - understand the concepts of network security and take precautions to minimize vulnerabilities. + for TCP connections to the Evergreen server from the OPAC and the staff client. It is + critical for administrators to understand the concepts of network security and take precautions to minimize vulnerabilities. - 4. - Use permissions and permission groups wisely - it is important to understand the + 4. + Use permissions and permission groups wisely - it is important to understand the purpose of the permissions and to only give users the level of access that they require. Managing Log FilesManaging Log Files - + Evergreen comes with a sophisticated logging system, but it is important to manage the OpenSRF and Evergreen logs. This section will provide a couple of log management techniques and tools. - Using the logrotate Utility to Manage Log SizeUsing the logrotate Utility to Manage Log Size + Using the logrotate Utility to Manage Log SizeUsing the logrotate Utility to Manage Log Size - - Fortunately, this is not a new problem for Unix administrators, and there are a number of ways of keeping your logs under control. + + Fortunately, this is not a new problem for Unix administrators, and + there are a number of ways of keeping your logs under control. 
On Debian and Ubuntu, for example, the logrotate utility controls when old log files are compressed and a new log file is started. logrotate runs once a day and checks all log files that it knows about to see if a threshold of time or size has been reached and rotates the log files if a threshold condition has been met. To teach logrotate to rotate Evergreen logs on a weekly basis, or if they are > 50MB in size, create a new file /etc/logrotate.d/evergreen with the following contents: - + compress /openils/var/log/*.log { # keep the last 4 archived log files along with the current log file @@ -7393,26 +7415,26 @@ size 50M } - Changing Logging Level for EvergreenChanging Logging Level for Evergreen + Changing Logging Level for EvergreenChanging Logging Level for Evergreen - + Change the Log Levels in your config files. Changing the level of logging will help narrow down errors. - + A high logging level is not wise to do in a production environment since it will produce vastly larger log files and thus reduce server performance. Change logging levels by editing the configuration file - /openils/conf/opensrf_core.xml + /openils/conf/opensrf_core.xml you will want to search for lines containing <loglevel>. the default setting for loglevel is 3 which will log errors, warnings and information. The next level is 4 which is for debugging and provides additional information helpful for the debugging process. Thus, lines with: - <loglevel>3</loglevel> + <loglevel>3</loglevel> Should be changed to: - <loglevel>4</loglevel> + <loglevel>4</loglevel> to allow debugging level logging Other logging levels include 0 for no logging, 1 for logging errors and 2 for logging warnings @@ -7421,30 +7443,30 @@ size 50M Installing PostgreSQL from SourceInstalling PostgreSQL from Source - + Some Linux distributions, such as Debian Etch (4.0), do not offer PostgreSQL version 8.2 as an installable package. 
Before you continue, examine the software dependencies listed in Table 16.1, “Evergreen Software Dependencies” to ensure that your Linux distribution supports the required version of PostgreSQL. - + Some Linux distributions, such as Debian Etch (4.0), do not offer PostgreSQL version 8.2 as an installable package. Before you continue, examine the software dependencies listed in Table 16.1, “Evergreen Software Dependencies” to ensure that your Linux distribution supports the required version of PostgreSQL. - 1. + 1. Install the application stow on your system if it is not already installed. Issue the following command as the root user: - + apt-get install stow - 2. + 2. Download, compile, and install the latest release for PostgreSQL 8.2 (which was version 8.2.17 at the time of this writing). As the root user, follow these steps: - + wget http://wwwmaster.postgresql.org/redir/198/h/source/v8.2.17/postgresql-8.2.17.tar.bz2 tar xjf postgresql-8.2.17.tar.bz2 @@ -7463,16 +7485,16 @@ stow pgsql - 3. + 3. Create the new user postgres to run the PostgreSQL processes. As the root user, execute this command: - adduser postgres - 4. + adduser postgres + 4. Initialize the database directory and start up PostgreSQL. As the root user, follow these steps: - + mkdir -p /usr/local/pgsql/data chown postgres /usr/local/pgsql/data @@ -7481,7 +7503,7 @@ initdb -D /usr/local/pgsql/data -E UNICODE --locale=C pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start - + If an error occurs during the final step above, review the path of the home directory for the postgres user. It may be /var/lib/postgresql instead of /home/postgres. @@ -7489,10 +7511,10 @@ Configuring PostgreSQLConfiguring PostgreSQL - + The values of several PostgreSQL configuration parameters may be changed for enhanced performance. The following table lists the default values and some suggested updates for several useful parameters:
Suggested configuration valuesParameterDefaultSuggesteddefault_statistics_target10100work_mem4Mb128Mbshared_buffers8Mb512Mbeffective_cache_size128Mb4Gb + Table 19.1. Suggested configuration valuesParameterDefaultSuggesteddefault_statistics_target10100work_mem4Mb128Mbshared_buffers8Mb512Mbeffective_cache_size128Mb4Gb Chapter 20. Migrating DataChapter 20. Migrating Data @@ -7505,7 +7527,7 @@ pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start Migrating Bibliographic RecordsMigrating Bibliographic Records - + One of the most important and challenging tasks is migrating your bibliographic records to a new system. The procedure may be different depending on the system from which you are migrating and the content of the marc records exported from the existing system. The procedures in this section deal with the process once the data from the existing system @@ -7514,26 +7536,26 @@ pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start (/home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/src/extras/import/ ) and are also available from the Evergreen repository ( http://svn.open-ils.org/trac/ILS/browser/branches/rel_1_6_1/Open-ILS/src/extras/import). - Converting MARC records to Evergreen BRE JSON formatConverting MARC records to Evergreen BRE JSON format + Converting MARC records to Evergreen BRE JSON formatConverting MARC records to Evergreen BRE JSON format - + If you are starting with MARC records from your existing system or another source, use the marc2bre.pl script to create the JSON representation of a bibliographic record entry (hence bre) in Evergreen. 
marc2bre.pl can perform the following functions: - •Converts MARC-8 encoded records to UTF-8 encoding•Converts MARC21 to MARCXML21•Select the unique record number field (common choices are '035' or '001'; check your records as you might be surprised how a supposedly unique field - actually has duplicates, though marc2bre.pl will select a unique identifier for subsequent duplicates)•Extracts certain pertinent fields indexing and display purposes (along with the complete MARCXML21 record)•Sets the ID number of the first record from this batch to be imported into the biblio.record_entry table (hint - run the following + •Converts MARC-8 encoded records to UTF-8 encoding•Converts MARC21 to MARCXML21•Select the unique record number field (common choices are '035' or '001'; check your records as you might be surprised how a supposedly unique field + actually has duplicates, though marc2bre.pl will select a unique identifier for subsequent duplicates)•Extracts certain pertinent fields indexing and display purposes (along with the complete MARCXML21 record)•Sets the ID number of the first record from this batch to be imported into the biblio.record_entry table (hint - run the following SQL to determine what this number should be to avoid conflicts: - + psql -U postgres evergreen # SELECT MAX(id)+1 FROM biblio.record_entry; - • + • If you are processing multiple sets of MARC records with marc2bre.plbefore loading the records into the database, you will need to keep track of the starting ID number for each subsequent batch of records that you are importing. For example, if you are processing three files of MARC records with 10000 records each into a clean database, you would use –startid 1, –startid 10001, and –startid 20001 parameters for each respective file. 
- • + • Ignore “trash” fields that you do not want to retain in Evergreen - • + • If you use marc2bre.pl to convert your MARC records from the MARC-8 encoding to the UTF-8 encoding, it relies on the MARC::Charset Perl module to complete the conversion. When importing a large set of items, you can speed up the process by using a utility like marc4j or marcdumper to convert the records @@ -7543,7 +7565,7 @@ pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start horrible thing and ensure that there are no namespace prefixes in front of the element names. marc2bre.pl cannot parse the following example: - + <?xml version="1.0" encoding="UTF-8" ?> @@ -7565,7 +7587,7 @@ http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"> But marc2bre.pl can parse the same example with the namespace prefixes removed: - + <?xml version="1.0" encoding="UTF-8" ?> <collection xmlns:marc="http://www.loc.gov/MARC21/slim" @@ -7586,9 +7608,9 @@ http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"> - Converting Records for Import into PostgreSQLConverting Records for Import into PostgreSQL + Converting Records for Import into PostgreSQLConverting Records for Import into PostgreSQL - + Once you have your records in Evergreen's BRE JSON format, you then need to use direct_ingest.pl to convert the records into the generic ingest JSON format for Open-ILS. This step uses the open-ils.ingest application to extract the data that will be indexed in the database. @@ -7597,11 +7619,11 @@ http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"> load the records into PostgreSQL. The –order and –autoprimary command line options (bre, mrd, mfr, etc) map to class IDs defined in /openils/conf/fm_IDL.xml. 
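The marc2bre.pl → direct_ingest.pl → pg_loader.pl sequence described in this and the following section can be captured as a dry-run helper that only prints the commands for review before anything touches the database. This is a sketch: the file names are placeholders, and only the flags discussed in this chapter are shown (pg_loader's own options are elided).

```shell
# Dry-run sketch of the record conversion pipeline described above.
# It prints the commands instead of running them; file names are
# placeholders and "[options]" stands for site-specific pg_loader flags.
print_ingest_pipeline() {
    # $1 = source MARC XML file, $2 = starting record id
    echo "marc2bre.pl --startid=$2 --marctype=XML -f $1 > records.bre"
    echo "direct_ingest.pl < records.bre > records.ingest"
    echo "pg_loader.pl [options] < records.ingest > pg_loader-output.sql"
}

print_ingest_pipeline imported_marc_records.xml 1
```

Reviewing the printed commands before running them is useful because the --startid value must not collide with ids already in biblio.record_entry, as explained above.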
- Adding Metarecords to the DatabaseAdding Metarecords to the Database + Adding Metarecords to the DatabaseAdding Metarecords to the Database - + One you have loaded the records into PostgreSQL, you can create metarecord entries in the metabib.metarecord table by running the following SQL: - + psql evergreen # \i /home/opensrf/Evergreen-ILS-1.6*/src/extras/import/quick_metarecord_map.sql @@ -7613,18 +7635,18 @@ http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"> Migrating Bibliographic Records Using the ESI Migration ToolsMigrating Bibliographic Records Using the ESI Migration Tools - + The following procedure explains how to migrate bibliographic records from marc records into Evergreen. This is a general guide and will need to be adjusted for your specific environment. It does not cover exporting records from specific proprietary ILS systems. For assistance with exporting records from your current system please refer to the manuals for your system or you might try to ask for help from the Evergreen community. - 1. + 1. Download the Evergreen migration utilities from the git repository. Use the command git clone git://git.esilibrary.com/git/migration-tools.git to clone the migration tools. Install the migration tools: - + cd migration-tools/Equinox-Migration @@ -7635,12 +7657,12 @@ make install -2. +2. Add environmental variables for migration and import tools. These paths must point to: - •the import perl scripts bundled with Evergreen •the folder where you extracted the migration tools•the location of the Equinox-Migration perl modules •the location of the Evergreen perl modules (e.g. perl5) + •the import perl scripts bundled with Evergreen •the folder where you extracted the migration tools•the location of the Equinox-Migration perl modules •the location of the Evergreen perl modules (e.g. perl5) - + export PATH=[path to Evergreen]/Open-ILS/src/extras/import: \ /[path to migration-tools]/migration-tools:$PATH:. 
export PERL5LIB=/openils/lib/perl5: \ /[path to migration-tools directory]/migration-tools/Equinox-Migration/lib -3. +3. Dump marc records into MARCXML using yaz-marcdump - + echo '<?xml version="1.0" encoding="UTF-8" ?>' > imported_marc_records.xml yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml imported_marc_records.mrc >> im -4. +4. Test validity of XML file using xmllint - + @@ -7671,56 +7693,56 @@ -5. +5. Clean up the marc xml file using the marc_cleanup utility: - + marc_cleanup --marcfile=imported_marc_records.xml --fullauto [--renumber-from #] -ot 001 The --renumber-from is required if you have bibliographic records already in your system. Use this to set the starting id number higher than the last id in the biblio.record_entry table. The marc_cleanup command will generate a file called clean.marc.xml -6. +6. Create a fingerprinter file using the fingerprinter utility: - + fingerprinter -o incumbent.fp -x incumbent.ex clean.marc.xml fingerprinter is used for deduplication of the incumbent records. The -o option specifies the output file and the -x option is used to specify the error output file. -7. +7. Create a fingerprinter file for existing Evergreen bibliographic records using the fingerprinter utility if you have existing bibliographic records in your system previously imported: - + fingerprinter -o production.fp -x production.fp.ex --marctype=MARC21 existing_marc_records.mrc \ --tag=901 --subfield=c fingerprinter is used for deduplication of the incumbent records. -8. +8. Create a merged fingerprint file removing duplicate records. - + cat production.fp incumbent.fp | sort -r > dedupe.fp match_fingerprints [-t start id] -o records.merge dedupe.fp -9. +9. Create a new import XML file using the extract_loadset utility - + extract_loadset -l 1 -i clean.marc.xml -o merged.xml records.merge -10.
Extract all of the currently used TCNs and generate the .bre and .ingest files to prepare for the bibliographic record load. - + psql -U evergreen -c "select tcn_value from biblio.record_entry where not deleted" \ | perl -npe 's/^\s+//;' > used_tcns @@ -7728,14 +7750,14 @@ marc2bre.pl --idfield 903 [--startid=#] --marctype=XML -f final.xml \ --used_tcn_file=used_tcns > evergreen_bre_import_file.bre - + The option --startid needs to match the start id used in earlier steps and must be higher than the largest id value in the biblio.record_entry table. The option --idfield should match the marc datafield used to store your record ids. -11. Ingest the bibliographic records into the Evergreen database. - + direct_ingest.pl < evergreen_bre_import_file.bre > evergreen_ingest_file.ingest @@ -7758,10 +7780,10 @@ parallel_pg_loader.pl \ - 12. + 12. Load the records using psql and the sql scripts generated from the previous step. - + psql -U evergreen < pg_loader-output.sql > load_pg_loader-output @@ -7769,10 +7791,10 @@ psql -U evergreen < ~/Ever*/Open-ILS/src/extras/import/quick_metarecord_map.s - 13. + 13. Extract holdings from marc records for importing copies into Evergreen using the extract_holdings utility. - + extract_holdings --marcfile=clean.marc.xml --holding 999 --copyid 999i --map holdings.map @@ -7780,7 +7802,7 @@ This command would extract holdings based on the 999 datafield in the marc records. The copy id is generated from the subfield i in the 999 datafield. You may need to adjust these options based on the field used for holdings information in your marc records. The map option holdings.map refers to a file to be used for mapping subfields to the holdings data you would like extracted.
Here is an example based on mapping holdings data to the 999 data field: - + callnum 999 a barcode 999 i @@ -7790,7 +7812,7 @@ circ_modifier 999 t Running the extract holdings script should produce an sql script HOLDINGS.pg similar to: - + BEGIN; egid, hseq, l_callnum, l_barcode, l_location, l_owning_lib, l_circ_modifier, @@ -7801,7 +7823,7 @@ egid, hseq, l_callnum, l_barcode, l_location, l_owning_lib, l_circ_modifier, Edit the holdings.pg sql script like so: - + BEGIN; TRUNCATE TABLE staging_items; @@ -7821,15 +7843,15 @@ COMMIT; Adding Copies to Bibliographic RecordsAdding Copies to Bibliographic Records - + Before bibliographic records can be found in an OPAC search copies will need to be created. It is very important to understand how various tables related to each other in regards to holdings maintenance. The following procedure will guide you through the process of populating Evergreen with volumes and copies. This is a very simple example. The SQL queries may need to be adjusted for the specific data in your holdings. - 1. + 1. Create a staging_items staging table to hold the holdings data: - + CREATE TABLE staging_items ( l_callnum text, -- call number label hseq int, @@ -7841,26 +7863,26 @@ CREATE TABLE staging_items ( l_owning_lib text -- actor.org_unit.shortname ); - 2. + 2. Import the items using the HOLDINGS.pg SQL script created using the extract_holdings utility. - + psql -U evergreen -f HOLDINGS.pg evergreen the file HOLDINGS.pg and/or the COPY query may need to be adjusted for your particular circumstances. - 3. + 3. Generate shelving locations from your staging table. - + INSERT INTO asset.copy_location (name, owning_lib) SELECT DISTINCT l.l_location, ou.id FROM staging_items l JOIN actor.org_unit ou ON (l.l_owning_lib = ou.shortname); - 4. + 4. Generate circulation modifiers from your staging table. 
- + INSERT INTO config.circ_modifier (code, name, description, sip2_media_type, magnetic_media) SELECT DISTINCT l_circ_modifier AS code, l_circ_modifier AS name, @@ -7870,19 +7892,19 @@ INSERT INTO config.circ_modifier (code, name, description, sip2_media_type, magn FROM staging_items WHERE l_circ_modifier NOT IN (SELECT code FROM config.circ_modifier); - 5. + 5. Generate call numbers from your staging table: - + INSERT INTO asset.call_number (creator,editor,record,label,owning_lib) SELECT DISTINCT 1, 1, l.egid, l.l_callnum, ou.id FROM staging_items l JOIN actor.org_unit ou ON (l.l_owning_lib = ou.shortname); - 6. + 6. Generate copies from your staging table: - + INSERT INTO asset.copy ( circ_lib, creator, editor, create_date, barcode, STATUS, location, loan_duration, fine_level, circ_modifier, deposit, ref, call_number) @@ -7920,28 +7942,28 @@ SELECT DISTINCT ou.id AS circ_lib, Migrating Patron DataMigrating Patron Data - + - This section will explain the task of migrating your patron data from comma delimited files into Evergreen. + This section will explain the task of migrating your patron data from comma delimited files into Evergreen. It does not deal with the process of exporting from the non-Evergreen system since this process may vary depending on where you are extracting your patron records. Patron could come from an ILS or it could come from a student database in the case of academic records. When importing records into Evergreen you will need to populate 3 tables in your Evergreen database: - •actor.usr - The main table for user data•actor.card - Stores the barcode for users; Users can have more than 1 card but only 1 can be active at a given time;•actor.usr_address - Used for storing address information; A user can have more than one address. 
+ •actor.usr - The main table for user data•actor.card - Stores the barcode for users; Users can have more than 1 card but only 1 can be active at a given time;•actor.usr_address - Used for storing address information; A user can have more than one address. Before following the procedures below to import patron data into Evergreen, it is a good idea to examine the fields in these tables in order to decide on a strategy for data to include in your import. It is important to understand the data types and constraints on each field. - 1. + 1. Export the patron data from your existing ILS or from another source into a comma delimited file. The comma delimited file used for importing - the records should use Unicode (UTF8) character encoding. - 2. + the records should use Unicode (UTF8) character encoding. + 2. - Create a staging table. A staging table will allow you to tweak the data before importing. + Create a staging table. A staging table will allow you to tweak the data before importing. Here is an example SQL statement: - - + + CREATE TABLE students ( student_id int, barcode text, last_name text, first_name text, program_number text, program_name text, email text, address_type text, street1 text, street2 text, @@ -7952,19 +7974,19 @@ CREATE TABLE students ( Note the DEFAULT variables. These allow you to set defaults for your library or to populate required fields if your data allows NULL values where fields are required in Evergreen. - 3. + 3. Formatting of some fields to fit Evergreen field formats may be required. Here is an example of SQL to adjust phone numbers in the staging table to fit the Evergreen field: - + UPDATE students SET phone = replace(replace(replace(rpad(substring(phone from 1 for 9), 10, '-') || substring(phone from 10), '(', ''), ')', ''), ' ', '-'); Data “massaging” may be required to fit formats used in Evergreen. - 4. + 4.
Insert records from the staging table into the actor.usr Evergreen table: - + INSERT INTO actor.usr ( profile, usrname, email, passwd, ident_type, ident_value, first_given_name, family_name, day_phone, home_ou, claims_returned_count, net_access_level) @@ -7972,10 +7994,10 @@ substring(phone from 10), '(', ''), ')', ''), ' ', '-'); first_name, last_name, phone, home_ou, claims_returned_count, net_access_level FROM students; - 5. + 5. insert records into actor.card from actor.usr. - + INSERT INTO actor.card (usr, barcode) SELECT actor.usr.id, students.barcode FROM students @@ -7983,29 +8005,29 @@ INSERT INTO actor.card (usr, barcode) ON students.usrname = actor.usr.usrname; This assumes a one to one card patron relationship. If your patron data import has multiple cards assigned to one patron more complex import scripts may be required which look for inactive or active flags. - 6. + 6. Update actor.usr.card field with actor.card.id to associate active card with the user: - + UPDATE actor.usr SET card = actor.card.id FROM actor.card WHERE actor.card.usr = actor.usr.id; - 7. + 7. Insert records into actor.usr_address to add address information for users: - + INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code) SELECT actor.usr.id, students.street1, students.street2, students.city, students.province, students.country, students.postal_code FROM students INNER JOIN actor.usr ON students.usrname = actor.usr.usrname; - 8. + 8. update actor.usr.address with address id from address table. - + UPDATE actor.usr SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id FROM actor.usr_address @@ -8013,18 +8035,18 @@ UPDATE actor.usr This assumes 1 address per patron. More complex scenarios may require more sophisticated SQL. 
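Before running inserts like those above, it is worth verifying that every row of the delimited export file has the same number of fields, since a ragged row will shift values into the wrong staging columns. A naive sketch (it does not handle commas inside quoted values; the expected count depends on your own staging table):

```shell
# Check that every row of a comma-delimited patron file has the expected
# number of fields (naive: does not handle commas inside quoted values).
check_csv() {
    # $1 = file to check, $2 = expected field count
    awk -F',' -v n="$2" '
        NF != n { printf "line %d has %d fields\n", NR, NF; bad = 1 }
        END     { exit bad }' "$1"
}

# Example (the field count 14 is illustrative):
# check_csv students.csv 14
```

Running a check like this before the COPY/INSERT steps makes "data massaging" problems visible as line numbers rather than as constraint violations deep in the load.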
Creating an sql Script for Importing PatronsCreating an sql Script for Importing Patrons The procedure for importing patrons can be automated with the help of an SQL script. Follow these steps to create an import script: - 1. + 1. Create a new file and name it import.sql - 2. + 2. Edit the file to look similar to this: - + BEGIN; -- Create staging table. @@ -8078,9 +8100,9 @@ COMMIT; each sql statement is doing and are not processed. - Batch Updating Patron DataBatch Updating Patron Data - + For academic libraries, doing batch updates to add new patrons to the Evergreen database is a critical task. The above procedures and import script can be easily adapted to create an update script for importing new patrons from external databases. If the data import file contains only new patrons, then the above procedures will work well to insert those patrons. However, if the data load contains all patrons, a second staging table and a procedure to remove existing patrons from that second staging table may be required before importing the new patrons. Moreover, additional steps to update address information and perhaps delete @@ -8094,56 +8116,56 @@ COMMIT; If you've done a test import of records and you want to quickly get Evergreen back to a pristine state, you can create a clean Evergreen database schema by performing the following: - 1. + 1. - + cd ILS/Open-ILS/src/sql/Pg/ - 2. + 2. Rebuild the database schema: - + ./build-db.sh [db-hostname] [db-port] [db-name] [db-user] [db-password] [db-version] - This will remove all of your data from the database and restore the default values.
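Because the schema rebuild described above wipes all data, a confirmation guard can prevent running it by accident. The wrapper below is a hypothetical sketch, not part of Evergreen; it only proceeds when the caller types the database name back.

```shell
# Hypothetical safety wrapper for the destructive schema rebuild: it only
# proceeds when the caller supplies the database name as confirmation.
# (It prints what it would do rather than invoking build-db.sh itself.)
confirm_rebuild() {
    # $1 = database name, $2 = typed confirmation
    if [ "$2" = "$1" ]; then
        echo "confirmed: would run ./build-db.sh against $1"
    else
        echo "aborted: confirmation did not match" >&2
        return 1
    fi
}

# Example: read -r answer; confirm_rebuild evergreen "$answer"
```

The same pattern works for any destructive step in this chapter, such as TRUNCATE statements against staging tables.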
Exporting Bibliographic Records into MARC files

The following procedure explains how to export Evergreen bibliographic records into MARC files using the marc_export support script. All steps should be performed by the opensrf user from your Evergreen server.

Processing time for exporting records will depend on several factors, such as the number of records you are exporting. If you are exporting a large number of records, it is recommended that you divide the export id file (records.txt) into files containing a manageable number of records.

1. Create a text file list of the bibliographic record ids you would like to export from Evergreen. One way to do this is using SQL:

SELECT DISTINCT bre.id FROM biblio.record_entry AS bre
    JOIN asset.call_number AS acn ON acn.record = bre.id
    WHERE bre.deleted = 'false' AND owning_lib = 101 \g /home/opensrf/records.txt;

This query will create a file called records.txt containing a column of distinct ids of items owned by the organizational unit with the id 101.

2. Navigate to the support-scripts folder:

cd /home/opensrf/Evergreen-ILS*/Open-ILS/src/support-scripts/

3. Run marc_export, using the id file you created in step 1 to define which records to export:

cat /home/opensrf/records.txt | ./marc_export -i -c /openils/conf/opensrf_core.xml \
-x /openils/conf/fm_IDL.xml -f XML --timeout 5 > exported_files.xml

The example above exports the records into MARCXML format.

For help or for more options when running marc_export, run marc_export with the -h option:

./marc_export -h

Importing Authority Records

The following procedure explains how to import authority records into Evergreen. All steps should be performed by the opensrf user from your Evergreen server.
Importing Authority Records from Command Line

The major advantages of the command-line approach are its speed and its convenience for system administrators who can perform bulk loads of authority records in a controlled environment.

1. Run marc2are.pl against the authority records, specifying the user name, password, and MARC type (USMARC or XML). Use STDOUT redirection to either pipe the output directly into the next command or into an output file for inspection. For example, to process a set of authority records named auth_small.xml using the default user name and password, directing the output into a file named auth.are:

cd Open-ILS/src/extras/import/
perl marc2are.pl --user admin --pass open-ils auth_small.xml > auth.are

2. Run direct_ingest.pl to ingest the records:

perl direct_ingest.pl -a auth.are > ~/auth.ingest

3. Run pg_loader.pl to generate the SQL necessary for importing the authority records into your system:

cd Open-ILS/src/extras/import/
perl pg_loader.pl -or are -or afr -a afr --output=auth < ~/auth.ingest

4. Load the authority records from the SQL file that you generated in the last step into your Evergreen database using the psql tool.
Assuming the default user name, host name, and database name for an Evergreen instance, that command looks like: - + psql -U evergreen -h localhost -d evergreen -f auth.sql - Importing authority records using the MARC Batch Import/Export interface from the Staff ClientImporting authority records using the MARC Batch Import/Export interface from the Staff Client + Importing authority records using the MARC Batch Import/Export interface from the Staff ClientImporting authority records using the MARC Batch Import/Export interface from the Staff Client Good for loading batches of up to 5,000 records (roughly) at a time, the major advantages to importing authority records using the MARC Batch Import/Export interface are that it does not require command-line or direct database access – good for both security in that it minimizes the number of people who need this access and for spreading the effort around to others in the library – and it does most of the work (for example, figuring out whether the batch of records is in XML or USMARC format) for you. To import a set of MARC authority records from the MARC Batch Import/Export interface: - 1. + 1. From the Evergreen staff client, select Cataloging → MARC Batch Import/Export. The Evergreen MARC File Upload screen opens, with Import Records as the highlighted tab. - 2. + 2. From the Bibliographic records drop-down menu, select Authority records. - 3. + 3. Enter a name for the queue (batch import job) in the Create a new upload queue field. - 4. + 4. Select the Auto-Import Non-Colliding Records checkbox. - 5. + 5. Click the Browse… button to select the file of MARC authorities to import. - 6. + 6. Click the Upload button to begin importing the records. The screen displays Uploading… Processing… to show that the records are being transferred to the server, then displays a progress bar to show the actual import progress. When the staff client displays the progress bar, you can disconnect your staff client safely. 
Very large batches of records might time out at this stage.

7. Once the import is finished, the staff client displays the results of the import process. You can manually display the import progress by selecting the Inspect Queue tab of the MARC Batch Import/Export interface and selecting the queue name. By default, the staff client does not ...

If you have Evergreen installed and are encountering systematic errors, here are the steps to find the cause of, and solution to, most problems. These instructions assume standard locations and file names for Evergreen installations, and may also include commands for specific Linux distributions.

Systematic Evergreen Restart to Isolate Errors

1. Stop Apache:

/etc/init.d/apache2 stop

or

apache2ctl stop

2. Stop OpenSRF:

osrf_ctl.sh -l -a stop_all

You should get output similar to this:

Stopping OpenSRF C process 12515...
Stopping OpenSRF C process 12520...
Stopping OpenSRF C process 12526...
Stopping OpenSRF Perl process 12471...
Stopping OpenSRF Router process 12466...

Or, if services have already been stopped, output may look like this:

OpenSRF C not running
OpenSRF Perl not running
OpenSRF Router not running

Occasionally osrf_ctl.sh fails to kill OpenSRF processes, so you should check that none are still running with the command:

ps -aef | grep OpenSRF

You should manually kill any OpenSRF processes.
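The leftover-process check above can be wrapped so that it never aborts a script when nothing is found; the bracketed character class keeps grep from matching its own command line. The kill itself is left commented out as a sketch - run it only after confirming the listed PIDs are stale.

```shell
# List surviving OpenSRF processes; [O] stops grep matching itself.
ps -eo pid,args | grep '[O]penSRF' || echo "no OpenSRF processes running"

# Manual cleanup, as described above (uncomment after checking the list):
# pkill -f OpenSRF
```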
If you were unable to stop OpenSRF with the above methods, you could also try this command: - rm –R /openils/var/run/*.pid + rm –R /openils/var/run/*.pid This will remove the temporary OpenSRF process files from the run directory which may have been left over from a previous system boot cycle. - 3. + 3. Restart Ejabberd and - Memcached with the following commands: - sudo /etc/init.d/ejabberd restart - sudo /etc/init.d/memcached restart - 4. + Memcached with the following commands: + sudo /etc/init.d/ejabberd restart + sudo /etc/init.d/memcached restart + 4. Start the OpenSRF router and check for errors - - /openils/bin/osrf_ctl.sh -l -a start_router + + /openils/bin/osrf_ctl.sh -l -a start_router If the router started correctly, output will be: - Starting OpenSRF Router + Starting OpenSRF Router If router does not start correctly, you should check the router error log files for error information. Evergreen 1.6 uses two routers, a public one and a private one, with two different logfiles: - /openils/var/log/private.router.log + /openils/var/log/private.router.log /openils/var/log/public.router.log A quick way to find error information in the logs is with the grep command. - grep ERR /openils/var/log/*router.log + grep ERR /openils/var/log/*router.log As a final sanity check, look for router processes using the process status command: - ps -aef | grep Router - 5. + ps -aef | grep Router + 5. Start the OpenSRF perl services and check for errors - - /openils/bin/osrf_ctl.sh -l -a start_perl + + /openils/bin/osrf_ctl.sh -l -a start_perl You should see the output similar to the following: - + Starting OpenSRF Perl * starting all services for ... 
* starting service pid=7484 opensrf.settings @@ -8324,35 +8346,35 @@ Starting OpenSRF Perl If the perl services do not start correctly or you receive errors, search for errors in the following log files: - •/openils/var/log/router.log•/openils/var/log/osrfsys.log + •/openils/var/log/router.log•/openils/var/log/osrfsys.log At this point you can use the grep command to find errors in any of the Evergreen log files: - grep ERR /openils/var/log/*.log + grep ERR /openils/var/log/*.log As a final sanity check, look for OpenSRF processes: - ps -aef | grep -i opensrf - 6. + ps -aef | grep -i opensrf + 6. Start the OpenSRF c services and check for errors:] - - /openils/bin/osrf_ctl.sh -l -a start_c + + /openils/bin/osrf_ctl.sh -l -a start_c And output should be: - Starting OpenSRF C (host=localhost) + Starting OpenSRF C (host=localhost) If the c service does not start, check for errors by grepping the log files for errors: - grep ERR /openils/var/log/*.log + grep ERR /openils/var/log/*.log Check for OpenSRF processes: - ps -aef | grep -i opensrf - 7. + ps -aef | grep -i opensrf + 7. - Smoke test with autogen.sh + Smoke test with autogen.sh The autogen tool will take some dynamic information from the database and generate - static JavaScript files for use by the OPAC and staff client. It is also able to refresh - the proximity map between libraries for the purpose of efficiently routing hold + static JavaScript files for use by the OPAC and staff client. It is also able to refresh + the proximity map between libraries for the purpose of efficiently routing hold requests. 
As user opensrf, you invoke autogen with the command: - /openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u + /openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u If Autogen completes successfully, the output will be: - + Updating fieldmapper Updating web_fieldmapper Updating OrgTree @@ -8366,22 +8388,22 @@ Done If Autogen does not complete its task and you receive errors, use grep to find errors in the log files: - grep ERR /openils/var/log/*.log - 8. + grep ERR /openils/var/log/*.log + 8. - Connect to Evergreen using the srfsh command-line OpenSRF client - /openils/bin/srfsh - + Connect to Evergreen using the srfsh command-line OpenSRF client + /openils/bin/srfsh + In order for you to connect using srfsh, you will need to have set up the .srfsh.xml configuration file in your home directory as as described in the installation chapter. You will then see the srfsh prompt: - srfsh# + srfsh# At the srfsh prompt, enter this command: - login admin open-ils + login admin open-ils You should the request verification: - + Received Data: "6f63ff5542da1fead4431c6c280efc75" ------------------------------------ Request Completed Successfully @@ -8407,48 +8429,48 @@ Request Time in seconds: 0.552430 Login Session: 28804. Session timeout: 420.000 srfsh# If you encounter errors or if you are unable to connect, you should consult the - srfsh.log file. The location of this file is configured in your + srfsh.log file. The location of this file is configured in your .srfsh.xml configuration file and is /openils/var/log/srfsh.log by default. Pressing Ctrl+D or entering “exit” will terminate srfsh. - 9. + 9. Start Apache and check for errors: - - /etc/init.d/apache2 start + + /etc/init.d/apache2 start or - apache2ctl start + apache2ctl start You should see output: - + * Starting web server apache2 ...done. 
the Apache OpenSRF modules write to the - /openils/var/log/gateway.log + /openils/var/log/gateway.log However, you should check all of the log files for errors: - grep ERR /openils/var/log/*.log + grep ERR /openils/var/log/*.log Another place to check for errors is the Apache error logs - generally located in in the /var/log/Apache2 - + generally located in in the /var/log/Apache2 + directory If you encounter errors with Apache, a common source of potential problems are the Evergreen site configuration files /etc/apache2/eg_vhost.conf and /etc/apache2/sites-available/eg.conf - - - 10. + + + 10. Testing with settings-tester.pl As the opensrf user, run the script settings-tester.pl to see if it finds any system configuration problems. - + cd /home/opensrf/Evergreen-ILS-1.6.0.0 perl Open-ILS/src/support-scripts/settings-tester.pl Here is example output from running settings-tester.pl: - + LWP::UserAgent version 5.810 XML::LibXML version 1.70 XML::LibXML::XPathContext version 1.70 @@ -8482,7 +8504,7 @@ Parse::RecDescent version 1.95.1 SRU version 0.99 JSON::XS version 2.27 - + Checking Jabber connection for user opensrf, domain private.localhost * Jabber successfully connected @@ -8523,13 +8545,13 @@ $ If the output from the script does not help you find the problem, please do not make any further significant changes to your configuration. Follow the steps in the troubleshooting guide in Chapter 21, Troubleshooting System Errors. - 11. + 11. - Try to login from the staff client + Try to login from the staff client 12. Testing the Catalog - + By default, the OPAC will live at the URL http://my.domain.com/opac/. Navigate to this URL and the front page of the OPAC should load. There is a basic text entry field with some extra search options. If you have any problems loading this page, check the Apache error logs. If the page loads but does not function correctly, then check for possible javascript errors. 
We @@ -8550,56 +8572,56 @@ $ split between static files that are automatically installed with Evergreen, and dynamic labels that can be stored in the Evergreen database. Evergreen is installed with additional SQL files that contain translated dynamic labels for a number of languages, and to make the set of translated labels available in all interfaces. Only a few steps are required to enable or disable one or more languages. - Enabling a LocalizationEnabling a Localization + Enabling a LocalizationEnabling a Localization - + To enable the translated labels for a given language to display in Evergreen, just populate the database with the translated labels and enable the localization. The following example illustrates how to enable Canadian French (fr-CA) support in the database. These same steps can be used with any of the languages bundled with Evergreen, or you can create and add your own localization. - 1. + 1. The translated labels for each locale are stored in SQL files named "950.data.seed-values-xx-YY.sql" where "xx-YY" represents the locale code for the translation. Load the translated labels into the Evergreen database using the command psql, substituting your user, host and database connection information accordingly: - + $ psql -U <username> -h <hostname> -d <database> \ -f /path/to/Evergreen-source/Open-ILS/src/sql/Pg/950.data.seed-values-fr-CA.sql - 2. + 2. 
Ensure the locale is enabled in the Evergreen database by using the utility psql to check for the existence of the locale in the table config.i18n_locale: - + SELECT code, marc_code, name, description FROM config.i18n_locale WHERE code = 'fr-CA'; As shown in the following example, if one row of output is returned, then the locale is already enabled: - + code | marc_code | name | description ------+-----------+-----------------+----------------- fr-CA | fre | French (Canada) | Canadian French (1 row) If zero rows of output are returned, then the locale is not enabled: - + code | marc_code | name | description ------+-----------+------+------------- (0 rows) To enable a locale, use psql to insert a row into the table config.i18n_locale as follows: - + INSERT INTO config.i18n_locale (code, marc_code, name, description) VALUES ('fr-CA', 'fre', 'French (Canada)', 'Canadian French'); - Disabling a LocalizationDisabling a Localization + Disabling a LocalizationDisabling a Localization - + You might not want to offer all of the localizations that are preconfigured in Evergreen. If you choose to disable the dynamic labels for a locale, just delete those entries from the table config.i18n_locale using the psql utility: - + DELETE FROM config.i18n_locale WHERE code = 'fr-CA'; @@ -8617,50 +8639,50 @@ WHERE code = 'fr-CA'; Testing SRU with yaz-clientTesting SRU with yaz-client yaz-client is installed as a part of Index Data's YAZ software. Recent versions include support for querying SRU servers. - Evergreen ships an SRU configuration + Evergreen ships an SRU configuration that works out of the box. To search Evergreen with yaz-client, choose the GET query method and issue the find command. In the following example, we connect to the Evergreen test server dev.gapines.org - substitute this hostname with your own Evergreen server hostname: - Some older versions of yaz-client have known issues with SRU. 
Ensure that you are using the latest edition of yaz from + Some older versions of yaz-client have known issues with SRU. Ensure that you are using the latest edition of yaz from http://www.indexdata.com/yaz. - $ yaz-client http://dev.gapines.org/opac/extras/sru - Z> sru GET 1.1 - Z> find hemingway + $ yaz-client http://dev.gapines.org/opac/extras/sru + Z> sru GET 1.1 + Z> find hemingway If your database has records that match that term, you will get the corresponding MARCXML records - in your response from yaz-client. + in your response from yaz-client. Here's what the SRU request looks like as sent to the Evergreen web server: - GET /opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0 + GET /opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0 You can see what the response looks like by hitting the same URL in your Web browser: http://dev.gapines.org/opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0 CQL queries - Evergreen supports some CQL index-sets for advanced queries such as a subset of + Evergreen supports some CQL index-sets for advanced queries such as a subset of Dublin Core (DC) elements. 
Those DC elements that are supported map to Evergreen default indexes as follows:

DC element    Evergreen index
title         title
creator       author
contributor   author
publisher     keyword
subject       subject
identifier    keyword
type          none
format        none
language      lang

Here are a few examples of SRU searches against some of these indexes:

•dc.title all “complete dinosaur”
•dc.subject all “britain france”
•dc.title exact “The Empire Strikes Back”
•dc.author=king and dc.title=zone

Setting up Z39.50 server support

You must have Evergreen's SRU server running before you can enable Z39.50 server support. This support uses a Z39.50-to-SRU translator service supplied by the Net::Z3950::Simple2ZOOM Perl module to enable Evergreen to act as a Z39.50 server. You could run the Z39.50 server on a different machine; it just needs to be able to connect to the Evergreen SRU server.

Setting up the Z39.50 server

1. Install a recent version of yaz (the Makefile.install should have installed a suitable version).

2. Install Net::Z3950::Simple2ZOOM (sudo cpan Net::Z3950::Simple2ZOOM).

3. Create a Simple2ZOOM configuration file. Something like the following is a good start, and is based on the Simple2ZOOM documentation example.
We'll name the file dgo.conf for our example:

<client>
  <database name="gapines">
    <zurl>http://dev.gapines.org/opac/extras/sru</zurl>
    ...

... scoping purposes, and you can also append /holdings if you want to expose the holdings for any returned records. So your zurl could be http://dev.gapines.org/opac/extras/sru/BR1/holdings to limit the search scope to BR1 and its children, and to expose its holdings.

4. Create the yaz server configuration file that the next step passes via -f xml2marc-yaz.cfg; it describes how the server handles record retrieval:

<yazgfs>
  <server id="server1">
    <retrievalinfo>
    ...
  </server>
</yazgfs>

5. Run simple2zoom as a daemon, specifying the configuration files and one or more listener addresses that the Z39.50 server will be accessible on. If you do not specify a port, it will run on port 9999 by default. In the following example, we tell it to listen both to localhost on port 2210 and to dev.gapines.org on port 210:

simple2zoom -c dgo.conf -- -f xml2marc-yaz.cfg localhost:2210 dev.gapines.org:210

To test the Z39.50 server, we can use yaz-client again:

yaz-client
Z> open localhost:2210/gapines
Connecting...OK.
...
Records: 1

SIP, standing for Standard Interchange Protocol, was developed by the 3M Corporation to be a common protocol for data transfer between ILSs (referred to in SIP as an ACS, or Automated Circulation System) and a third-party device. Originally, the protocol was developed for use with 3M SelfCheck (often abbreviated SC, not to be confused with Staff Client) systems, but has since expanded to other companies and devices.
It is now common to find - SIP in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices. + SIP in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices. Some examples include: - •Patron Authentication (computer access, subscription databases)•Automated Material Handling (AMH) - The automated sorting of items, often to bins or + •Patron Authentication (computer access, subscription databases)•Automated Material Handling (AMH) - The automated sorting of items, often to bins or book carts, based on shelving location or other programmable criteria Installing the SIP ServerInstalling the SIP Server @@ -8773,35 +8795,35 @@ Records: 1 Getting the codeGetting the code Current SIP code lives at github: - cd /opt - git clone git://github.com/atz/SIPServer.git SIPServer + cd /opt + git clone git://github.com/atz/SIPServer.git SIPServer Or use the old style: - $ cd /opt - $ sudo cvs -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip login + $ cd /opt + $ sudo cvs -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip login When prompted for the CVS password, just hit Enter (sudo password may be req'd) - $ sudo cvs -z3 -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip co -P SIPServer + $ sudo cvs -z3 -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip co -P SIPServer Configuring the ServerConfiguring the Server - 1. + 1. - Type the following commands from the command prompt: - $ sudo su opensrf - $ cd /openils/conf - $ cp oils_sip.xml.example oils_sip.xml - 2. + Type the following commands from the command prompt: + $ sudo su opensrf + $ cd /openils/conf + $ cp oils_sip.xml.example oils_sip.xml + 2. - Edit oils_sip.xml. + Edit oils_sip.xml. Change the commented out <server-params> section to this: - + <server-params min_servers='1' min_spare_servers='0' max_servers='25' /> - 3. + 3. 
max_servers will directly correspond to the number of allowed SIP clients. Set the number accordingly, but bear in mind that too many connections can @@ -8811,26 +8833,26 @@ max_servers='25' Adding SIP UsersAdding SIP Users - 1. + 1. - Type the following commands from the command prompt: - $ sudo su opensrf - $ cd /openils/conf - $ cp oils_sip.xml.example oils_sip.xml - 2. + Type the following commands from the command prompt: + $ sudo su opensrf + $ cd /openils/conf + $ cp oils_sip.xml.example oils_sip.xml + 2. in the <accounts> section, add SIP client login information. Make sure that all <logins> use the same institution attribute, and make sure the institution is listed in <institutions>. All attributes in the <login> section will be used by the SIP client. - 3. + 3. In Evergreen, create a new profile group called SIP. This group should be a sub-group of Users (not Staff or Patrons). Set Editing Permission as group_application.user.sip_client and give the group the following permissions: - + COPY_CHECKIN COPY_CHECKOUT RENEW_CIRC @@ -8842,7 +8864,7 @@ max_servers='25' VIEW_USER_TRANSACTIONS OR use SQL like: - + INSERT INTO permission.grp_tree (id,name,parent,description,application_perm) VALUES (8, 'SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client'); @@ -8853,7 +8875,7 @@ VALUES (8,15,0),(8,16,0),(8,17,0),(8,31,0),(8,32,0),(8,48,0),(8,54,0),(8,75,0),( Verify: - + SELECT * FROM permission.grp_perm_map JOIN permission.perm_list ON @@ -8863,118 +8885,118 @@ WHERE grp=8; Keep in mind that the id (8) may not necessarily be available on your system. - 4. + 4. For each account created in the <login> section of oils_sip.xml, create a user (via the staff client user editor) that has the same username and password and put that user into the SIP group. 
- The expiration date will affect the SIP users' connection so you might want to make a note of + The expiration date will affect the SIP users' connection so you might want to make a note of this somewhere. Running the serverRunning the server To start the SIP server type the following commands from the command prompt: - $ sudo su opensrf - $ oils_ctl.sh -d /openils/var/run -s /openils/conf/oils_sip.xml -a [start|stop|restart]_sip + $ sudo su opensrf + $ oils_ctl.sh -d /openils/var/run -s /openils/conf/oils_sip.xml -a [start|stop|restart]_sip Logging-SIPLogging-SIP - - SyslogSyslog + + SyslogSyslog It is useful to log SIP requests to a separate file especially during initial setup by modifying your - syslog config file. - 1. + syslog config file. + 1. Edit syslog.conf. - $ sudo vi /etc/syslog.conf # maybe /etc/rsyslog.conf - 2. + $ sudo vi /etc/syslog.conf # maybe /etc/rsyslog.conf + 2. Add this: - local6.* -/var/log/SIP_evergreen.log - 3. + local6.* -/var/log/SIP_evergreen.log + 3. Syslog expects the logfile to exist so create the file. - $ sudo touch /var/log/SIP_evergreen.log - 4. + $ sudo touch /var/log/SIP_evergreen.log + 4. Restart sysklogd. - $ sudo /etc/init.d/sysklogd restart + $ sudo /etc/init.d/sysklogd restart - Syslog-NGSyslog-NG + Syslog-NGSyslog-NG - 1. + 1. - Edit logging config. - sudo vi /etc/syslog-ng/syslog-ng.conf - 2. + Edit logging config. + sudo vi /etc/syslog-ng/syslog-ng.conf + 2. Add: - + # SIP2 for Evergreen filter f_eg_sip { level(warn, err, crit) and facility(local6); }; destination eg_sip { file("/var/log/SIP_evergreen.log"); }; log { source(s_all); filter(f_eg_sip); destination(eg_sip); }; - 3. + 3. Syslog-ng expects the logfile to exist so create the file. - $ sudo touch /var/log/SIP_evergreen.log - 4. + $ sudo touch /var/log/SIP_evergreen.log + 4. 
Restart syslog-ng - $ sudo /etc/init.d/syslog-ng restart + $ sudo /etc/init.d/syslog-ng restart Testing Your SIP ConnectionTesting Your SIP Connection - - • + + • In the top level CVS checkout of the SIPServer code. - $ cd SIPServer/t - • + $ cd SIPServer/t + • Edit SIPtest.pm, change the $instid, $server, $username, and $password variables. This will be enough to test connectivity. To run all tests, you'll need to change all the variables in the Configuration section. - $ PERL5LIB=../ perl 00sc_status.t + $ PERL5LIB=../ perl 00sc_status.t This should produce something like: - + 1..4 ok 1 - Invalid username ok 2 - Invalid username ok 3 - login ok 4 - SC status - • + • Don't be dismayed at Invalid Username. That's just one of the many tests that are run. More TestingMore Testing - 1. + 1. Once you have opened up either the SIP OR SIP2 ports to be accessible from outside you can do some testing via telnet. You can try this with localhost if you so wish, but we want to prove that SIP2 works from non-localhost. Replace $instid, $server, $barcode, $username, and $password variables below as necessary. - We are using 6001 here which is associated with SIP2 as per our configuration. - + We are using 6001 here which is associated with SIP2 as per our configuration. + $ telnet $server 6001 Connected to $server. Escape character is '^]'. 9300CN**$username**|CO**$password**|CP**$instid** You should get back. - 941 - 2. + 941 + 2. Now just copy in the following line (with variables replaced) you don't need to hit enter, just paste! - 2300120080623 172148AO**$instid**|AA**$barcode**|AC$password|AD**$password** + 2300120080623 172148AO**$instid**|AA**$barcode**|AC$password|AD**$password** You will get back the patron information for $barcode (something similar to the what's below). 
-24 Y 00120100113 170738AEFirstName MiddleName LastName|AA**$barcode**|BLY|CQY +24 Y 00120100113 170738AEFirstName MiddleName LastName|AA**$barcode**|BLY|CQY |BHUSD|BV0.00|AFOK|AO**$instid**| The response declares it is a valid patron BLY with a valid password CQY and shows the user's @@ -8997,38 +9019,38 @@ Escape character is '^]'. Within each request and response, a number of fields (either a fixed width or separated with a | [pipe symbol] and preceeded with a 2-character field identifier) are used. The fields vary between message pairs. - PairNameSupported?Details01Block PatronYes01_Block_Patron - ACS responds with 24 Patron Status Response09/10CheckinYes (with extensions)09/10_Checkin11/12CheckoutYes (no renewals)11/12_Checkout15/16HoldNo15/16_Hold17/18Item InformationYes (no extensions)17/18_Item_Information19/20Item Status UpdateNo19/20_Item_Status_Update - Returns Patron Enable response, but doesn't make any changes in EG23/24Patron StatusYes23/24_Patron_Status - 63/64 “Patron Information” preferred25/26Patron EnableNo25/26_Patron_Enable - Used during system testing and validation29/30RenewNO (maybe?)29/30_Renew35/36End SessionYes35/36_End_Session37/38Fee PaidNo37/38_Fee_Paid63/64Patron InformationYes (no extensions)63/64_Patron_Information65/66Renew AllNo65/66_Renew_All93/94LoginYes93/94_Login - Must be first command to Evergreen ACS (via socket) or SIP will terminate97/96Resend last messageYes97/96_Resend99/98SC/ACS StatusYes99/98_SC_and_ACS_Status + PairNameSupported?Details01Block PatronYes01_Block_Patron - ACS responds with 24 Patron Status Response09/10CheckinYes (with extensions)09/10_Checkin11/12CheckoutYes (no renewals)11/12_Checkout15/16HoldNo15/16_Hold17/18Item InformationYes (no extensions)17/18_Item_Information19/20Item Status UpdateNo19/20_Item_Status_Update - Returns Patron Enable response, but doesn't make any changes in EG23/24Patron StatusYes23/24_Patron_Status - 63/64 “Patron Information” preferred25/26Patron EnableNo25/26_Patron_Enable - 
Used during system testing and validation29/30RenewNO (maybe?)29/30_Renew35/36End SessionYes35/36_End_Session37/38Fee PaidNo37/38_Fee_Paid63/64Patron InformationYes (no extensions)63/64_Patron_Information65/66Renew AllNo65/66_Renew_All93/94LoginYes93/94_Login - Must be first command to Evergreen ACS (via socket) or SIP will terminate97/96Resend last messageYes97/96_Resend99/98SC/ACS StatusYes99/98_SC_and_ACS_Status 01 Block Patron01 Block Patron A selfcheck will issue a Block Patron command if a patron leaves their card in a selfcheck machine or if the selfcheck detects tampering (such as attempts - to disable multiple items during a single item checkout, multiple failed pin entries, etc). + to disable multiple items during a single item checkout, multiple failed pin entries, etc). In Evergreen, this command does the following: - •User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL - Blocked Card Message field).•Card is marked inactive. + •User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL + Blocked Card Message field).•Card is marked inactive. The request looks like: - 01<card retained><date>[fields AO, AL, AA, AC] + 01<card retained><date>[fields AO, AL, AA, AC] Card Retained: A single character field of Y or N - tells the ACS whether the SC has retained the card (ex: left in the machine) or not. Date: An 18 character field for the date/time when the block occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time, “ Z” (3 blanks and a Z) represents UTC(GMT/Zulu) Fields: See Fields for more details. 
The response is a 24 “Patron Status Response” with the following: - •Charge privileges denied•Renewal privileges denied•Recall privileges denied (hard-coded in every 24 or 64 response)•hold privileges denied•Screen Message 1 (AF): blocked•Patron + •Charge privileges denied•Renewal privileges denied•Recall privileges denied (hard-coded in every 24 or 64 response)•hold privileges denied•Screen Message 1 (AF): blocked•Patron 09/10 Checkin09/10 Checkin The request looks like: - 09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI] + 09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI] No Block (Offline): A single character field of Y or N - Offline transactions are not currently supported so send N. xact date: an 18 character field for the date/time when the checkin occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time, “ Z” (3 blanks and a Z) represents UTC(GMT/Zulu) Fields: See Fields for more details. The response is a 10 “Checkin Response” with the following: - 10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG] + 10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG] Example (with a remote hold): - 09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01| - + 09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01| + 101YNY20100623 165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996 |CTBR3|CY373827|DANicholas Richard Woodard|CV02| @@ -9036,16 +9058,16 @@ Escape character is '^]'. to be picked up at CT “BR3”. Since the transaction is happening at AO “BR1”, the alert type CV is 02 for hold at remote library. 
The possible values for CV are: - •00: unknown•01: local hold•02: remote hold•03: ILL transfer (not used by EG)•04: transfer•99: other - + •00: unknown•01: local hold•02: remote hold•03: ILL transfer (not used by EG)•04: transfer•99: other + the logic for Evergreen to determine the content is magnetic_media comes from either legacy circ scripts or search_config_circ_modifier. - The default is non-magnetic. + The default is non-magnetic. The same is true for media_type (default 001). Evergreen does not populate the collection_code because it does not really have any, but it will provide the call_number where available. Unlike the item_id (barcode), the title_id is actually a title string, unless the configuration forces the return of the bib ID. Don't be confused by the different branches that can show up in the same response line. - •AO is where the transaction took place,•AQ is the “permanent location”, and•CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer). + •AO is where the transaction took place,•AQ is the “permanent location”, and•CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer). 11/12 Checkout11/12 Checkout @@ -9058,24 +9080,24 @@ Escape character is '^]'. 17/18 Item Information17/18 Item Information The request looks like: - 17<xact_date>[fields: AO,AB,AC] + 17<xact_date>[fields: AO,AB,AC] The request is very terse. AC is optional. The following response structure is for SIP2. (Version 1 of the protocol had only 6 total fields.) 
- + 18<circulation_status><security_marker><fee_type><xact_date> [fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS] Example: - 1720060110 215612AOBR1|ABno_such_barcode| - 1801010120100609 162510ABno_such_barcode|AJ| - 1720060110 215612AOBR1|AB1565921879| -1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1 + 1720060110 215612AOBR1|ABno_such_barcode| + 1801010120100609 162510ABno_such_barcode|AJ| + 1720060110 215612AOBR1|AB1565921879| +1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1 |CTBR3|CSQA76.73.P33V76 1996| The first case is with a bogus barcode. The latter shows an item with a circulation_status of 10 for in transit between libraries. The known values of circulation_status are enumerated in the spec. EXTENSIONS: The CT field for destination location and CS call number are used by - Automated Material Handling systems. + Automated Material Handling systems. 19/20 Item Status Update19/20 Item Status Update @@ -9083,15 +9105,15 @@ Escape character is '^]'. 
23/24 Patron Status23/24 Patron Status Example: - 2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password| - 24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS| - 2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password| - 24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS| - 2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|LY|CQN|BHUSD|BV0.00|AFOK|AOCONS| - 24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS| - 1.The BL field (SIP2, optional) is valid patron, so the + 2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password| + 24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS| + 2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password| + 24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS| + 2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|LY|CQN|BHUSD|BV0.00|AFOK|AOCONS| + 24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS| + 1.The BL field (SIP2, optional) is valid patron, so the N value means - bad_barcode doesn't match a patron, the Y value means 999999 does.2.The CQ field (SIP2, optional) is valid password, so the N + bad_barcode doesn't match a patron, the Y value means 999999 does.2.The CQ field (SIP2, optional) is valid password, so the N value means bad_password doesn't match 999999's password, the Y means userpassword does. So if you were building the most basic SIP2 authentication client, you would check for @@ -9111,8 +9133,8 @@ Escape character is '^]'. 35/36 End Session35/36 End Session - 3520100505 115901AOBR1|AA999999| - 36Y20100507 161213AOCONS|AA999999|AFThank you!| + 3520100505 115901AOBR1|AA999999| + 36Y20100507 161213AOCONS|AA999999|AFThank you!| The Y/N code immediately after the 36 indicates success/failure. Failure is not particularly meaningful or important in this context, and for evergreen it is hardcoded Y. @@ -9123,15 +9145,15 @@ Escape character is '^]'. 
63/64 Patron Information63/64 Patron Information Attempting to retrieve patron info with a bad barcode: - 6300020060329 201700 AOBR1|AAbad_barcode| - 64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1| + 6300020060329 201700 AOBR1|AAbad_barcode| + 64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1| Attempting to retrieve patron info with a good barcode (but bad patron password): - 6300020060329 201700 AOBR1|AA999999|ADbadpwd| - + 6300020060329 201700 AOBR1|AA999999|ADbadpwd| + 64 Y 00020100623 141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00 |BD2 Meadowvale Dr. St Thomas, ON Canada -90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons +90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons |PIUnfiltered|AFOK|AOBR1| See 23/24 Patron Status for info on BL and CQ fields. @@ -9143,11 +9165,11 @@ Escape character is '^]'. 93/94 Login93/94 Login Example: - 9300CNsip_01|CObad_value|CPBR1| - [Connection closed by foreign host.] - ... - 9300CNsip_01|COsip_01|CPBR1| - 941 + 9300CNsip_01|CObad_value|CPBR1| + [Connection closed by foreign host.] + ... + 9300CNsip_01|COsip_01|CPBR1| + 941 941 means successful terminal login. 940 or getting dropped means failure. 97/96 Resend97/96 Resend @@ -9155,21 +9177,21 @@ Escape character is '^]'. 
99/98 SC and ACS Status99/98 SC and ACS Status
	 
	 99<status code><max print width><protocol version>
	 All 3 fields are required:
	 •status code - 1 character: 0: SC is OK; 1: SC is out of paper; 2: SC shutting down•max print width - 3 characters - the integer number of characters the client can print•protocol version - 4 characters - x.xx
	 
	 98<on-line status><checkin ok><checkout ok><ACS renewal policy>
<status update ok><offline ok><timeout period>
	 
<retries allowed><date/time sync><protocol version><institution id>
<library name><supported messages><terminal
	 location><screen message><print line>
	 Example:
	 9910302.00
	 98YYYYNN60000320100510    1717202.00AOCONS|BXYYYYYYYYYNYNNNYN|
	 The Supported Messages field BX appears only in SIP2, and specifies
	 whether 16 different SIP commands are supported by the ACS or not.
@@ -9193,7 +9215,7 @@
 
 Organizational Unit TypesOrganizational Unit Types
 
	 Organizational Unit Types are the terms used to refer to levels in the hierarchy
	 of your library system(s). Examples could include: All-Encompassing Consortium, Consortium Within a Consortium, Library System, Branch,
	 Bookmobile, Sub-Branch, Twig, etc.
	 Organizational units are the specific instances of the organization unit types that make up your library's
	 hierarchy. These will appear in the organizational unit tree in
	 Evergreen. Evergreen can support organizations as simple as a single library with one or more branches or as complex as a consortium
	 composed of many independently governed library systems. Organizational unit types should never have proper names since they are only generic types.
	 It is a good idea to set up all of your organizational types and units before loading other data.
In many cases, editing or deleting organizational units and types + It is a good idea to set up all of your organizational types and units before loading other data. In many cases, editing or deleting organizational units and types may be difficult once you have loaded records or users. The fields in the organizational unit type record include: - •Type Name - The name of the organization unit type.•Opac Label - This is the label displayed in the OPAC to describe the search + •Type Name - The name of the organization unit type.•Opac Label - This is the label displayed in the OPAC to describe the search range and the copy count columns for results. They are range - relative labels.•Parent Type - The parent organizational unit type of this type.•Can Have Volumes - Flag that allows an organizational unit of this type to contain - Volumes/Call Numbers and thus Copies.•Can Have Users - Flag that allows an Organizational unit of this type to be home to + relative labels.•Parent Type - The parent organizational unit type of this type.•Can Have Volumes - Flag that allows an organizational unit of this type to contain + Volumes/Call Numbers and thus Copies.•Can Have Users - Flag that allows an Organizational unit of this type to be home to Users. An organizational unit type can be added, edited, or removed using the staff client. To navigate to the Organization Unit Types from the staff client select Admin → Server Administration → Organization Types - Adding Organization Types1. + Adding Organization Types1. Select an organization type from the organization type tree on the left and - click New Child.2. + click New Child.2. Make sure your new type is selected and edit the Type Name, - OPAC Label and Parent Type.3. - Change the Parent Type if necessary.4. + OPAC Label and Parent Type.3. + Change the Parent Type if necessary.4. Check the Can Have Volumes and Copies check box if the - organization units of this type will have volumes and copies assigned to it.5. 

	 Check the Can Have Users check box if you will allow users
	 to have organization units of this type as their home unit.6. 
	 Click Save to save your new organization type. 
	 7. 
	 From the server command line, run autogen to apply the changes to the database and scripts. Run the following command 
	 as the opensrf user:
	 
/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u
	 8. 
	 As root, restart the Apache server: 
	 
/etc/init.d/apache2 restart
	 9. 
	 The staff client will need to be restarted for changes to appear. 
	 Deleting Organization Types 
	 You will not be able to delete organization types if organization units are assigned to that type. Before you can delete the organization 
type, you must change the organization type of the units associated with the type or delete the units. 
	 1. 
	 Select the organization type from the Organization Type 
	 tree.2. 
	 Click Delete.3. 
	 Click OK on the warning alert box.4. 
	 From the server command line, run autogen to apply the changes to the database and scripts. 
	 Run the following command as the opensrf user:
	 
/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u
	 5. 
	 As root, restart the Apache server: 
	 
/etc/init.d/apache2 restart
	 6. 
	 The staff client will need to be restarted for changes to appear. 
	 Editing Organization Types1. 
	 Select the organization type you wish to edit from the 
	 organization type tree.2. 
	 Make the changes in the right pane.3. 
	 Click Save to save your changes.4. 
	 From the server command line, run autogen to apply the changes to the database and scripts. 
Run the following command as the opensrf user: - + /openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u - 5. + 5. As root, restart the Apache server: - + /etc/init.d/apache2 restart - 6. + 6. The staff client will need to be restarted for changes to appear. Organizational UnitsOrganizational Units - + Organizational Units are the specific instances of the organization unit types that make up your library's hierarchy. These can include consortia, systems, branches, @@ -9294,83 +9316,83 @@ Escape character is '^]'. To navigate to the organizational units administration page in the staff client select Admin → Server Administration → Organizational Units - Adding Organizational Units1. + Adding Organizational Units1. Select an Organizational Unit from the organizational unit tree on the left and click - New Child.2. + New Child.2. Make sure your new unit is selected and edit the Organizational Unit Name, Organizational Unit Policy Code, Main Email Address and Main Phone Number. - + The Organizational Unit Name is the name that will appear in the OPAC. The Policy Code is used by the system to associate policies and copies with the unit. - 3. + 3. Select the Organization Unit Type and - Parent Organization Unit.4. + Parent Organization Unit.4. Check the Can Have Volumes and Copies check box if the - organization units of this type will have volumes and copies assigned to it.5. + organization units of this type will have volumes and copies assigned to it.5. Check the OPAC Visible check box if you want this location to be - visible in the OPAC for searching.6. - Click Save to save your new organizational unit.7. + visible in the OPAC for searching.6. + Click Save to save your new organizational unit.7. From the server command line, run autogen to apply the changes to the database and scripts. Run the following command as the opensrf user: - + /openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u - 8. + 8. 

	 As root, restart the Apache server: 
	 
/etc/init.d/apache2 restart
	 9. 
	 The staff client will need to be restarted for changes to appear. 
	 Deleting Organizational Units 
	 You will not be able to delete organizational units if you have users, workstations or copies assigned to the unit. Before you can delete the organizational unit, you must move its users, workstations, copies and other associated resources to other organizational 
	 units. 
	 1. 
	 Select the organizational unit you wish to delete from the organizational unit tree in the left pane.2. 
	 Click Delete.3. 
	 Click OK on the warning alert box.4. 
	 From the server command line, run autogen to apply the changes to the database and scripts. 
	 Run the following command as the opensrf user:
	 
/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u
	 5. 
	 As root, restart the Apache server: 
	 
/etc/init.d/apache2 restart
	 6. 
	 The staff client will need to be restarted for changes to appear. 
	 Changing the Default Organizational Units and Types Using SQLEvergreen comes with several default organizational units set up out-of-the-box. Most libraries will want to customize the Org units with their own Organizational Units and 
	 types. 
The quickest way to do this is with SQL.The following procedure should only be done before you have migrated users and items into your system.1. 
	 Delete all but the core organizational unit: 
	 
BEGIN;
DELETE FROM actor.org_unit WHERE id > 1;
DELETE FROM actor.org_address WHERE id > 1;
DELETE FROM actor.workstation WHERE owning_lib > 1;
COMMIT;
	 2. 
	 Clean up our org unit types, in preparation for creating the organizational 
	 units hierarchy: 
	 
BEGIN;
DELETE FROM actor.org_unit_type WHERE id > 2;
UPDATE actor.org_unit_type SET name = 'System', can_have_users = TRUE
@@ -9379,65 +9401,67 @@
UPDATE actor.org_unit_type SET name = 'Branch', can_have_users = TRUE,
 can_have_vols = TRUE WHERE id = 2;
COMMIT;
-3. 
+3. 
	 Create a branch that hangs off the only remaining parent branch, setting the addresses to the system address temporarily: 
	 
INSERT INTO actor.org_unit (parent_ou, ou_type, ill_address, holds_address,
 mailing_address, billing_address, shortname, name)
 VALUES (1, 2, 1, 1, 1, 1, 'MYBRANCH', 'My Branch');
	 4. 
	 Find out what ID was assigned to the new branch: 
	 
SELECT id FROM actor.org_unit WHERE shortname = 'MYBRANCH';
	 5. 
	 Create our required org address and update actor.org_unit to point to the correct actor.org_address id (assuming the output of the last step was “101”, adjust accordingly): 
	 
BEGIN;
INSERT INTO actor.org_address (id, org_unit, street1, city, state, country, post_code)
 VALUES (2, 101, 'Fake Street', 'Fake', 'Fake', 'Fake', 'FOO BAR');
-UPDATE actor.org_unit SET ill_address= 2, holds_address = 2, mailing_address = 2, billing_address = 2 WHERE id = 101;
+
+UPDATE actor.org_unit SET ill_address = 2, holds_address = 2, 
+ mailing_address = 2, billing_address = 2 WHERE id = 101;
COMMIT;
	 6. 
	 Run autogen.sh for your changes to be updated. 
	 
./autogen.sh -c /openils/conf/opensrf_core.xml -u
	 7. 
	 As root, restart the Apache server: 
	 
/etc/init.d/apache2 restart
	 Editing Organizational Units1. 
Select the organizational unit you wish to edit from the - organizational unit tree in the left pane.2. - Edit the fields in the right pane.3. - Click Save to save your changes.4. + organizational unit tree in the left pane.2. + Edit the fields in the right pane.3. + Click Save to save your changes.4. From the server command line, run autogen to apply the changes to the database and scripts. Run the following command as the opensrf user: - + /openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u - 5. + 5. As root, restart the Apache server: - + /etc/init.d/apache2 restart - 6. + 6. The staff client will need to be restarted for changes to appear. - Library Hours of OperationLibrary Hours of Operation + Library Hours of OperationLibrary Hours of Operation Local System Administrators can use the Organizational Units interface to set the library's hours of operation. These are regular weekly hours; @@ -9446,45 +9470,45 @@ COMMIT; Hours of operation and closed dates affect due dates and overdue fines. - • + • - Due dates.  + Due dates.  Due dates that would fall on closed days are automatically pushed forward to the next open day. Likewise, if an item is checked out at 8pm, for example, and would normally be due on a day when the library closes before 8pm, Evergreen pushes the due date forward to the next open day. - • + • - Overdue fines .  + Overdue fines .  Overdue fines are not charged on days when the library is closed. To review or edit your library's hours of operation, - 1. + 1. Open the Organizational Units interface. - 2. + 2. - + Click the Hours of Operation tab. - + - 3. + 3. - + Review your library's weekly hours, editing as necessary. To set a closed day click the corresponding Closed button. Closed days (Monday and Sunday in the example below) have open and close times of 12:00 AM. - + - 4. + 4. 
Click Save to record any changes @@ -9492,7 +9516,7 @@ COMMIT; - Library Addresses Library Addresses + Library Addresses Library Addresses Addresses set in Admin (-) → Server Administrations → Organizational Units appear in patron email notifications, hold slips, and transit slips. Local @@ -9500,19 +9524,19 @@ COMMIT; Holds, and Mailing addresses are set correctly. - 1. + 1. Open the Organizational Units interface as described in the previous section. - 2. + 2. - + Click the Addresses tab. - + - 3. + 3. There are four address tabs: Physical, @@ -9520,12 +9544,12 @@ COMMIT; ILL. The Holds Address appears on transit slips when items are sent to fulfill holds at another branch. - 4. + 4. Click Save to record changes for each tab. - + The Valid check box is an optional setting that does not affect current Evergreen functions. @@ -9533,7 +9557,7 @@ COMMIT; - User and Group Permissions User and Group Permissions + User and Group Permissions User and Group Permissions It is essential to understand how user and group permissions can be used to allow staff @@ -9545,17 +9569,17 @@ COMMIT; have the ability to assign elevated permissions to a user, and which staff members have the ability to edit users in particular groups. - User PermissionsUser Permissions + User PermissionsUser Permissions - The User permissions editor allows an administrator to set up permission for an individual user. However, In most cases, permissions can be controlled more + The User permissions editor allows an administrator to set up permission for an individual user. However, In most cases, permissions can be controlled more efficiently at the group level with individuals being assigned to specific groups based on their roles in the library. To open the user permission editor, select Admin → User Permission Editor. Type the user's barcode when prompted. - Working LocationsWorking Locations + Working LocationsWorking Locations You may select more than one working location for a user. 
This will affect the availability of certain permissions which are dependent on the user having the working location. 
	 User Permission SettingsUser Permission Settings 
	 Below the working locations is the long list of all the permissions available on your system. For each permission you can apply it by checking the 
@@ -9564,63 +9588,63 @@ COMMIT;
	 the user the ability to grant the permission to others. 
	 Group Permissions Group Permissions 
	 Most permissions should be assigned at the group level. Here you can create new groups based on the roles and responsibilities of the users in your system. Staff will be able to assign users to these groups when they register patrons. 
	 It is a good idea to create your groups soon after creating your organizational units. 
	 It is also important to give careful consideration to the hierarchy of your groups to make permission assignment as efficient as possible. 
	 To enter the Group Permission module from the staff client menu, select 
	 Admin → Server Administration → Permission Groups 
	 Adding Groups1. 
	 Select the Group Configuration tab if not 
	 already selected in the right pane.2. 
	 Click New Child.3. 
	 Enter a unique Group Name.4. 
	 Enter a Description.5. 
	 Select a Permission Interval. This will determine the default expiry date of a user's account when you register patrons and select 
	 their groups.6. 
	 Selecting an Editing Permission will determine the group level the user will have for editing other users.7. 
	 Select the Parent Group for the group. 

	 The group will inherit its parent group's permissions so it is unnecessary to assign permissions already inherited from its parent.8. 
	 Click the Save button. 
	 Deleting Groups1. 
	 Select the group you wish to delete from the group tree on the 
	 left pane.2. 
	 Click the Delete button.3. 
	 Click on OK to verify. 
	 Editing Groups1. 
	 Select the group you wish to edit from the group tree on the left pane.2. 
	 Edit the fields you wish to change in the right pane.3. 
	 Click on Save to save changes. 
	 Adding Group Permissions1. 
	 Select the Group Permissions tab on the right 
	 pane.2. 
	 Click on New Mapping.3. 
	 Select the permission you would like to add from the 
	 Permission Select box.4. 
	 Select the Depth at which you wish to set the permission. This will determine if the group has the permission at a local level or across a 
	 system, or consortium, or other organizational unit type.5. 
	 Check the Grantable check box to allow the user to 
	 grant the permission to others.6. 
	 Click Add Mapping to add the permission to the group. 
	 Deleting Group Permissions1. 
	 Select the group permission you wish to delete.2. 
	 Click the Delete Selected button.3. 
	 Click on OK to verify. 
	 Editing Group Permissions1. 
	 Click on the Depth or Grantable 
	 field for the permission setting you wish to change.2. 
- Make changes to other permissions in the same way.3. + field for the permission setting you wish to change.2. + Make changes to other permissions in the same way.3. Click Save Changes when you are finished all the changes. - PermissionsPermissions + PermissionsPermissions Table 25.1. Permissions TablePermission NamePermission DescriptionABORT_REMOTE_TRANSITAllows user to abort a copy transit if the user is not at @@ -9701,23 +9725,23 @@ COMMIT; New staff accounts are created in much the same way as patron accounts, using Circulation → Register Patron or Shift+F1. Select one of the staff profiles from the Profile - Group drop-down menu. + Group drop-down menu. Each new staff account must be assigned a Working - Location which determines its access level in staff client interfaces. + Location which determines its access level in staff client interfaces. Accounts migrated from legacy systems or created before the upgrade to Evergreen 1.6 already have working locations assigned. - 1. + 1. To assign a working location open the newly created staff account using F1 (retrieve patron) or F4 (patron search). - 2. + 2. Select Other → User Permission Editor - 3. + 3. Place a check in the box next to the desired working location, then scroll to the bottom of the display and click Save. @@ -9725,13 +9749,13 @@ COMMIT; - + In multi-branch libraries it is possible to assign more than one working location - Staff Account Permissions Staff Account Permissions + Staff Account Permissions Staff Account Permissions To view a detailed list of permissions for a particular Evergreen account go to Admin (-) → User permission editor in the staff client. @@ -9745,28 +9769,28 @@ COMMIT; accounts. In the example below a Circ +Full Cat account is granted permission to process offline transactions, a function which otherwise requires an LSA login. - 1. + 1. Log in as a Local System Administrator. - 2. - • + 2. 

	 • 
	 Select Admin (-) → User Permission Editor and enter the staff account 
	 barcode when prompted 
	 OR 
	 • 
	 Retrieve the staff account first, then select Other → User Permission 
	 Editor 
	 3. 
	 
	 The User Permission Editor will load (this may take a few seconds). 
	 Greyed-out permissions cannot be edited because they are either a) already granted 
	 to the account, or b) not available to any staff account, including LSAs. 
	 
@@ -9785,21 +9809,21 @@
	 4. 
	 
	 To allow processing of offline transactions check the Applied 
	 column next to OFFLINE_EXECUTE. 
	 
	 5. 
	 
	 Scroll down and click Save to apply the changes. 
	 
@@ -9809,111 +9833,111 @@
	 Copy StatusCopy Status 
	 
	 To navigate to the copy status editor from the staff client menu, select 
	 Admin → Server Administration → Copy Statuses 
	 The Copy Status Editor is used to add, edit and delete statuses of copies in your 
	 system. Evergreen comes pre-loaded with a number of copy statuses. 
	 Table 25.2. Copy Status Table
	 ID  Name                  Holdable (default)  OPAC Visible (default)
	 0   Available             true                true
	 1   Checked out           true                true
	 2   Bindery               false               false
	 3   Lost                  false               false
	 4   Missing               false               false
	 5   In process            false               true
	 6   In transit            true                true
	 7   Reshelving            true                true
	 8   On holds shelf        true                true
	 9   On order              true                true
	 10  ILL                   true                false
	 11  Cataloging            true                false
	 12  Reserves              false               true
	 13  Discard/Weed          false               false
	 14  Damaged               false               false
	 15  On reservation shelf  true                false
	 It is possible to add, delete and edit copy statuses. 
	 Adding Copy Statuses1. 
	 In the New Status field, enter the name of the 
	 new status you wish to add.2. 
	 Click Add.3. 
	 Locate your new status and check the Holdable check box if you wish to allow users to place holds on items in this status. Check OPAC Visible if you wish for this status to appear in the public 
	 OPAC.4. 
	 Click Save Changes at the bottom of the screen to save changes to the new status. 
	 Deleting Copy Statuses1. 
Highlight the statuses you wish to delete. Hold the 
	 Shift key to select more than one status.2. 
	 Click Delete Selected.3. 
	 Click OK to verify. 
	 You will not be able to delete statuses if copies currently 
	 exist with that status. 
	 Editing Copy Statuses1. 
	 Double click on a status name to change its name and enter the new name. 
	 To change whether a status is visible in the OPAC, check or uncheck the OPAC Visible check box.To allow patrons the ability to hold items in that status, check the Holdable check box. To prevent users from holding items in that status, uncheck the Holdable 
	 check box.2. 
	 Once you have finished editing the statuses, remember to click Save Changes. 
	 Billing TypesBilling Types 
	 
	 The billing types editor is used for creating, editing and deleting billing types. To navigate to the billing types editor from the staff client menu, select 
	 Admin → Server Administration → Billing Types 
	 Adding Billing Types1. 
	 Click New Billing Type.2. 
	 Enter the name of the billing type.3. 
	 Select the Org Unit to use this billing type.4. 
	 Enter the Default Price. This is only the default since the actual price of a specific billing can be adjusted when staff create 
	 a billing.5. 
	 Click Save to save the new billing type. 
	 Deleting Billing Types1. 
	 Check the check box of the billing type(s) you wish to delete.2. 
	 Click Delete Selected. 
	 The selected billing types will be deleted without a 
	 verification alert. 
	 Editing Billing Types1. 
	 Double click on a billing type to open the editing window.2. 
Editing Billing Types
1. Double click on a billing type to open the editing window.
2. Make desired changes to the name, Org Unit and Default Price.
3. Once you have finished editing, click Save.
Circulation Modifiers
The circulation modifier editor is used to create, edit and delete modifier categories that control circulation policies on specific groups of items. To navigate to the circulation modifiers editor from the staff client menu, select Admin → Server Administration → Circulation Modifiers.
Adding Circulation Modifiers
1. Click New Circ Modifier.
2. Enter a Code, Name and Description.
3. Select the SIP 2 Media Type.
4. Check the Magnetic Media check box if the item is magnetic media, such as a cassette tape.
5. Click Save to save the new circulation modifier.
Deleting Circulation Modifiers
1. Check the check box(es) next to the circulation modifier(s) you wish to delete.
2. Click Delete Selected near the top of the page.
The selected circulation modifiers will be deleted without a verification alert.
Editing Circulation Modifiers
1. Double click on the row of the circulation modifier you wish to edit.
2. Make desired changes.
3. Once you have finished editing, click Save.
Cataloging Templates
Cataloging templates are essential for making the cataloging process more efficient. Templates are used so that the basic structure of specific types of cataloging records can be loaded when the cataloger adds a new record.
Adding Cataloging Templates
1. Create a MARC template in the directory /openils/var/templates/marc/.
It should be in XML format. Here is an example file k_book.xml:

<record>
  <leader>00620cam a2200205Ka 4500</leader>
@@ -9962,32 +9986,32 @@
COMMIT;

2. Add the template to the marctemplates list in the open-ils.cat section of the Evergreen configuration file opensrf.xml.
3. Restart Perl services for the changes to take effect:
/openils/bin/osrf_ctl.sh -l -a restart_perl
Adjusting Search Relevancy Rankings
Abstract: This section describes indexed-field weighting and matchpoint weighting, which control relevance ranking in Evergreen catalog search results. As of version 1.6, adjusting relevancy requires direct access to the Evergreen database.
In tuning search relevance, it is good practice to make incremental adjustments, capture search logs, and assess results before making further adjustments.
Indexed-field Weighting
Indexed-field weighting is configured in the weight column of the config.metabib_field table in the Evergreen database, which follows the other four columns in this table: field_class, name, xpath, and format. The following is one representative line from the config.metabib_field table:

author | conference | //mods32:mods/mods32:name[@type='conference']/mods32:
namePart[../mods32:role/mods32:roleTerm[text()='creator']] | mods32 | 1 )

@@ -9999,24 +10023,24 @@
namePart[../mods32:role/mods32:roleTerm[text()='creator']] | mods32 | 1 )
Aimee and Jaguar than for a record with the term jaguar in another indexed field.
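Raising a weight is a single UPDATE against that table. A minimal sketch, using the conference-author row shown above (the new weight value of 2 is illustrative only; verify the row, its id, and its current weight in your own database before updating):

```sql
-- Inspect the current weight first; values may differ per installation.
SELECT id, field_class, name, weight
  FROM config.metabib_field
 WHERE field_class = 'author' AND name = 'conference';

-- Raise the weight of the conference-author index (illustrative value).
UPDATE config.metabib_field
   SET weight = 2
 WHERE field_class = 'author' AND name = 'conference';
```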
- Match point WeightingMatch point Weighting + Match point WeightingMatch point Weighting - + Match point weighting provides another way to fine-tune Evergreen relevance ranking, and is configured through floating-point multipliers in the multiplier column of the search.relevance_adjustment table. Weighting can be adjusted for one, more, or all multiplier fields in search.relevance_adjustment. You can adjust the following three matchpoints: - • + • first_wordboosts relevance if the query is one term long and matches the first term in the indexed field (search for twain, get a bonus for twain, mark but not mark twain) - • + • word_order increases relevance for words matching the order of search terms, so that the results for the search legend suicide would match higher for the book Legend of a Suicide than for the book, Suicide Legend - • + • full_match boosts relevance when the full query exactly matches the entire indexed field (after space, case, and diacritics are normalized). So a title search for The Future of Ice @@ -10026,49 +10050,49 @@ namePart[../mods32:role/mods32:roleTerm[text()='creator']] | mods32 | 1 ) Here are the default settings of the search.relevance_adjustment table: Table 25.3. 
search.relevance_adjustment table

field_class | name        | bump_type  | multiplier
author      | conference  | first_word | 1.5
author      | corporate   | first_word | 1.5
author      | other       | first_word | 1.5
author      | personal    | first_word | 1.5
keyword     | keyword     | word_order | 10
series      | seriestitle | first_word | 1.5
series      | seriestitle | full_match | 20
title       | abbreviated | first_word | 1.5
title       | abbreviated | full_match | 20
title       | abbreviated | word_order | 10
title       | alternative | first_word | 1.5
title       | alternative | full_match | 20
title       | alternative | word_order | 10
title       | proper      | first_word | 1.5
title       | proper      | full_match | 20
title       | proper      | word_order | 10
title       | translated  | first_word | 1.5
title       | translated  | full_match | 20
title       | translated  | word_order | 10
title       | uniform     | first_word | 1.5
title       | uniform     | full_match | 20
title       | uniform     | word_order | 10

Combining Index Weighting and Match Point Weighting
Index weighting and match point weighting may be combined. The relevance boost of the combined weighting is equal to the product of the two multiplied values. If the relevance setting in config.metabib_field were increased to 2, and the multiplier set to 1.2 in the search.relevance_adjustment table, the resulting match point increase would be 240% (2 x 1.2 = 2.4). In practice, these weights are applied serially -- first the index weight, then all the matchpoint weights that apply -- because they are evaluated at different stages of the search process.
Adjusting Relevancy for Keyword Searches
Out of the box, keyword searching does not boost the ranking of terms appearing in the title or subject fields, since there is just one keyword index, which does not distinguish terms that appear in the title field from terms that appear in, for example, the notes field.
In comparison, the title index is actually composed of a number of separate indexes -- title|proper, title|uniform, title|alternative, title|translated, etc. -- that collectively form the title index. You can see this in the config.metabib_field table. The following procedure will add a keyword|title index so that terms found in the title field of an item are given more weight than terms in other fields.
1. From the command line, access the PostgreSQL command line interface:
psql -U evergreen
2. Clone the title|proper index to create a keyword|title index (here 6 = the id of the title|proper index):

INSERT INTO config.metabib_field
    (field_class, name, xpath, weight, format, search_field, facet_field)
    SELECT 'keyword', 'title', xpath, weight, format, search_field, facet_field
    FROM config.metabib_field
    WHERE id = 6;

3. Populate the keyword|title index with a set of index entries cloned from the metabib.title_field_entry table (again, 6 = the title|proper index):

INSERT INTO metabib.keyword_field_entry (source, field, value)
    SELECT source, 17, value -- 17 = the new keyword|title index; the id may be different in your database
    FROM metabib.title_field_entry
    WHERE field = 6;

4. Bump the relevance when the first search term appears first in the title in a keyword search. 17 = our new keyword|title index (this may be different in your database, so you may need to check config.metabib_field for the id of your new index).

INSERT INTO search.relevance_adjustment (active, field, bump_type, multiplier)
    VALUES (true, 17, 'first_word', 5);

5. Boost the relevance for search terms appearing in the title in general. 17 = our new keyword|title index (this may be different in your database, so you may need to check config.metabib_field for the id of your new index).
UPDATE config.metabib_field
SET weight = 10
WHERE id = 17;
@@ -10107,48 +10131,48 @@
Notifications can be set up for Holds, Overdue items and Predue items. There are two ways to configure each of these types of notifications.
Hold Notifications
Hold notifications can be used so that library users are sent an email when their items are available for pickup. This notification is triggered when the item being held is captured by a library staff member and the item is in the On Holds Shelf status.
Hold Notifications using the Action Trigger
The easiest way to set up hold notifications is to use the Action Trigger mechanism introduced in Evergreen 1.6.
1. From the staff client menu, click on Admin → Local Administration → Notifications / Action triggers.
2. Locate the Action Trigger Definition with the Name Hold Ready for Pickup Email Notification.
3. Double click on the item row (but not on the hyperlinked Name) to open the editing page.
4. Check the Enabled check box to enable it.
5. Edit the Template text box to customize the body of the email as needed. Note that text between “[% %]” markers are variables to be generated by the system. For example, [% user.family_name %] will be replaced by the family name of the user receiving a notice.
6. Click Save to save your changes.
7. Hold notices are now activated and will be processed the next time action triggers are processed. See the section called “Processing Action Triggers” for more details on processing action triggers.
Hold Notifications using the Evergreen Configuration File
An older method for setting up hold notifications is through the configuration file /openils/conf/opensrf.xml.
1.
Open the file /openils/conf/opensrf.xml with your favorite text editor and locate this section of the configuration file:

<notify_hold>
  <email>true</email> <!-- set to true for hold notice emails -->
</notify_hold>

Ensure that <email> is set to true.
2. Locate the following section of the configuration file:

...
<email_notify> <!-- global email notification settings -->
...

Point the <template> variable to the hold notification template you will be using for hold notifications.
3. Locate the template and edit as desired. Use the example template provided as a guide.
Overdue and Predue Notifications
Overdue and Predue email notifications can be used to inform users that they have materials which are overdue, or to warn them that materials are almost overdue.
Activating the Existing Overdue Action Triggers
The easiest way to set up overdue notifications is to use the Action Trigger mechanism introduced in Evergreen 1.6.
1. From the staff client menu, click on Admin → Local Administration → Notifications / Action triggers.
2. Locate the Action Trigger Definition you wish to activate. There are several overdue notices preloaded with Evergreen 1.6.
3. Double click on the item row (but not on the hyperlinked Name) to open the editing page.
4. Check the Enabled check box to enable it.
5. Edit the Template text box to customize the body of the email as needed. Note that text between “[% %]” markers are variables to be generated by the system. For example, [% user.family_name %] will be replaced by the family name of the user receiving a notice.
6. Click Save to save your changes.
7. Overdue notices are now activated and will be processed the next time action triggers are processed.
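On the server side, queued events are generated and reacted to by the trigger runner script. A hedged sketch of a typical manual invocation follows (the script path and flag names match the stock Evergreen 1.6 layout, but verify the paths and supported options against --help on your own installation):

```
# Run as the opensrf user. --run-pending processes queued events whose
# processing delay has elapsed; paths may differ on your system.
sudo -u opensrf /openils/bin/action_trigger_runner.pl \
    --osrf-config /openils/conf/opensrf_core.xml \
    --run-pending
```

In production this command is usually run periodically from cron rather than by hand.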
See the section called “Processing Action Triggers” for more details on processing action triggers. - Creating Overdue and Predue Notifications by Cloning Existing Action Triggers + Creating Overdue and Predue Notifications by Cloning Existing Action Triggers - - + + If you wish to add overdue notices for different periods of time or wish to create a predue notice simply clone an existing overdue note, give it a unique Name, customize as needed. and save. There are no pre-existing predue notices so they will need to be created by cloning an existing overdue notice. To make them predue notices, use a negative value in the Processing Delay Context Field. For example, to create a predue notice the day before the due date, use the value -1 days. - Creating Overdue and Predue Notices using the Evergreen Configuration File + Creating Overdue and Predue Notices using the Evergreen Configuration File - - + + It is also possible to create overdue and predue notices using the Evergreen configuration file /openils/conf/opensrf.xml - 1. + 1. Open /openils/conf/opensrf.xml with your favorite text editor. Locate this section of the configuration file: - + <overdue> ... <notice> @@ -10267,21 +10291,21 @@ WHERE id = 17; ... - 2. + 2. From this section of the configuration file, you may: - •Point to the template file for the specific notice: <email_template>•Set the interval time for the specific notice: <notify_interval>•Indicate whether to attempt email notification for the notice: <email_notify>•For predue notices, you may also specify on which circulation ranges to activate the courtesy notice: + •Point to the template file for the specific notice: <email_template>•Set the interval time for the specific notice: <notify_interval>•Indicate whether to attempt email notification for the notice: <email_notify>•For predue notices, you may also specify on which circulation ranges to activate the courtesy notice: <circ_duration_range> - 3. + 3. Locate the templates and edit as desired. 
Use the example templates provided as guides. - 4. + 4. From the configuration file you may also set the default email sender address. However, this is just the default and the email sender address for specific organizational units can be specified in the library settings editor from the staff client. - You also need to set the email server from the configuration file. By default, it uses the localhost. + You also need to set the email server from the configuration file. By default, it uses the localhost. @@ -10292,12 +10316,12 @@ WHERE id = 17; Report any errors in this documentation using Launchpad. Chapter 26. Local Administration MenuChapter 26. Local Administration Menu - OverviewOverview + OverviewOverview - + Many Evergreen configuration options are available under the Admin (-) → Local Administration rollover menu. - + This menu is new in Evergreen 1.6 and provides shortcuts to settings also available @@ -10306,7 +10330,7 @@ WHERE id = 17; Local Administration rollover menu. Items on this menu are visible to anyone logged into the staff client but usually require special permissions to edit. The following table describes each of the menu options. - Menu optionDescription + Menu optionDescription Receipt Template Editor Customize printed receipts (checkout receipts, hold slips, etc) for a single workstation @@ -10359,36 +10383,36 @@ WHERE id = 17; Receipt templates are saved on the workstation, but it is possible to export the templates to import to other workstations.   -1. +1. Select Admin (-) → Local Administration → Receipt Template Editor.   - 2. + 2. Select the checkout template from the dropdown menu.   - 3. + 3. You can edit the Header, Line Item or Footer on the right hand side.   - 4. + 4. In the upper right hand corner you can see the available macros by clicking on the Macros button.  A macro prints a real value from the database. The macros that are available vary slightly between types of receipt templates (i.e. bills, holds, items). - 5. 
- Here are the available macros for an item receipt, like a checkout receipt.   + 5. + Here are the available macros for an item receipt, like a checkout receipt.   - + - Adding an imageAdding an image + Adding an imageAdding an image -1. +1. You can edit the Header to have an image.  This is the default checkout Header.   - 2. + 2. Using HTML tags you can insert a link to an image that exists on the web.  The link will end in .jpg or possibly .gif.  To get this link you can right click on the image and choose Copy Image @@ -10398,26 +10422,26 @@ WHERE id = 17; As… - 3. + 3. Enter the URL of the link for the image that you just copied off a website. By clicking outside the Header box the Preview will update to reflect the edit you just made.   - 4. + 4. If the image runs into the text, add a <br/> after the image to add a line break. - You may use most HTML tags.  See http://www.w3schools.com/html/ for more information on HTML tags.   + You may use most HTML tags.  See http://www.w3schools.com/html/ for more information on HTML tags.   - Line ItemLine Item + Line ItemLine Item - This is what the default Line Item looks like: + This is what the default Line Item looks like: - + In this example, the macro %barcode% prints the item barcodes of the books that were @@ -10427,44 +10451,44 @@ WHERE id = 17; In this example, we will not make any changes to the Line Item - The due date can only be printed in the YYYY-MM-DD format. + The due date can only be printed in the YYYY-MM-DD format. - Editing the footerEditing the footer + Editing the footerEditing the footer -1. - This is what the default Footer looks like: +1. + This is what the default Footer looks like: - - 2. - Remove the “You were helped by %STAFF_FIRSTNAME% <br/>”.  As many + + 2. + Remove the “You were helped by %STAFF_FIRSTNAME% <br/>”.  As many libraries use a generic circulation login on the circulation desk, the “You were helped by…” note isn’t meaningful.   - - 3. 
- Once you have the checkout template how you want it, click Save Locally to save + + 3. + Once you have the checkout template how you want it, click Save Locally to save the template to your computer.   - - 4. - Click OK. + + 4. + Click OK. - + - The footer is a good place to advertise upcoming library programs or events.   + The footer is a good place to advertise upcoming library programs or events.   - Exporting templatesExporting templates + Exporting templatesExporting templates As you can only save a template on to the computer you are working on you will need to @@ -10473,53 +10497,53 @@ WHERE id = 17; to checkin items or capture holds with). -1. - Click on Export.   +1. + Click on Export.   - + -2. - Select the location to save the template to, name the template, and click Save. +2. + Select the location to save the template to, name the template, and click Save.   - + -3. - Click OK.   +3. + Click OK.   - + - Importing TemplatesImporting Templates + Importing TemplatesImporting Templates -1. - Click Import. +1. + Click Import. - - 2. - Navigate to and select the template that you want to import.  Click Open. + + 2. + Navigate to and select the template that you want to import.  Click Open. - - 3. - Click OK. + + 3. + Click OK. - - 4. - Click Save Locally. + + 4. + Click Save Locally. - - 5. - Click OK. + + 5. + Click OK. - + @@ -10528,27 +10552,27 @@ WHERE id = 17; Global Font and Sound Settings apply to the current workstation only. Use to turn staff client sounds on/off or to adjust the font size in the staff client interface. These settings do not affect OPAC font sizes. - 1. + 1. Select Admin (-) → Local Administration → Global Font and Sound Settings. - 2. + 2. - - + + To turn off the system sounds, like the noise that happens when a patron with a block is retrieved check the disable sound box and click Save to Disk.   - + - 3. + 3. - + To change the size of the font, pick the desired option and click Save to Disk.   
- + @@ -10557,26 +10581,26 @@ WHERE id = 17; Use the Printer Settings Editor to configure printer output for each workstation. - 1. + 1. Select Admin (-) → Local Administration → Printer Settings Editor. - 2. + 2. - + From this screen you can print a test page, or alter the page settings for your receipt printer.   - + - 3. + 3. - + Click on Page Settings to change printing format and option settings.  Click on the Margins & Header/Footer tab to adjust - + @@ -10585,73 +10609,73 @@ WHERE id = 17; These dates are in addition to your regular weekly closed days (see the section called “Library Hours of Operation”).    Both regular closed days and those entered in the Closed Dates Editor affect due dates and fines: - • + • Due dates.  - + Due dates that would fall on closed days are automatically pushed forward to the next open day. Likewise, if an item is checked out at 8pm, for example, and would normally be due on a day when the library closes before 8pm, Evergreen pushes the due date forward to the next open day. - • + • Overdue fines.  - + Overdue fines are not charged on days when the library is closed. - Multi-Day ClosingMulti-Day Closing + Multi-Day ClosingMulti-Day Closing - 1. + 1. Select Admin (-) → Local Administration → Closed Dates Editor. 2. - + Select Add Multi-Date Closing if your closed dates are entire business days. - + - 3. + 3. - + Enter applicable dates and a descriptive reason for the closing and click Save.  Check the Apply to all of my libraries box if your library is a multi-branch system and the closing applies to all of your branches.   - + - + You can type dates into fields using YYYY-MM-DD format or use calendar widgets to choose dates. - Detailed ClosingDetailed Closing + Detailed ClosingDetailed Closing - + If your closed dates include a portion of a business day, select Add Detailed Closing at Step 2, then enter detailed hours and dates and click Save. Time format must be HH:MM. - + Copy Locations EditorCopy Locations Editor - 1. + 1. 
Select Admin (-) → Local Administration → Copy Locations Editor. - 2. + 2. - + You can create new copy locations, or edit existing copy locations. To create a new shelving location type in the name, and select Yes or No for the various attributes: OPAC Visible, @@ -10661,38 +10685,38 @@ WHERE id = 17; before an item is captured for a hold.  Finally click Create. - + - 3. + 3. - + In the bottom part of the Copy Locations Editor you can edit or delete existing copy locations. You cannot delete a location that contains items. In this example the copy location Adult Videos is being edited. - + - + There are also options in the Copy Editor for a copy to be OPAC Visible-yes or no, Holdable-yes or no, or Circulate-yes or no.  If either the copy record or the shelving location is set to Circulate-no, then the item will not be able to circulate. - + This is where you see the shelving locations in the Copy Editor: - + This is where the shelving location appears in the OPAC. - + Library Settings EditorLibrary Settings Editor @@ -10702,7 +10726,7 @@ WHERE id = 17; For descriptions of available settings see the Settings Overview table below. To open the Library Settings Editor select Admin (-) → Local Adminstration → Library Settings Editor. - Settings OverviewSettings Overview + Settings OverviewSettings Overview This table describes available settings and shows which LSAs can change on a per-library basis. Below the table is a list of data types [] with details about acceptable @@ -10827,20 +10851,20 @@ WHERE id = 17; This is where you configure your non-catalogued types that appear in the dropdown menu for non-catalogued circulations.  - 1. + 1. Select Admin (-) → Local Administration → Non Catalogued Type Editor. - 2. + 2. - + To set up a new non-catalogued type, type the name in the left hand box, and choose how many days the item will circulate for.  Click Create. 
- + Select the Circulate In-House box for non-catalogued items @@ -10851,12 +10875,12 @@ WHERE id = 17; - + This is what the dropdown menu for non-catalogued circulations in the patron checkout screen looks like: - + @@ -10867,16 +10891,16 @@ WHERE id = 17; visible under Admin (-) → Local Administration → Group Penalty Thresholds. - PenaltyEffectPATRON_EXCEEDS_FINESBlocks new circulations and renewals if patron exceeds X in fines PATRON_EXCEEDS_OVERDUE_COUNTBlocks new circulations and renewals if patron exceeds X overdue items PATRON_EXCEEDS_CHECKOUT_COUNTBlocks new circulations if patron exceeds X items out + PenaltyEffectPATRON_EXCEEDS_FINESBlocks new circulations and renewals if patron exceeds X in fines PATRON_EXCEEDS_OVERDUE_COUNTBlocks new circulations and renewals if patron exceeds X overdue items PATRON_EXCEEDS_CHECKOUT_COUNTBlocks new circulations if patron exceeds X items out - + Accounts that exceed penalty thresholds display an alert message when opened and require staff overrides for blocked transactions. - + @@ -10885,12 +10909,12 @@ WHERE id = 17; Penalty threshold inheritance rulesPenalty threshold inheritance rules - + Local penalty thresholds are identified by Org Unit and appear in the same table as the system wide defaults. - + @@ -10903,7 +10927,7 @@ WHERE id = 17; profile. - + Multi-branch libraries may create rules for the entire library system or for individual branches. Evergreen will use the most specific applicable rule. @@ -10914,30 +10938,30 @@ WHERE id = 17; Creating local penalty thresholdsCreating local penalty thresholds - + Local System Administrators can override the system defaults by creating local penalty thresholds for selected patron groups. - 1. + 1. Select Admin (-) → Local Administration → Group Penalty Thresholds. - 2. + 2. - + Click New Penalty Threshold. - + - 3. + 3. - + The new penalty pop-up appears. Complete all fields and click Save. 
- + @@ -10959,34 +10983,34 @@ WHERE id = 17; PATRON_EXCEEDS_OVERDUE_COUNT, or PATRON_EXCEEDS_FINES - 4. + 4. - + After clicking Save the new threshold appears with the defaults. Evergreen always gives precedence to local settings (in this example, BSP). - + - Deleting or editing local penalty thresholdsDeleting or editing local penalty thresholds + Deleting or editing local penalty thresholdsDeleting or editing local penalty thresholds - + To delete a local threshold select the row to remove and click Delete Selected. The threshold is removed immediately without further confirmation. - + - + To edit a local threshold, double-click the desired row to open the pop-up form. Edit the form and click Save. New settings take effect immediately. - + @@ -10997,55 +11021,55 @@ WHERE id = 17; way to save and report on additional information that doesn’t fit elsewhere in Evergreen's default records.  It is possible to have stat cats for copies or patrons.   - 1. + 1. Select Admin (-) → Local Administration → Statistical Categories Editor. - 2. + 2. - + To create a new stat cat, enter the name of the stat cat, select if you want OPAC Visiblity, and select either patron or copy from the Type dropdown menu.   - + Copy Stat Cats.  - + The image above shows some examples of copy stat cats. You would see these when editing items in the Copy Editor, also known as the Edit Item Attributes screen. You might use copy stat cats to track books you have bought from a specific vendor, or donations. - + This is what the copy stat cat looks like in the Copy Editor. - + - + Patron stat cats.  - + Below are some examples of patron stat cats.  Patron stat cats can be used to keep track of information like the high school a patron attends, or the home library for a consortium patron, e.g. Interlink. You would see these in the fifth screen of patron registration/edit patron.   - + - + This is what the patron stat cat looks like in the patron registration screen.  
It looks very similar in the patron edit screen. - + @@ -11053,43 +11077,43 @@ WHERE id = 17; Field Documentation is custom field-level documentation that explains individual fields for library staff. As of 2.0, the field documentation only is used in the Patron Registration screen. - Administering Field DocumentationAdministering Field Documentation + Administering Field DocumentationAdministering Field Documentation - + If their permission settings allow, staff members can create local field documentation. This requires the ADMIN_FIELD_DOC permission. The 'depth' at which that permission is applied, is the maximum level of the org tree at which the staff member will be able to create field documentation. - 1. + 1. In the staff client, select Admin → Local Administration → Field Documentation - 2. + 2. Click the New button. - 3. + 3. Using the fm_class selector, select the database table for which you wish to create Field Documentation. This will show all of the existing Field Documentation for that table. - As of Evergreen 2.0, only the ILS User table is used anywhere in the Evergreen UI - 4. + As of Evergreen 2.0, only the ILS User table is used anywhere in the Evergreen UI + 4. Using the owner selector, select the topmost org unit at which you would like the field documentation to be available. - 5. + 5. Using the field selector, select the field you wish to document. - 6. + 6. Enter your actual documentation in the string text box. - 7. + 7. Click Save to save your Field Documentation entry - + To view field documentation for different tables, use the Class selector to filter the Field Documentation list - Patron Field DocumentationPatron Field Documentation + Patron Field DocumentationPatron Field Documentation - + On the patron registration screen there are small boxes along the left hand side. If a magnifying glass appears, you may click that magnifying glass to retrieve the Field Documentation for that patron field. 
@@ -11113,133 +11137,133 @@ WHERE id = 17; many people say they would use the library if it were open on a Sunday, or you could get a list of patrons who say they would like to receive marketing material from the library. - 1. + 1. - + From the Admin (-) menu, select Local Administration → Surveys. - + - 2. + 2. - + The Survey List will open. In this example the table is empty because no surveys have been created. Click Add New Survey. - + - 3. + 3. - + Fill out the New Survey form, then click Save Changes. - + A few tips when creating a new survey: - • + • Start Date must always be in the future. It is not possible to add questions to a survey after the start date. - • + • Dates should be in YYYY-MM-DD format - • + • OPAC Survey? and Poll Style? are not yet implemented - leave unchecked - • + • Check Is Required if the survey should be mandatory for all new patrons - • + • Check Display in User Summary to make survey answers visible from patron records - 4. + 4. - + A summary of your new survey will appear. Type the first survey question in the Question field, then click Save Question & Add Answer. Survey questions are multiple choice. - + - 5. + 5. - + Enter possible multiple choice answers and click Add Answer. Each question may have as many answers as you like. - + - 6. + 6. - Repeat the steps above to add as many questions and answers as you wish. When + Repeat the steps above to add as many questions and answers as you wish. When finished click Save, then Go Back to return to the survey list. - + - 7. + 7. - Your new survey will appear in the Survey List table. To make further changes click the survey name to open the detailed view. + Your new survey will appear in the Survey List table. To make further changes click the survey name to open the detailed view. - + - This is what the survey looks like in the patron registration/edit screen. Note that in + This is what the survey looks like in the patron registration/edit screen. 
Note that in this example this survey question appears in red and is required as the Is Required box was checked when creating the survey. - + - To see a patron’s response to a survey, retrieve the patron record. Click Other → Surveys to see the response. - - + To see a patron’s response to a survey, retrieve the patron record. Click Other → Surveys to see the response. + + Cash ReportsCash Reports - 1. + 1. Select Admin (-) → Local Administration → Cash Reports. - 2. + 2. - + Select the start date and the end date that you wish to run a cash report for.  You can either enter the date in the YYYY-MM-DD format, or click on the calendar icon to use the calendar widget.   - + - 3. + 3. Select your library from the drop down menu.  Click Go.   - 4. + 4. - + The output will show cash, check, and credit card payments.  It will also show amounts for credits, forgiven payments, work payments and goods payments (i.e. food for fines initiatives).  The output will look something like this: - + - + By clicking on the hyperlinked column headers (i.e. workstation, cash_payment, check_payment, etc.) it is possible to sort the columns to order the payments from smallest to largest, or largest @@ -11260,13 +11284,13 @@ WHERE id = 17; To access the Action Triggers module, select Admin → Local Administration → Notifications / Action triggers - You must have Local Administrator permissions to access the Action Triggers module. + You must have Local Administrator permissions to access the Action Triggers module. You will notice four tabs on this page: Event Definitions, Hooks, Reactors and Validators. Event DefinitionsEvent Definitions - + Event Definitions is the main tab and contains the key fields when working with action triggers. These fields include: Table 27.1. 
Action Trigger Event DefinitionsFieldDescriptionOwning libraryThe shortname of the library for which the action / trigger / hook is defined.NameThe name of the trigger event, that links to a trigger event environment containing a set of fields that will be returned to the Validators / Reactors for processing.HooksThe name of the trigger for the trigger event. The underlying action_trigger.hook table defines the Fieldmapper @@ -11277,27 +11301,27 @@ WHERE id = 17; if the validator returns false.ReactorsLinks the action trigger to the Reactor.Max Event Validity DelayDefine the threshold for how far back the action_trigger_runner.pl script should reach to generate a batch of events. - Creating Action Triggers1. + Creating Action Triggers1. From the top menu, select Admin → Local Administration → Notifications / Action triggers - 2. - Click on the New button.3. - Select an Owning Library.4. - Create a unique Name for your new action trigger.5. - Select the Hook.6. - Check the Enabled check box.7. - Create a unique Name for your new action trigger.8. + 2. + Click on the New button.3. + Select an Owning Library.4. + Create a unique Name for your new action trigger.5. + Select the Hook.6. + Check the Enabled check box.7. + Create a unique Name for your new action trigger.8. Set the Processing Delay in the appropriate format. Eg. 7 days to run 7 days from the trigger event - or 00:01:00 to run 1 hour after the Processing Delay Context Field.9. - Set the Processing Delay Context Field and Processing Group Context Field.10. + or 00:01:00 to run 1 hour after the Processing Delay Context Field.9. + Set the Processing Delay Context Field and Processing Group Context Field.10. Select the Validator, Reactor, Failure Cleanup and Success Cleanup. - 11. - Set the Processing Delay Context Field and Processing Group Context Field.12. + 11. + Set the Processing Delay Context Field and Processing Group Context Field.12. Enter text in the Template text box if required. 
These are for email messages. Here is a sample template for sending 90 day overdue notices: - + [%- USE date -%] [%- user = target.0.usr -%] To: robert.soulliere@mohawkcollege.ca From: robert.soulliere@mohawkcollege.ca Subject: Overdue Items Marked Lost Dear [% user.family_name %], [% user.first_given_name %] The following items are 90 days overdue and have been marked LOST. [% END %] - 13. + 13. Once you are satisfied with your new event trigger, click the Save button located at the bottom of the form. - A quick and easy way to create new action triggers is to clone an existing action trigger. - Cloning Existing Action Triggers1. + A quick and easy way to create new action triggers is to clone an existing action trigger. + Cloning Existing Action Triggers1. Check the check box next to the action trigger you wish to clone - 2. + 2. Click the Clone Selected on the top left of the page. - 3. + 3. An editing window will open. Notice that the fields will be populated with content from the cloned action trigger. Edit as necessary and give the new action trigger a unique Name. - 4. + 4. Click Save. - Editing Action Triggers1. + Editing Action Triggers1. Double-click on the action trigger you wish to edit - 2. + 2. Make your changes in the editing form that opens, then click Save. - Before deleting an action trigger, you should consider disabling it through the editing form. This way you can simply enable it if you decide that you would like to use + Before deleting an action trigger, you should consider disabling it through the editing form. This way you can simply enable it if you decide that you would like to use the action trigger in the future. - Deleting Action Triggers1. + Deleting Action Triggers1. Check the check box next to the action trigger you wish to delete - 2. + 2. Click the Delete Selected on the top left of the page. HooksHooks - + Hooks define the Fieldmapper class in the core_type column off of which the rest of the field definitions “hang”. Table 27.2. 
HooksFieldDescriptionHook KeyA unique name given to the hook.Core TypeUsed to link the action trigger to the IDL class in fm_IDL.xmlDescriptionText to describe the purpose of the hook. PassiveIndicates whether or not an event is created by direct user action or is circumstantial. You may also create, edit and delete Hooks but the Core Type must refer to an IDL class in the fm_IDL.xml file. ReactorsReactors - + Reactors link the trigger definition to the action to be carried out. Table 27.3. Action Trigger ReactorsFieldDescriptionModule NameThe name of the Module to run if the action trigger is validated. It must be defined as a subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm or as a module @@ -11390,7 +11414,7 @@ The following items are 90 days overdue and have been marked LOST. ValidatorsValidators - + Validators set the validation test to be performed to determine whether the action trigger is executed. Table 27.4. Action Trigger ValidatorsFieldDescriptionModule NameThe name of the subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm to validate the action trigger.DescriptionDescription of validation test to run. @@ -11398,23 +11422,23 @@ The following items are 90 days overdue and have been marked LOST. Processing Action TriggersProcessing Action Triggers - + To run the action triggers, an Evergreen administrator will need to run the trigger processing script /openils/bin/action_trigger_runner.pl --process-hooks --run-pending. This should be set up as a cron job to run periodically. You have several options when running the script: - •--run-pending: Run the pending events.•--process-hooks: Create hook events•--osrf-config=[config_file]: OpenSRF core config file. Defaults to: /openils/conf/opensrf_core.xml.•--custom-filters=[filter_file]: File containing a JSON Object which describes any hooks that should use a user-defined filter to find their target objects. 
Defaults to: /openils/conf/action_trigger_filters.json•--max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go away. - Defaults to 3600 (1 hour).•--hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for. If none are defined, it defaults to the list of hooks defined - in the --custom-filters option.•--debug-stdout: Print server responses to stdout (as JSON) for debugging.•--lock-file=[file_name]: Sets the lock file for the process.•--help: Show help information. + •--run-pending: Run the pending events.•--process-hooks: Create hook events•--osrf-config=[config_file]: OpenSRF core config file. Defaults to: + /openils/conf/opensrf_core.xml.•--custom-filters=[filter_file]: File containing a JSON Object which describes any hooks that should + use a user-defined filter to find their target objects. Defaults to: /openils/conf/action_trigger_filters.json•--max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go away. + Defaults to 3600 (1 hour).•--hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for. If none are defined, it defaults to the list of hooks defined + in the --custom-filters option.•--debug-stdout: Print server responses to stdout (as JSON) for debugging.•--lock-file=[file_name]: Sets the lock file for the process.•--help: Show help information. Chapter 28. Booking Module AdministrationChapter 28. Booking Module Administration Report errors in this documentation using Launchpad. Chapter 28. Booking Module Administration Report any errors in this documentation using Launchpad. - Chapter 28. Booking Module AdministrationChapter 28. Booking Module Administration + Chapter 28. Booking Module AdministrationChapter 28. 
Booking Module Administration Adapted with permission from original material by the Evergreen Community AbstractThe Evergreen booking module is included in Evergreen 1.6.1.x and above.The following @@ -11424,44 +11448,44 @@ The following items are 90 days overdue and have been marked LOST. Make a Cataloged Item Bookable in AdvanceMake a Cataloged Item Bookable in Advance - + If their permission settings allow, staff members can make items bookable. Staff members can do this in advance of a booking request, or they can do it on the fly. If you know in advance of the request that an item will need to be booked, you can make the item bookable. - 1. + 1. In the staff client, select Search → Search the Catalog - 2. + 2. Begin a title search to find an item. - 3. + 3. Click the title of the item that you want to book. - 4. + 4. The Record Summary will appear. In this view you can see information about the item and its locations. Click Actions for this Record → Holdings Maintenance in the top right corner of the screen. - 5. + 5. The Holdings Maintenance screen will appear. In this screen, you can view the volumes and copies of an item available at each branch. To view the barcodes and other information for each copy, click the arrow adjacent to the branch with the copy that you need to view. Click on successive arrows until you find the copy that you need to view. - 6. + 6. Select the item that you want to make bookable. Right click to open the menu, and click Make Item Bookable. - 7. + 7. The item has now been added to the list of resources that are bookable. To book the item, return to the Record Summary, and proceed with booking. - + In Evergreen 1.6.1, there is no way to make an item “unbookable” after it has been made bookable and has been reserved. The Delete Selected button on this screen deletes the resource from the screen, but the item will be able to be @@ -11474,14 +11498,14 @@ The following items are 90 days overdue and have been marked LOST. 
If a patron wants to book an item immediately that does not have bookable status, you can book the item on the fly if you have the appropriate permissions. - 1. + 1. Follow steps one through five in the section called “Make a Cataloged Item Bookable in Advance”. - 2. + 2. Select the item that you want to make bookable. Right click to open the menu, and click Book Item Now. - 3. + 3. A Reservations screen will appear in a new tab, and you can make the reservation. @@ -11490,43 +11514,43 @@ The following items are 90 days overdue and have been marked LOST. Create a Bookable Status for Non-Bibliographic ItemsCreate a Bookable Status for Non-Bibliographic Items - + Staff with the required permissions can create a bookable status for non-bibliographic items. For example, staff can book conference rooms or laptops. You will be able to create types of resources, specify the names of individual resources within each type, and set attributes to describe those resources. You can then bring the values together through the Resource Attribute Map. - 1. + 1. First, create the type of resource that you want to make bookable. Select Admin → Server Administration → Booking → Resource Types. - 2. + 2. A list of resource types will appear. You may also see titles of cataloged items on this screen if they were added using the Make Item Bookable or Book Now links. You should not attempt to add cataloged items on this screen; it is best to use the aforementioned links to make those items bookable. In this screen, you will create a type of resource. - 3. + 3. In the right corner, click New Resource Type. - 4. + 4. A box will appear in which you will create a type of resource. In this box, you can set fines, determine “elbow room” periods between reservations on this type of resource, and indicate if this type of resource can be transferred to another library. Click Save when you have entered the needed information. - 5. + 5. After you click Save, the box will disappear. 
Refresh the screen to see the item that you have added. - 6. + 6. Next, set the attributes for the type of resource that you have created. Select Server Administration → Booking → Resource Attributes. - 7. + 7. Click New Resource Attribute. - 8. + 8. A box will appear in which you can add the attributes of the resource. Attributes are descriptive information that is provided to the staff member when the booking @@ -11534,51 +11558,51 @@ The following items are 90 days overdue and have been marked LOST. for its transportation. Other attributes might be number of seats available in a room, or MAC or PC attributes for a laptop. Click Save when the necessary information has been entered. - 9. + 9. The box will disappear. Refresh the screen to see the added attribute. - 10. + 10. Next, add the values for the resource attributes. A value can be a number, yes/no, or any other meaningful information. Select Server Administration → Booking → Resource Attribute Values. - 11. + 11. Select New Resource Attribute Value. - 12. + 12. A pop up box will appear. Select the Resource Attribute from the drop down box. Add the value. You can add multiple values for this field. Click Save when the required information has been added. - 13. + 13. If you refresh the screen, the attribute value may not appear, but it has been saved. - 14. + 14. Next, identify the specific objects that are associated with this resource type. Click Admin → Server Administration → Booking → Resources. - 15. + 15. Click New Resource. - 16. + 16. A pop-up box will appear. Add information for the resource and click Save. Repeat this process for each resource. - 17. + 17. Refresh the screen, and the resource(s) that you added will appear. - 18. + 18. Finally, use Resource Attribute Maps to bring together the resource and its attributes. Select Admin → Server Administration → Booking → Resource Attribute Maps. - 19. + 19. Select New Resource Attribute Map - 20. + 20. 
Select the resource that you want to match with its attributes, then click Save. Repeat for all applicable resources. - 21. + 21. You have now created bookable, non-bibliographic resource(s) with attributes. @@ -11586,7 +11610,7 @@ The following items are 90 days overdue and have been marked LOST. Setting Booking PermissionsSetting Booking Permissions - + Administrators can set permissions so that staff members can view reservations, make reservations, and make bibliographic or non-bibliographic items bookable. @@ -11601,17 +11625,17 @@ The following items are 90 days overdue and have been marked LOST. already included in the Staff permission group. All other booking permissions must be applied individually. - • + • View Reservations: VIEW_TRANSACTION* - • + • Use the pull list: RETRIEVE_RESERVATION_PULL_LIST - • + • Capture reservations: CAPTURE_RESERVATION - • + • Assist patrons with pickup and return: VIEW_USER* - • + • Create/update/delete reservations: ADMIN_BOOKING_RESERVATION @@ -11620,19 +11644,19 @@ The following items are 90 days overdue and have been marked LOST. bookable, booking items on the fly, and creating non-bibliographic resources for booking. - • + • Create/update/delete booking resource type: ADMIN_BOOKING_RESOURCE_TYPE - • + • Create/update/delete booking resource attributes: ADMIN_BOOKING_RESOURCE_ATTR - • + • Create/update/delete booking resource attribute values: ADMIN_BOOKING_RESOURCE_ATTR_VALUE - • + • Create/update/delete booking resource: ADMIN_BOOKING_RESOURCE - • + • Create/update/delete booking resource attribute maps: ADMIN_BOOKING_RESOURCE_ATTR_MAP @@ -11665,10 +11689,10 @@ The following items are 90 days overdue and have been marked LOST. 
Starting the Reporter DaemonStarting the Reporter Daemon To start the reporter daemon, run the following command as the opensrf user: - clark-kent.pl --daemon + clark-kent.pl --daemon You can also specify other options: - •sleep=interval : number of seconds to sleep between checks for new reports to run; defaults to 10•lockfile=filename : where to place the lockfile for the process; defaults to /tmp/reporter-LOCK•concurrency=integer : number of reporter daemon processes to run; defaults to 1•bootstrap=filename : OpenSRF bootstrap configuration file; defaults to /openils/conf/opensrf_core.xml - + •sleep=interval : number of seconds to sleep between checks for new reports to run; defaults to 10•lockfile=filename : where to place the lockfile for the process; defaults to /tmp/reporter-LOCK•concurrency=integer : number of reporter daemon processes to run; defaults to 1•bootstrap=filename : OpenSRF bootstrap configuration file; defaults to /openils/conf/opensrf_core.xml + The open-ils.reporter process must be running and enabled on the gateway before the reporter daemon can be started. Remember that if the server is restarted, the reporter daemon will need to be restarted before you can view reports unless you have configured your server to start the daemon automatically at startup time. @@ -11677,8 +11701,8 @@ The following items are 90 days overdue and have been marked LOST. To stop the reporter daemon, you have to kill the process and remove the lockfile. Assuming you're running just a single process and that the lockfile is in the default location, perform the following commands as the opensrf user: - kill `ps wax | grep "Clark Kent" | grep -v grep | cut -b1-6` - rm /tmp/reporter-LOCK + kill `ps wax | grep "Clark Kent" | grep -v grep | cut -b1-6` + rm /tmp/reporter-LOCK Chapter 30. FoldersChapter 30. Folders 
There are three main components to reports: Templates, Reports, and Output. Each of these components must be stored in a folder. Folders can be private (accessible to your login only) or shared with other staff at your library, other libraries in your system or consortium. It is also possible to selectively share only certain folders and/or subfolders. - + There are two parts to the folders pane. The My Folders section contains folders created with your Evergreen account. Folders that other users have shared with you appear in the Shared Folders section under the username of the sharing account. - + @@ -11711,41 +11735,41 @@ The following items are 90 days overdue and have been marked LOST. possible to do it before each step. This example demonstrates creating a folder for a template. - 1. + 1. Click on Templates in the My Folders section. - 2. + 2. Name the folder. Select Share or Do not share from the dropdown menu. - 3. + 3. If you want to share your folder, select who you want to share this folder with from the dropdown menu. - 4. + 4. Click Create Sub Folder. - 5. + 5. Click OK. - 6. + 6. Next, create a folder for the report definition to be saved to. Click on Reports. - 7. + 7. Repeat steps 2-5 to create a Reports folder also called Circulation. - 8. + 8. Finally, you need to create a folder for the report’s output to be saved in. Click on Output. - 9. + 9. Repeat steps 2-5 to create an Output folder named Circulation. - + Using a parallel naming scheme for folders in Templates, Reports, and Output helps keep your reports organized and easier to find The folders you just created will now be visible by clicking the arrows in @@ -11759,23 +11783,23 @@ The following items are 90 days overdue and have been marked LOST. Once a folder has been created you can change the name, delete it, create a new subfolder, or change the sharing settings. This example demonstrates changing a folder name; the other choices follow similar steps - 1. + 1. 
Click on the folder that you wish to rename. - 2. + 2. Click Manage Folder. - 3. + 3. Select Change folder name from the dropdown menu and click Go. - 4. + 4. Enter the new name and click Submit - 5. + 5. Click OK. - 6. + 6. You will get a confirmation box that the Action Succeeded. Click OK. @@ -11821,29 +11845,29 @@ The following items are 90 days overdue and have been marked LOST. - 1. + 1. - + Click on the My Folder template folder where you want the template to be saved. - + - 2. + 2. - + Click on Create a new Template for this folder. - + - 3. + 3. - + You can now see the template creating interface. The upper half of the screen is the Database Source Browser. The top left hand pane contains the database Sources drop-down list. @@ -11853,49 +11877,49 @@ The following items are 90 days overdue and have been marked LOST. reports), and Item (for reports on a library's holdings). - + The Enable source nullability checkbox below the sources list is for advanced reporting and should be left unchecked by default. - 4. + 4. - + Select Circulation in the Sources dropdown menu. Note that the Core Sources for reporting are listed first, however it is possible to access all available sources at the bottom of this dropdown menu. You may only specify one source per template. - + - 5. + 5. - + Click on Circulation to retrieve all the field names in the Field Name pane. Note that the Source Specifier (above the middle and right panes) shows the path that you took to get to the specific field. - + - 6. + 6. - + Select Circ ID in the middle Field Name pane, and Count Distinct from the right Field Transform pane. The Field Transform pane is where you choose how to manipulate the data from the selected fields. You are counting the number of circulations. - + Field Transforms have either an @@ -11904,40 +11928,40 @@ The following items are 90 days overdue and have been marked LOST. Count, Count Distinct, and other transform options. - 7. + 7. 
- + Click Add Selected Fields underneath the Field Transform pane to add this field to your report output. Note that Circ ID now shows up in the bottom left hand pane under the Displayed Fields tab. - + 8. - + Circ ID will be the column header in the report output. You can rename default display names to something more meaningful. To do so in this example, select the Circ ID row and click Alter Display Header. - + Double-clicking on the displayed field name is a shortcut to altering the display header. - 9. + 9. - + Type in the new column header name, for example Circ count and click OK. - + @@ -11947,21 +11971,21 @@ The following items are 90 days overdue and have been marked LOST. Add other data to your report by going back to the Sources pane and selecting the desired fields. In this example, we are going to add Circulating Item → Shelving Location to further refine the circulation report. - + In the top left hand Sources pane, expand Circulation. Depending on your computer you will either click on the + sign or on an arrow to expand the tree. - + - 11. + 11. - + Click on the + or arrow to expand Circulating Item. Select Shelving Location. - + When you are creating a template take the shortest path to the field you need @@ -11969,109 +11993,109 @@ The following items are 90 days overdue and have been marked LOST. find the same field name further in the file structure, but the shortest path is the most efficient. - 12. + 12. - + In the Field Name pane select Name. - + - 13. + 13. - + In the upper right Field Transform pane, select Raw Data and click Add Selected Fields. Use Raw Data when you do not wish to transform field data in any manner. - + 14. - + Name will appear in the bottom left pane. Select the Name row and click Alter Display Header. - + - 15. + 15. - + Enter a new, more descriptive column header, for example, Shelving location. Click OK. - + - 16. + 16. 
- + Note that the order of rows (top to bottom) will correspond to the order of columns (left to right) on the final report. Select Shelving location and click on Move Up to move Shelving location before Circ count. - + - 17. + 17. - + Return to the Sources pane to add more fields to your template. Under Sources click Circulation, then select Check Out Date/Time from the middle Field Name pane. - + - 18. + 18. - + Select Year + Month in the right hand Field Transform pane and click Add Selected Fields - + - 19. + 19. - + Check Out Date/Time will appear in the Displayed Fields pane. In the report it will appear as a year and month (YYYY-MM) corresponding to the selected tranform. - + 20. - + Select the Check Out Date/Time row. Click Alter Display Header and change the column header to Check out month. - + - 21. + 21. - + Move Check out month to the top of the list using the Move Up button, so that it will be the first column in an MS Excel spreadsheet or in a chart. Report output will sort by the first column. - + - + - + Note the Change Transform button in the bottom left hand pane. It has the same function as the upper right Field @@ -12079,7 +12103,7 @@ The following items are 90 days overdue and have been marked LOST. - + @@ -12122,41 +12146,41 @@ The following items are 90 days overdue and have been marked LOST. one library’s circulations for a specified time frame. The time frame in the template will be configured so that you can change it each time you run the report. - Using Base Filters1. + Using Base Filters1. Select the Base Filters tab in the bottom Template Configuration pane. - 2. + 2. - + For this circulation statistics example, select Circulation → Check Out Date/Time → Year + Month and click on Add Selected Fields. You are going to filter on the time period. - + - 3. + 3. - + Select Check Out Date/Time. Click on Change Operator and select In list from the dropdown menu. - + 4. 
- + To filter on the location of the circulation select Circulation → Circulating library → Raw Data and click on Add Selected Fields. - + - 5. + 5. - + Select Circulating Library and click on Change Operator and select Equals. Note that this is a template, so the value for Equals will be filled out when you run the report - + For multi-branch libraries, you would select Circulating @@ -12168,46 +12192,46 @@ The following items are 90 days overdue and have been marked LOST. so that deleted records don’t show up in the results. You might want to use deleted=true, for a template for a report on deleted items in the last month. - 6. + 6. - + Once you have configured your template, you must name and save it. Name this template Circulations by month for one library. You can also add a description. In this example, the title is descriptive enough, so a description is not necessary. Click Save. - + - 7. + 7. - + Click OK. - + - 8. + 8. - + You will get a confirmation dialogue box that the template was successfully saved. Click OK. - + After saving it is not possible to edit a template. To make changes you will need to clone it and edit the clone - + - + The bottom right hand pane is also a source specifier. By selecting one of these rows you will limit the fields that are visible to the sources you have specified. This may be helpful when reviewing templates with many fields. Use Ctrl+Click to select or deselect items. - + @@ -12228,33 +12252,33 @@ The following items are 90 days overdue and have been marked LOST. Now you are ready to run the report from the template you have created. - 1. + 1. - + In the My Folders section click the arrow next to Templates to expand this folder and select circulation. - + - 2. + 2. - Select the box beside Circulations by month for one + Select the box beside Circulations by month for one library. Select Create a new report from selected template from the dropdown menu. Click Submit. - - 3. + + 3. 
- + Complete the first part of report settings. Only Report Name and Choose a folder... are required fields. - + @@ -12295,15 +12319,15 @@ The following items are 90 days overdue and have been marked LOST. definition. Only report folders under My Folders are available. Click on the desired folder to select it. - 4. + 4. - + Select values for the Circulation > Check Out Date/Time. Use the calendar widget or manually enter the desired dates, then click Add to include the date on the list. You may add multiple dates. - + The Transform for this field is Year + @@ -12316,17 +12340,17 @@ The following items are 90 days overdue and have been marked LOST. also show comparative data from one year ago, select a relative date 1 month ago, and 13 months ago. - 5. + 5. Select a value for the Circulating Library. - 6. + 6. - + Complete the bottom portion of the report definition interface, then click Save. - + @@ -12360,14 +12384,14 @@ The following items are 90 days overdue and have been marked LOST. Select a folder for the report's output. - 7. + 7. - + You will get a confirmation dialogue box that the Action Succeeded. Click OK. - + @@ -12397,21 +12421,21 @@ The following items are 90 days overdue and have been marked LOST. defined in the report. Output is also stored in the specified Output folder and will remain there until manually deleted. - 1. + 1. To view report output in the staff client, open the reports interface from Admin (-) → Local Administration → Reports - 2. + 2. - + Click on Output to expand the folder. Select Circulation (where you just saved the circulation report output). - + - 3. + 3. View report output is the default selection in the dropdown menu. Select Recurring Monthly Circ by Location by @@ -12420,28 +12444,28 @@ The following items are 90 days overdue and have been marked LOST. - + - 4. + 4. A new tab will open for the report output. Select either Tabular Output or Excel Output. 
If Bar Charts was selected during report definition the chart will also appear. - 5. + 5. Tabular output looks like this: - - 6. + + 6. - + If you want to manipulate, filter or graph this data, Excel output would be more useful. Excel output looks like this in Excel: - + @@ -12465,53 +12489,53 @@ The following items are 90 days overdue and have been marked LOST. The steps below assume you have already created at least one Templates folder.  If you haven’t done this, please see Chapter 30, Folders. - 1. + 1. Access the reports interface from the Admin (-) menu under Local Administration → Reports - 2. + 2. Under Shared Folders expand the Templates folder and the subfolder of the report you wish to clone.  To expand the folders click on the grey arrow or folder icon.  Do not click on the blue underlined hyperlink. - 3. + 3. Click on the subfolder. -4. +4. - + Select the template you wish to clone.  From the dropdown menu choose Clone selected templates, then click Submit.   - + By default Evergreen only displays the first 10 items in any folder. To view all content, change the Limit output setting from 10 to All. - 5. + 5. - + Choose the folder where you want to save the cloned template, then click Select Folder. Only template folders created with your account will be visible. If there are no folders to choose from please see Chapter 30, Folders. - + - 6. + 6. - + The cloned template opens in the template editor. From here you may modify the template by adding, removing, or editing fields and filters as described in Chapter 31, Creating Templates. Template Name and Description can also be edited. When satisfied with your changes click Save. - + - 7. + 7. Click OK in the resulting confirmation windows. @@ -12536,29 +12560,29 @@ The following items are 90 days overdue and have been marked LOST. the output through the reporter interface as described in Chapter 33, Viewing Report Output. 
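A monthly recurring report is typically scheduled to run early on the first day of the next month, entered in YYYY-MM-DD format. As a convenience, that date can be computed on the server command line; this is a minimal sketch, assuming GNU date (standard on the Debian/Ubuntu systems Evergreen usually runs on):

```shell
# Compute the first day of next month in the YYYY-MM-DD format
# expected when scheduling a recurring report (GNU date assumed).
first_of_next_month=$(date -d "$(date +%Y-%m-01) +1 month" +%Y-%m-%d)
echo "$first_of_next_month"
```

Anchoring the calculation to the first of the current month avoids end-of-month surprises such as adding one month to January 31.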
To set up a monthly recurring report follow the procedure in Generating Reports from Templates but make the changes described below. - 1. + 1. Select the Recurring Report check-box and set the recurrence interval to 1 month. - 2. + 2. Do not select Run ASAP. Instead schedule the report to run early on the first day of the next month. Enter the date in YYYY-MM-DD format. - 3. + 3. Ensure there is an email address to receive completion emails. You will receive an email completion notice each month when the output is ready. - 4. + 4. Select a folder for the report’s output. - 5. + 5. Click Save Report. - 6. + 6. You will get a confirmation dialogue box that the Action Succeeded. Click OK. @@ -12578,7 +12602,7 @@ The following items are 90 days overdue and have been marked LOST. procedure to set up a new recurring report with the desired changes. Please note that deleting a report also deletes all output associated with it. - + Once you have been on Evergreen for a year, you could set up your recurring monthly reports to show comparative data from one year ago. To do this select relative dates of 1 month ago and 13 months ago. @@ -12593,13 +12617,13 @@ The following items are 90 days overdue and have been marked LOST. Data TypesData Types - The central column of the Database Source Browser lists + The central column of the Database Source Browser lists Field Name and Data Type for the selected database table. - + Each data type has its own characteristics and uses: - Data TypeDescriptionNotesidUnique number assigned by the database to identify a + Data TypeDescriptionNotesidUnique number assigned by the database to identify a recordA number that is a meaningful reference for the database but not of much use to a human user. Use in displayed fields when counting records or in filters.textText fieldUsually uses the Raw Data transform.timestampExact date and timeSelect appropriate date/time transform. 
Raw Data includes second and timezone information, usually more than is required for a report.boolTrue or FalseCommonly used to filter out deleted item or patron records.org_unitA number representing a library, library system, or @@ -12614,9 +12638,9 @@ The following items are 90 days overdue and have been marked LOST. Field TransformsField Transforms - A Field Transform tells the reporter how to process a field + A Field Transform tells the reporter how to process a field for output. Different data types have different transform options. - + Raw Data.  To display a field exactly as it appears in @@ -12640,12 +12664,12 @@ The following items are 90 days overdue and have been marked LOST. counted. To identify the number of active patrons in this example the Count Distinct transform should be used. - Title Patron ID Patron Name Harry Potter and the Chamber of Secrets 001 John Doe Northern Lights 001 John Doe Harry Potter and the Philosopher’s Stone 222 Jane Doe - + Title Patron ID Patron Name Harry Potter and the Chamber of Secrets 001 John Doe Northern Lights 001 John Doe Harry Potter and the Philosopher’s Stone 222 Jane Doe + Output Type.  Note that each transform has either an Aggregate or Non-Aggregate output type. - + Selecting a Non-Aggregate output type will return one row of output in your report for each row in the database. Selecting an @@ -12689,7 +12713,7 @@ The following items are 90 days overdue and have been marked LOST. Evergreen Directory StructureEvergreen Directory Structure This is the top level directory structure of Evergreen located in the default installation directory /openils: - Table 37.1. Evergreen Directory StructureDirectoryDescriptionbinContains many critical Perl and shell scripts such as autogen.sh and + Table 37.1. Evergreen Directory StructureDirectoryDescriptionbinContains many critical Perl and shell scripts such as autogen.sh and oils.ctl. 
confContains the configuration scripts including the two most important base configuration files opensrf_core.xml and opensrf.xml.includeContains the header files used by the scripts written in C.libContains the core code of Evergreen including the C code and perl modules. The perl modules in the subdirectory perl5/OpenILS are of particular interest to developers. varLargest directory and includes the web directories (web), lock and PID files @@ -12699,12 +12723,12 @@ The following items are 90 days overdue and have been marked LOST. Evergreen Configuration FilesEvergreen Configuration Files - Table 37.2. Key Evergreen Configuration FilesFileDescription/openils/conf/opensrf_core.xmlFile which controls which Evergreen services are run on the public and private routers. For a service to run, it must be registered in this file. This file also controls the loglevel and points to the log file for the services. 
An Evergreen restart is required for changes to take effect./openils/conf/opensrf.xmlUse this file to set directory locations, the default locale, default notice settings and settings for all Evergreen services. + It is critical for any administrator to understand the settings in this file. An Evergreen restart is required for changes to take effect./openils/conf/fm_IDL.xml Used for linking the OpenSRF/Evergreen services to the Evergreen database tables. An Evergreen restart is required for changes to take + effect. Running autogen.sh is also required./etc/apache2/eg_vhost.confControls the Evergreen virtual site. Allows you to configure the skin for the OPAC or configure various directories within the Apache web server. An Apache restart is required for changes to this file to take effect. - Table 37.3. Useful Evergreen ScriptsFileDescription/openils/bin/autogen.shUsed to update changes to org units and the fm_IDL.xml file. Will generate web and staff client pages based on contents of files and - Evergreen database entries./openils/bin/clark-kent.plPerl script for starting the reporter./openils/bin/action_trigger_runner.plPerl script used to trigger the actions set up in the action trigger tool in the staff client./openils/bin/osrf_ctl.shThe startup script for OpenSRF and Evergreen. /openils/bin/reshelving_complete.srfshChange status from “reshelving” to “available” for items which have been in reshelving for a certain amount of time./openils/bin/srfshUsed to start the OpenSRF shell. + Table 37.3. Useful Evergreen ScriptsFileDescription/openils/bin/autogen.shUsed to update changes to org units and the fm_IDL.xml file. 
Will generate web and staff client pages based on contents of files and + Evergreen database entries./openils/bin/clark-kent.plPerl script for starting the reporter./openils/bin/action_trigger_runner.plPerl script used to trigger the actions set up in the action trigger tool in the staff client./openils/bin/osrf_ctl.shThe startup script for OpenSRF and Evergreen. /openils/bin/reshelving_complete.srfshChange status from “reshelving” to “available” for items which have been in reshelving for a certain amount of time./openils/bin/srfshUsed to start the OpenSRF shell. @@ -12714,7 +12738,7 @@ The following items are 90 days overdue and have been marked LOST. Report any errors in this documentation using Launchpad. Chapter 38. Customizing the Staff ClientChapter 38. Customizing the Staff Client - + This chapter will give you some guidance on customizing the staff client. The files related to the staff client are located in the directory /openils/var/web/xul/[staff client version]/server/ Changing Colors and ImagesChanging Colors and Images @@ -12726,20 +12750,20 @@ The following items are 90 days overdue and have been marked LOST. Changing Labels and MessagesChanging Labels and Messages - + You can customize labels in the staff client by editing the corresponding DTD files. The staff client uses the same lang.dtd used by the OPAC. This file is located in /openils/var/web/opac/locale/[your locale]. Other labels are controlled by the staff-client-specific lang.dtd file in /openils/var/web/xul/[staff client version]/server/locale/[your locale]/. Changing the Search SkinChanging the Search Skin There are a few ways to change the custom skin for OPAC searching in the staff client. 
- Changing the Search Skin on Server - Overriding Local SettingsChanging the Search Skin on Server - Overriding Local Settings + Changing the Search Skin on Server - Overriding Local SettingsChanging the Search Skin on Server - Overriding Local Settings To change the opac search skins used by the staff client, create a file named custom.js and place it in the /openils/var/web/xul/[staff client version]/server/skin/ directory. This will affect all staff clients since these settings will override local settings. For example, the following text in custom.js would set the staff client opac, details page, results page and browse function to the craftsman skin: - + urls['opac'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1'; urls['opac_rdetail'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml'; urls['opac_rresult'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml'; @@ -12747,13 +12771,13 @@ urls['browser'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1'; Restart the staff client to see the changes. - Changing the Search Skin on an individual MachineChanging the Search Skin on an individual Machine + Changing the Search Skin on an individual MachineChanging the Search Skin on an individual Machine To change the search skin on an individual machine for personal preferences or needs, edit the file /[Evergreen staff client path]/build/chrome/content/main/constants.js. Find the lines which point to the urls for the OPAC and edit accordingly. 
For example, here is an example to set the opac, details page, results page and browse function to the craftsman skin: - + 'opac' : '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1', 'opac_rdetail' : '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml', 'opac_rresult' : '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml', @@ -12773,50 +12797,50 @@ urls['browser'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1'; While Evergreen is ready to go out of the box, libraries will want to customize Evergreen with their own color scheme, logos and layout. This chapter will explain how to customize Evergreen to meet the needs of your users. For these task some knowledge of html and css is required. Many of these instructions assume an installation of Evergreen using the default file locations. - + Be sure to save a backup copy of all files you edit in a location other than /openils/var/web/opac/ as files here could be overwritten when you upgrade your copy of Evergreen. Change the Color SchemeChange the Color Scheme - + To change the color scheme of the default Evergreen skin, edit /openils/var/web/opac/theme/default/css/colors.css. From this one file you can change the 4 base color scheme as well as colors of specific elements. You can also create alternate themes for your users. - 1. + 1. Copy the css folder and its contents from the example alternate theme /openils/var/web/opac/theme/reddish/ to a new folder /openils/var/web/opac/theme/[your new theme]/. - 2. + 2. Edit /openils/var/web/opac/theme/[your new theme]/css/colors.css to use the colors you want. - 3. + 3. Link to your new style sheet by adding the following to /openils/var/web/opac/skin/default/xml/common/css_common.xml. - + <link type='text/css' rel="alternate stylesheet" title='&opac.style.yourtheme;' href="<!--#echo var='OILS_THEME_BASE'-->/yourtheme/css/colors.css" name='Default' csstype='color'/> - 4. + 4. 
Give your new theme a name users can select by adding the following to /openils/var/web/opac/locale/ [your locale]/opac.dtd. - <!ENTITY opac.style.yourtheme "YourTheme"> + <!ENTITY opac.style.yourtheme "YourTheme"> customizing Opac Text and Labelscustomizing Opac Text and Labels - + To change text and links used throughout the OPAC, edit the following files: - •/openils/var/web/opac/locale/[your locale]/lang.dtd•/openils/var/web/opac/locale/[your locale]/opac.dtd - + •/openils/var/web/opac/locale/[your locale]/lang.dtd•/openils/var/web/opac/locale/[your locale]/opac.dtd + A better way to customize OPAC text is to create custom dtd files for your lang and opac customizations and then add a include statement above the default dtd files. - + <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [ @@ -12828,46 +12852,46 @@ name='Default' csstype='color'/> While it is possible to add text to the xml files itself, it is a good practice to use the DTD file to control the text and refer to the DTD elements in the xml/html code. For example, the footer.xml file has this code to generate a copyright statement: - + <div id='copyright_text'> <span>&footer.copyright;</span> The included opac.dtd file in the en-US locale directory has this setting for &footer.copyright text: - <!ENTITY footer.copyright "Copyright © 2006-2010 Georgia Public Library Service, and others"> + <!ENTITY footer.copyright "Copyright © 2006-2010 Georgia Public Library Service, and others"> Logo ImagesLogo Images To change the logos used by default to your own logos, replace the following files with images of your own, appropriately sized. 
- •Large main logo:/openils/var/web/opac/images/main_logo.jpg•Small logo:/openils/var/web/opac/images/small_logo.jpg + •Large main logo:/openils/var/web/opac/images/main_logo.jpg•Small logo:/openils/var/web/opac/images/small_logo.jpg Added ContentAdded Content - + By default Evergreen includes customizable “Added Content” features to enhance the OPAC experience for your user. These features include Amazon book covers and Google books searching. These features can be turned off or customized. Book CoversBook Covers The default install of Evergreen includes Amazon book covers. The settings for this are controlled by the <added_content> section of /openils/conf/opensrf.xml. Here are the key elements of this configuration: - <module>OpenILS::WWW::AddedContent::Amazon</module> + <module>OpenILS::WWW::AddedContent::Amazon</module> This calls the Amazon perl module. If you wish to link to a different book cover service other than Amazon, you must create a new perl module and refer to it here. You will also need to change other settings accordingly. There are some available book cover perl modules available in trunk - <base_url>http://images.amazon.com/images/P/</base_url> + <base_url>http://images.amazon.com/images/P/</base_url> Base URL for Amazon added content fetching. This URL may need to be shortened when new (read: non-image) content fetching capabilities are added. - <timeout>1</timeout> + <timeout>1</timeout> Max number of seconds to wait for an added content request to return data. Data not returned within the timeout is considered a failure. - <retry_timeout>600</retry_timeout> + <retry_timeout>600</retry_timeout> After added content lookups have been disabled due to too many lookup failures, this is the amount of time to wait before we try again. - <max_errors>15</max_errors> + <max_errors>15</max_errors> Maximum number of consecutive lookup errors a given process can live before added content lookups are disabled for everyone. 
- <userid>MY_USER_ID</userid> + <userid>MY_USER_ID</userid> If a userid is required to access the added content. Google Books LinkGoogle Books Link - + The results page will display a Browse in Google Books Search link for items in the results page which have corresponding entries in Google Books. This will link to Google Books content including table of contents and complete versions of the work if it exists in Google Books. Items not in Google Books will not @@ -12875,10 +12899,10 @@ name='Default' csstype='color'/> /openils/var/web/opac/skin/default/js/result_common.js. By default, this feature is activated. SyndeticsSyndetics - + Syndetics is another option for added content. Here is an example of using Syndetics as your added content provider: - + <!-- We're using Syndetics --> <module>OpenILS::WWW::AddedContent::Syndetic</module> @@ -12910,7 +12934,7 @@ name='Default' csstype='color'/> </added_content> - Syndetics is a fee-based service. For details, visit: http://www.bowker.com/syndetics/ + Syndetics is a fee-based service. For details, visit: http://www.bowker.com/syndetics/ Customizing the Results PageCustomizing the Results Page - + The results page is extremely customizable: some built-in features can be activated with simple edits, while more advanced customizations can be done by experienced web developers. 
There are several critical files to edit if you wish to customize the results page: - •/openils/var/web/opac/skin/default/js/result_common.js - This file controls the JavaScript for the top-level elements on the results - page and should only be edited by experienced web developers except for the Google books link setting mentioned previously.•/openils/var/web/opac/skin/default/js/rresult.js - Has some good controls of results page settings at the top of this file but - requires web development skills for editing this file.•/openils/var/web/opac/skin/default/xml/result/rresult_table.xml - This controls the layout of the items table on the results page. + •/openils/var/web/opac/skin/default/js/result_common.js - This file controls the JavaScript for the top-level elements on the results + page and should only be edited by experienced web developers except for the Google books link setting mentioned previously.•/openils/var/web/opac/skin/default/js/rresult.js - Has some good controls of results page settings at the top of this file but + requires web development skills for editing this file.•/openils/var/web/opac/skin/default/xml/result/rresult_table.xml - This controls the layout of the items table on the results page. Customizing the Details PageCustomizing the Details Page - + There are many options when customizing the details page in Evergreen. The default settings are effective for most libraries, but it is important to understand the full potential of Evergreen when displaying the details of items. Some quick features can be turned on and off by changing variable values in the file /openils/var/web/opac/skin/default/js/rdetail.js. @@ -12938,47 +12962,47 @@ name='Default' csstype='color'/> The /openils/var/web/opac/skin/default/xml/rdetail/rdetail_copyinfo.xml file allows you to format the display of the copy information. 
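Before editing any of these customization files, remember the advice from earlier in this chapter: keep backup copies outside /openils/var/web/opac/, since files there can be overwritten when you upgrade Evergreen. The helper below is a minimal sketch of that practice; the function name and default backup location are illustrative, not part of Evergreen:

```shell
# Hypothetical helper (not part of Evergreen): save a timestamped copy of a
# file outside the web tree before editing it.
backup_before_edit() {
    src="$1"                          # file you are about to edit
    dest="${2:-$HOME/eg-backups}"     # backup root outside /openils/var/web/opac/
    stamp=$(date +%Y%m%d-%H%M%S)
    mkdir -p "$dest" &&
    cp -p "$src" "$dest/$(basename "$src").$stamp"
}

# Example: backup_before_edit /openils/var/web/opac/skin/default/js/rdetail.js
```

Restoring after a bad edit or an upgrade is then just a matter of copying the saved file back into place.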
BibTemplateBibTemplate - + BibTemplate is an Evergreen-custom Dojo module which can be used to retrieve and format XML data served by the Evergreen unAPI service. unAPI is a protocol for requesting known objects in specific formats, and Evergreen uses this to supply data – bibliographic records, metarecords, monograph holdings information, Located URIs, and more to come – in many different formats from MARCXML to MODS to custom XML applications. - + Managing the display of information from raw XML can be difficult, and the purpose of BibTemplate is to make this simpler, as well as move the display closer to the client and away from the source data. This is good from a separation-of-responsibilities perspective, and also makes it easier to contain and control local customization. BibTemplate supports the following Evergreen meta data formats: - •MARCXML - datatype='marcxml-full' (default)•MODS 3.3: datatype='mods33'•Dublin Core: datatype='rdf_dc'•FGDC: datatype='fgdc' - HTML APIHTML API + •MARCXML - datatype='marcxml-full' (default)•MODS 3.3: datatype='mods33'•Dublin Core: datatype='rdf_dc'•FGDC: datatype='fgdc' + HTML APIHTML API BibTemplate follows the Dojo convention of adding attributes to existing (X)HTML in order to progressively change its behavior. The 1.6.0 HTML API consists of a - set of attributes that are added to existing OPAC markup, and fall into two classes: - • The slot marker – Elements that denote the location of bibliographic data to insert.•The slot formatter – Elements that specify how the named data should be formatted for display. + set of attributes that are added to existing OPAC markup, and fall into two classes: + • The slot marker – Elements that denote the location of bibliographic data to insert.•The slot formatter – Elements that specify how the named data should be formatted for display. 
- Slot MarkerSlot Marker - + Slot MarkerSlot Marker + A slot marker is any displayable HTML element that has a type attribute with a value starting with opac/slot-data. This element will become the container for the formatted data. A slot marker is required in order to retrieve, format and display data using BibTemplate. A slot marker must also have an attribute called query containing a CSS3 selector. This selector is applied to the XML returned by the unAPI service in order to gather the specific XML - Nodes that should be considered for formatting. + Nodes that should be considered for formatting. The slot marker can also specify the format of the data to be returned from the unAPI service. This can be specified by adding +{format} to the type attribute, as in opac/slot-data+mods33-full. The default data format is marcxml-uri, which is an augmented MARCXML record containing Located URI information and unAPI links. Example of a slot marker: - <p type='opac/slot-data' query='datafield[tag=245]'></p> + <p type='opac/slot-data' query='datafield[tag=245]'></p> Most useful attribute match operators include: - • datafield[tag=245] - exact match•datafield[tag^=65] - match start of value - Selectors always narrow, so select broadly and iterate through the NodeList + • datafield[tag=245] - exact match•datafield[tag^=65] - match start of value + Selectors always narrow, so select broadly and iterate through the NodeList - Slot FormatterSlot Formatter - + Slot FormatterSlot Formatter + A slot formatter is any invisible HTML element which has a type attribute with the value of opac/slot-format. (NOTE: before 1.6.0.4, only <script> elements were supported, though this restriction is now removed to support Internet Explorer.) Only one slot formatter element is allowed in each slot. 
The text contents - of this element are wrapped in a JavaScript function and run for each node returned by the query CSS3 selector + of this element are wrapped in a JavaScript function and run for each node returned by the query CSS3 selector specified on the slot marker. This function is passed one argument, called item, which an XML Node captured by the selector. This function should return HTML text. The output for all runs of the slot formatter is concatenated into a single string and used to replace the contents of the slot marker. The slot formatter is optional, and if not supplied BibTemplate will create a simple function which extracts and returns the text content of the XML Nodes specified in the CSS3 selector. Example of a slot formatter: - + <td class='rdetail_item' id='rdetail_online' type='opac/slot-data' query='volumes volume uris uri' join=", "> <script type='opac/slot-format'><![CDATA[ @@ -12990,14 +13014,14 @@ name='Default' csstype='color'/> </td> - JavaScript APIJavaScript API - + JavaScript APIJavaScript API + In order for BibTemplate to find the slot markers and invoke the slot formatters JavaScript renderer must be instantiated and called. This must be done for each record that is to contribute to a pages display. The API for this is simple and straight-forward: The slot formatter is optional, and if not supplied BibTemplate will create a simple function which extracts and returns the text content of the XML Nodes - specified in the CSS3 selector. + specified in the CSS3 selector. 
Example of a slot formatter: - + dojo.require('openils.BibTemplate'); // Tell Dojo to load BibTemplate, if it is not already loaded // Create a renderer supplying the record id and the short name of the org unit, if known, @@ -13005,28 +13029,28 @@ name='Default' csstype='color'/> new openils.BibTemplate({ record : new CGI().param('r'), org_unit : here.shortname() }).render(); The argument hash supplied to the new openils.BibTemplate() constructor can have the following properties: - •record – The bibliographic record ID.•org_unit – The relevant Organizational Unit, used to restrict holdings scope as on a search result or record detail page.•root – The root element within the web page that BibTemplate should search for slot markers + •record – The bibliographic record ID.•org_unit – The relevant Organizational Unit, used to restrict holdings scope as on a search result or record detail page.•root – The root element within the web page that BibTemplate should search for slot markers - BibTemplate ExamplesBibTemplate Examples + BibTemplate ExamplesBibTemplate Examples This is all that we had to add to display the contents of an arbitrary MARC field: - + <tr> <td>Bibliography note</td> <td type='opac/slot-data' query='datafield[tag=504]'></td> </tr> - If multiple fields match, they are displayed on consecutive lines within the same left-hand cell. - To display a specific MARC subfield, add that subfield to the query attribute. + If multiple fields match, they are displayed on consecutive lines within the same left-hand cell. + To display a specific MARC subfield, add that subfield to the query attribute. 
For example, subfield $a is the only user-oriented subfield in field 586 (Awards Note) - + <tr> <td>Awards note</td> <td type='opac/slot-data' query='datafield[tag=586] subfield[code=a]'></td> </tr> Hide empty rows by default, and display them only if they have content: - + <tr class='hide_me' id='tag504'> <td>Bibliographic note</td> <td type='opac/slot-data' query='datafield[tag=504]'> @@ -13037,11 +13061,11 @@ name='Default' csstype='color'/> ]]></script> </td></tr> - •<![CDATA[ ... ]]> tells Evergreen Web server to treat the contents as literal “character data” - - avoids hilarity of entity substitution•<script type='opac/slot-format'>...</script>, contained within an “opac/slot-data” element, receives a variable named item + •<![CDATA[ ... ]]> tells Evergreen Web server to treat the contents as literal “character data” - + avoids hilarity of entity substitution•<script type='opac/slot-format'>...</script>, contained within an “opac/slot-data” element, receives a variable named item containing the results of the query (a NodeList) Suppressing a subfield: - + <tr class='hide_me' id='tag700'> <td>Additional authors</td> <td type='opac/slot-data' query='datafield[tag=700]'> @@ -13066,13 +13090,13 @@ name='Default' csstype='color'/> start.html is the basic search display and advanced.html is the display for the advanced search option. By default, the Slimpac files include the same locale dtd as the regular OPAC (opac.dtd). However, the slimpac files do not use the same CSS files as the regular OPAC which means that if you change the OPAC color scheme, you must also edit the Slimpac files. - Customizing the Slimpac Results DisplayCustomizing the Slimpac Results Display + Customizing the Slimpac Results DisplayCustomizing the Slimpac Results Display Two files control the display results for the slimpac. Edit the XSL stylesheet (/openils/var/xsl/ATOM2XHTML.xsl) to edit the elements of the record as pulled from the XML output. 
You may also change the style of the page by editing the CSS stylesheet for the results display (/openils/var/web/opac/extras/os.css). - Customizing the Slimpac Details/Holdings DisplayCustomizing the Slimpac Details/Holdings Display + Customizing the Slimpac Details/Holdings DisplayCustomizing the Slimpac Details/Holdings Display It is also possible to customize the details page when viewing specific items from the results list. To edit the holdings display which contains the details of the specific record linked from the results display, edit the CSS stylesheet for the holdings/details page @@ -13084,7 +13108,7 @@ name='Default' csstype='color'/> It is possible to embed a simple search form into an html page which will allow users to search for materials in your Evergreen catalog. Here is code which can be embedded anywhere in the body of your web page: - + <form action="http://[domain name]/opac/[locale]/skin/default/xml/rresult.xml" method="get"> <div> Quick Catalog Search:<br /> @@ -13120,7 +13144,7 @@ name='Default' csstype='color'/> Introducing OpenSRFIntroducing OpenSRF - + OpenSRF is a message routing network that offers scalability and failover support for individual services and entire servers with minimal development and deployment overhead. You can use OpenSRF to build loosely-coupled applications @@ -13168,29 +13192,29 @@ name='Default' csstype='color'/> language. 
Writing an OpenSRF binding for another language is a relatively small task if that language offers libraries that support the core technologies on which OpenSRF depends: - • + • Extensible Messaging and Presence Protocol (XMPP, sometimes referred to as Jabber) - provides the base messaging infrastructure between OpenSRF clients and services - - • + + • JavaScript Object Notation (JSON) - serializes the content of each XMPP message in a standardized and concise format - • + • memcached - provides the caching service - - • + + • syslog - the standard UNIX logging service - + Unfortunately, the OpenSRF @@ -13202,7 +13226,7 @@ name='Default' csstype='color'/> The recommended option for would-be developers of another language binding is to use the Python implementation as the cleanest basis for a port to another language. - + Writing an OpenSRF ServiceWriting an OpenSRF Service @@ -13221,19 +13245,19 @@ name='Default' csstype='color'/> arguments and results are typical core language data types: strings, numbers, booleans, arrays, and hashes. To implement a new OpenSRF service, perform the following steps: - 1. + 1. Include the base OpenSRF support libraries - 2. + 2. Write the code for each of your OpenSRF methods as separate procedures - 3. + 3. Register each method - 4. + 4. Add the service definition to the OpenSRF configuration files @@ -13241,7 +13265,7 @@ name='Default' csstype='color'/> For example, the following code implements an OpenSRF service. 
The service includes one method named opensrf.simple-text.reverse() that accepts one string as input and returns the reversed version of that string: - + #!/usr/bin/perl package OpenSRF::Application::Demo::SimpleText; @@ -13275,13 +13299,13 @@ __PACKAGE__->register_method( Registering a service with the OpenSRF configuration filesRegistering a service with the OpenSRF configuration files Two files control most of the configuration for OpenSRF: - • + • opensrf.xml contains the configuration for the service itself, as well as a list of which application servers in your OpenSRF cluster should start the service. - • + • opensrf_core.xml (often referred to as the "bootstrap configuration" file) contains the OpenSRF networking information, including the XMPP server @@ -13289,13 +13313,13 @@ __PACKAGE__->register_method( this for a new service if the new service needs to be accessible via the public router. - + Begin by defining the service itself in opensrf.xml. To register the opensrf.simple-text service, add the following section to the <apps> element (corresponding to the XPath /opensrf/default/apps/): - - + + <apps> <opensrf.simple-text> <keepalive>3</keepalive> @@ -13318,17 +13342,17 @@ __PACKAGE__->register_method( <!-- other OpenSRF services registered here... --> </apps> - + The element name is the name that the OpenSRF control scripts use to refer to the service. - + The <keepalive> element specifies the interval (in seconds) between checks to determine if the service is still running. - + The <stateless> element specifies whether OpenSRF clients can call methods from this service without first having to create a connection to a @@ -13336,61 +13360,61 @@ __PACKAGE__->register_method( the client can simply issue a request and the router will forward the request to an available service and the result will be returned directly to the client. - + The <language> element specifies the programming language in which the service is implemented. 
- + The <implementation> element pecifies the name of the library or module in which the service is implemented. - + (C implementations only): The <max_requests> element, as a direct child of the service element name, specifies the maximum number of requests a process serves before it is killed and replaced by a new process. - + (Perl implementations only): The <max_requests> element, as a direct child of the <unix_config> element, specifies the maximum number of requests a process serves before it is killed and replaced by a new process. - + The <unix_log> element specifies the name of the log file for language-specific log messages such as syntax warnings. - + The <unix_sock> element specifies the name of the UNIX socket used for inter-process communications. - + The <unix_pid> element specifies the name of the PID file for the master process for the service. - + The <min_children> element specifies the minimum number of child processes that should be running at any given time. - + The <max_children> element specifies the maximum number of child processes that should be running at any given time. - + The <min_spare_children> element specifies the minimum number of idle child processes that should be available to handle incoming requests. If there are fewer than this number of spare child processes, new processes will be spawned. - + The`<max_spare_children>` element specifies the maximum number of idle child processes that should be available to handle incoming requests. If there @@ -13402,7 +13426,7 @@ __PACKAGE__->register_method( edit the opensrf_core.xml configuration file to add the service to the list of publicly accessible services: Making a service publicly accessible in opensrf_core.xml.  - + <router> <!-- This is the public router. 
On this router, we only register applications which should be accessible to everyone on the opensrf network --> @@ -13415,17 +13439,17 @@ __PACKAGE__->register_method( </router> - + This section of the opensrf_core.xml file is located at XPath /config/opensrf/routers/. - + public.localhost is the canonical public router domain in the OpenSRF installation instructions. - + Each <service> element contained in the <services> element offers their services via the public router as well as the private router. @@ -13436,7 +13460,7 @@ __PACKAGE__->register_method( Calling an OpenSRF methodCalling an OpenSRF method - + OpenSRF clients in any supported language can invoke OpenSRF services in any supported language. So let’s see a few examples of how we can call our fancy new opensrf.simple-text.reverse() method: @@ -13450,7 +13474,7 @@ __PACKAGE__->register_method( The following example calls the opensrf.simple-text.reverse method of the opensrf.simple-text OpenSRF service, passing the string "foobar" as the only method argument: - + $ srfsh srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar" @@ -13475,12 +13499,12 @@ Request Time in seconds: 0.016718 command, srfsh lists all of the methods offered by the service. If you pass a partial method name, srfsh lists all of the methods that match that portion of the method name. - The quality and availability of the descriptive information for each + The quality and availability of the descriptive information for each method depends on the developer to register the method with complete and accurate information. The quality varies across the set of OpenSRF and Evergreen APIs, although some effort is being put towards improving the state of the internal documentation. - + srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse" --> opensrf.simple-text @@ -13513,43 +13537,43 @@ Received Data: { } } - + stream denotes whether the method supports streaming responses or not. 
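Because the introspection record is ordinary structured data, summarizing it takes only a few lines. A sketch using a hand-built record that mirrors the fields described here (the descriptions and values are illustrative, not captured from a live service, and `describe` is a hypothetical helper):

```python
# Hand-built record mirroring the introspection fields described above;
# values are illustrative, not captured from a running service.
method_info = {
    "api_name": "opensrf.simple-text.reverse",
    "package": "OpenSRF::Application::Demo::SimpleText",
    "stream": 0,
    "argc": 1,
    "signature": {
        "desc": "Reverse a string",
        "params": [
            {"name": "text", "desc": "The string to reverse", "type": "string"},
        ],
        "return": {"desc": "The reversed string", "type": "string"},
    },
}

def describe(info):
    """Render a compact signature line from an introspection record."""
    sig = info["signature"]
    params = ", ".join("%s:%s" % (p["name"], p["type"]) for p in sig["params"])
    return "%s(%s) -> %s" % (info["api_name"], params, sig["return"]["type"])
```

Calling `describe(method_info)` yields a one-line summary in the spirit of srfsh's method listing.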
- + package identifies which package or library implements the method. - + api_name identifies the name of the OpenSRF method. - + signature is a hash that describes the parameters for the method. - + params is an array of hashes describing each parameter in the method; each parameter has a description (desc), name (name), and type (type). - + desc is a string that describes the method itself. - + return is a hash that describes the return value for the method; it contains a description of the return value (desc) and the type of the returned value (type). - + method identifies the name of the function or method in the source implementation. - + argc is an integer describing the minimum number of arguments that must be passed to this method. @@ -13560,7 +13584,7 @@ Received Data: { To call an OpenSRF method from Perl, you must connect to the OpenSRF service, issue the request to the method, and then retrieve the results. - + #/usr/bin/perl use strict; use OpenSRF::AppSession; @@ -13599,20 +13623,20 @@ print "Word count: " . $result->{'word_count'} . "\n"; $session->disconnect(); - + The OpenSRF::System->bootstrap_client() method reads the OpenSRF configuration information from the indicated file and creates an XMPP client connection based on that information. - + The OpenSRF::AppSession->create() method accepts one argument - the name of the OpenSRF service to which you want to want to make one or more requests - and returns an object prepared to use the client connection to make those requests. - + The OpenSRF::AppSession->request() method accepts a minimum of one argument - the name of the OpenSRF method to which you want to make a request - @@ -13621,19 +13645,19 @@ $session->disconnect(); opensrf.simple-text.substring method defined by the opensrf.simple-text OpenSRF service. - + The gather() method, called on the result object returned by the request() method, iterates over all of the possible results from the result object and returns a single variable. 
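The relationship between a streaming method and the client-side `gather()` call can be sketched with a plain Python generator (the names `text_split` and `gather` here are illustrative stand-ins, not the OpenSRF client API):

```python
def text_split(text, delimiter=" "):
    """Streaming-style method: yield one token per response, the way a
    streaming OpenSRF method responds once per element."""
    for token in text.split(delimiter):
        yield token

def gather(responses):
    """Drain every streamed response into a single value, analogous to
    calling gather() on the result object instead of looping on recv()."""
    return list(responses)

parts = gather(text_split("Hello, World!"))
```

The generator produces one value at a time; `gather()` hides that iteration and hands back the complete set, just as described above.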
- + This request() call passes two strings to the opensrf.simple-text.split method defined by the opensrf.simple-text OpenSRF service and returns (via gather()) a reference to an array of results. - + The opensrf.simple-text.split() method is a streaming method that returns an array of results with one element per recv() call on the @@ -13641,25 +13665,25 @@ $session->disconnect(); - + While the gather() convenience method returns only the content of the complete set of results for a given request, the recv() method returns an OpenSRF result object with status, statusCode, and content fields as we saw in the HTTP results example. - + This request() call passes an array to the opensrf.simple-text.statistics method defined by the opensrf.simple-text OpenSRF service. - + The result object returns a hash reference via gather(). The hash contains the length and word_count keys we defined in the method. - + The OpenSRF::AppSession->disconnect() method closes the XMPP client connection and cleans up resources associated with the session. @@ -13677,7 +13701,7 @@ $session->disconnect(); Basic text splitting method.  - + sub text_split { my $self = shift; my $conn = shift; @@ -13710,7 +13734,7 @@ __PACKAGE__->register_method( relationships to other objects, the permissions required to create, read, update, or delete objects of that type, and the database table or view on which they are based. - + The Evergreen fieldmapper offers a great deal of convenience for working with complex system objects beyond the basic mapping of classes to database schemas. Although the result is passed over the wire as a JSON object @@ -13723,7 +13747,7 @@ __PACKAGE__->register_method( configuration file and dynamically register OpenSRF methods for creating, reading, updating, and deleting all of the defined classes. Example fieldmapper class definition for "Open User Summary".  
- + <class id="mous" controller="open-ils.cstore open-ils.pcrud" oils_obj:fieldmapper="money::open_user_summary" oils_persist:tablename="money.open_usr_summary" @@ -13747,11 +13771,11 @@ __PACKAGE__->register_method( </class> - + The <class> element defines the class: - • + • The id attribute defines the class hint that identifies the class both elsewhere in the fieldmapper configuration file, such as in the value of the @@ -13759,26 +13783,26 @@ __PACKAGE__->register_method( it is instantiated. For example, an "Open User Summary" JSON object would have the top level property of "__c":"mous". - • + • The controller attribute identifies the services that have direct access to this class. If open-ils.pcrud is not listed, for example, then there is no means to directly access members of this class through a public service. - • + • The oils_obj:fieldmapper attribute defines the name of the Perl fieldmapper class that will be dynamically generated to provide setter and getter methods for instances of the class. - • + • The oils_persist:tablename attribute identifies the schema name and table name of the database table that stores the data that represents the instances of this class. In this case, the schema is money and the table is open_usr_summary. - • + • The reporter:label attribute defines a human-readable name for the class used in the reporting interface to identify the class. These names are defined @@ -13786,16 +13810,16 @@ __PACKAGE__->register_method( so that they can be translated and served in the user’s language of choice. - + The <fields> element lists all of the fields that belong to the object. - • + • The oils_persist:primary attribute identifies the field that acts as the primary key for the object; in this case, the field with the name usr. - • + • The oils_persist:sequence attribute identifies the sequence object (if any) in this database provides values for new instances of this class. 
In @@ -13803,51 +13827,51 @@ __PACKAGE__->register_method( table, so no sequence is used to populate these instances. - + Each <field> element defines a single field with the following attributes: - • + • The name attribute identifies the column name of the field in the underlying database table as well as providing a name for the setter / getter method that can be invoked in the JSON or native version of the object. - • + • The reporter:datatype attribute defines how the reporter should treat the contents of the field for the purposes of querying and display. - • + • The reporter:label attribute can be used to provide a human-readable name for each field; without it, the reporter falls back to the value of the name attribute. - + The <links> element contains a set of zero or more <link> elements, each of which defines a relationship between the class being described and another class. - • + • The field attribute identifies the field named in this class that links to the external class. - • + • The reltype attribute identifies the kind of relationship between the classes; in the case of has_a, each value in the usr field is guaranteed to have a corresponding value in the external class. - • + • The key attribute identifies the name of the field in the external class to which this field links. - • + • The rarely-used map attribute identifies a second class to which the external class links; it enables this field to define a direct @@ -13855,29 +13879,29 @@ __PACKAGE__->register_method( avoid having to retrieve all of the linked members of an intermediate class just to retrieve the instances from the actual desired target class. - • + • The class attribute identifies the external class to which this field links. - + The <permacrud> element defines the permissions that must have been granted to a user to operate on instances of this class. 
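The permission model that `<permacrud>` expresses can be sketched as a lookup table keyed by class hint and action. The rule below is invented for illustration (the `VIEW_USER` permission name, the context field, and the `allowed` helper are all hypothetical; the real definitions live in the IDL):

```python
# Hypothetical rules in the spirit of <permacrud>: the permission name
# and context field below are invented for illustration only.
PERMACRUD = {
    ("mous", "retrieve"): {"permission": "VIEW_USER", "contextfield": "usr"},
}

def allowed(user_perms, class_hint, action):
    """True if the user holds the permission the rule names; deny when
    no rule is registered for the class/action pair."""
    rule = PERMACRUD.get((class_hint, action))
    return rule is not None and rule["permission"] in user_perms
```

A real implementation would additionally resolve the context field to an org unit and check that the permission was granted at (or above) that library.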
- + The <retrieve> element is one of four possible children of the <actions> element that define the permissions required for each action: create, retrieve, update, and delete. - • + • The permission attribute identifies the name of the permission that must have been granted to the user to perform the action. - • + • The contextfield attribute, if it exists, defines the field in this class that identifies the library within the system for which the user must have @@ -13886,7 +13910,7 @@ __PACKAGE__->register_method( at that library. - + The rarely-used <context> element identifies a linked field (link attribute) in this class which links to an external class that holds the field @@ -13932,7 +13956,7 @@ __PACKAGE__->register_method( In the following example, the text splitting method has been reimplemented to support streaming; very few changes are required: Text splitting method - streaming mode.  - + sub text_split { my $self = shift; my $conn = shift; @@ -13953,13 +13977,13 @@ __PACKAGE__->register_method( ); - + Rather than returning a reference to the array, a streaming method loops over the contents of the array and invokes the respond() method of the connection object on each element of the array. - + Registering the method as a streaming method instructs OpenSRF to also register an atomic variant (opensrf.simple-text.split.atomic). @@ -13980,7 +14004,7 @@ __PACKAGE__->register_method( You can include many calls to the OpenSRF logger; only those that are higher than your configured logging level will actually hit the log. The following example exercises all of the available logging levels in OpenSRF: - + use OpenSRF::Utils::Logger; my $logger = OpenSRF::Utils::Logger; # some code in some function @@ -13996,7 +14020,7 @@ my $logger = OpenSRF::Utils::Logger; statements on a system running at the default logging level of INFO, you will only see the INFO, WARN, and ERR messages, as follows: Results of logging calls at the default level of INFO.  
- + [2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:] [2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:] [2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:] @@ -14005,7 +14029,7 @@ my $logger = OpenSRF::Utils::Logger; If you then increase the the logging level to INTERNAL (5), the logs will contain much more information, as follows: Results of logging calls at the default level of INTERNAL.  - + [2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:] [2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:] [2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:] @@ -14030,7 +14054,7 @@ my $logger = OpenSRF::Utils::Logger; Caching results: one secret of scalabilityCaching results: one secret of scalability - + If you have ever used an application that depends on a remote Web service outside of your control—say, if you need to retrieve results from a microblogging service—you know the pain of latency and dependability (or the @@ -14041,7 +14065,7 @@ my $logger = OpenSRF::Utils::Logger; by sleeping for 10 seconds the first time it receives a given cache key and cannot retrieve a corresponding value from the cache: Simple caching OpenSRF service.  - + use OpenSRF::Utils::Cache; sub test_cache { my $self = shift; @@ -14061,40 +14085,40 @@ sub test_cache { } - + The OpenSRF::Utils::Cache module provides access to the built-in caching support in OpenSRF. - + The constructor for the cache object accepts a single argument to define the cache type for the object. Each cache type can use a separate memcache server to keep the caches separated. Most Evergreen services use the global cache, while the anon cache is used for Web sessions. - + The cache key is simply a string that uniquely identifies the value you want to store or retrieve. This line creates a cache key based on the OpenSRF method name and request input value. 
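The `get_cache()`/`put_cache()` flow described above reduces to a generic cache-or-compute pattern. A minimal sketch in plain Python (a local dict stands in for the memcached server; `cached` is not the OpenSRF::Utils::Cache API):

```python
import time

_cache = {}  # stand-in for a memcached server: key -> (value, expires_at)

def cached(key, compute, cache_timeout=300):
    """Return the value for key from the cache, or compute it, store it
    with a lifetime of cache_timeout seconds, and return it."""
    entry = _cache.get(key)
    now = time.time()
    if entry is not None and entry[1] > now:
        return entry[0]            # cache hit: skip the slow call entirely
    value = compute()              # slow path (the 10-second sleep above)
    _cache[key] = (value, now + cache_timeout)
    return value
```

The first call for a given key pays the full cost of the computation; subsequent calls within the timeout window return immediately from the cache.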
- + The get_cache() method checks to see if the cache key already exists. If a matching key is found, the service immediately returns the stored value. - + If the cache key does not exist, the code sleeps for 10 seconds to simulate a call to a slow remote Web service or an intensive process. - + The $cache_timeout variable represents a value for the lifetime of the cache key in seconds. - + After the code retrieves its value (or, in the case of this example, finishes sleeping), it creates the cache entry by calling the put_cache() @@ -14125,7 +14149,7 @@ sub test_cache { The settings for OpenSRF services are maintained in the opensrf.xml XML configuration file. The structure of the XML document consists of a root element <opensrf> containing two child elements: - • + • The <default> element contains an <apps> element describing all OpenSRF services running on this system—see the section called “Registering a service with the OpenSRF configuration files” --, as @@ -14133,7 +14157,7 @@ sub test_cache { purposes. For example, Evergreen uses this section for email notification and inter-library patron privacy settings. - • + • The <hosts> element contains one element per host that participates in this OpenSRF system. Each host element must include an <activeapps> element @@ -14143,18 +14167,18 @@ sub test_cache { OpenSRF includes a service named opensrf.settings to provide distributed cached access to the configuration settings with a simple API: - • + • opensrf.settings.default_config.get accepts zero arguments and returns the complete set of default settings as a JSON document. - • + • opensrf.settings.host_config.get accepts one argument (hostname) and returns the complete set of settings, as customized for that hostname, as a JSON document. 
- • + • opensrf.settings.xpath.get accepts one argument (an XPath expression) and returns the portion of @@ -14178,7 +14202,7 @@ sub test_cache { OpenSRF Communication FlowsOpenSRF Communication Flows - + Now that you have seen that it truly is easy to create an OpenSRF service, we can take a look at what is going on under the covers to make all of this work for you. @@ -14210,7 +14234,7 @@ sub test_cache { OpenSRF communication flows over XMPPOpenSRF communication flows over XMPP - + In a minimal OpenSRF deployment, two XMPP users named "router" connect to the XMPP server, with one connected to the private XMPP domain and one connected to the public XMPP domain. Similarly, two XMPP users named "opensrf" connect to @@ -14250,7 +14274,7 @@ sub test_cache { OpenSRF communication flows over HTTPOpenSRF communication flows over HTTP - + In some contexts, access to a full XMPP client is not a practical option. For example, while XMPP clients have been implemented in JavaScript, you might be concerned about browser compatibility and processing overhead - or you might @@ -14262,7 +14286,7 @@ sub test_cache { OpenSRF requests and returns OpenSRF results as HTTP results to the initiating HTTP client. Issuing an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator.  - + # curl request broken up over multiple lines for legibility curl -H "X-OpenSRF-service: opensrf.simple-text" --data 'osrf-msg=[ \ @@ -14274,52 +14298,52 @@ curl -H "X-OpenSRF-service: opensrf.simple-text" - + The X-OpenSRF-service header identifies the OpenSRF service of interest. - + The POST request consists of a single parameter, the osrf-msg value, which contains a JSON array. - + The first object is an OpenSRF message ("__c":"osrfMessage") with a set of parameters ("__p":{}). - • + • The identifier for the request ("threadTrace":0); this value is echoed back in the result. - • + • The message type ("type":"REQUEST"). 
- • + • The locale for the message; if the OpenSRF method is locale-sensitive, it can check the locale for each OpenSRF request and return different information depending on the locale. - • + • The payload of the message ("payload":{}) containing the OpenSRF method request ("__c":"osrfMethod") and its parameters ("__p:"{}). - • + • The method name for the request ("method":"opensrf.simple-text.reverse"). - • + • A set of JSON parameters to pass to the method ("params":["foobar"]); in this case, a single string "foobar". - + The URL on which the OpenSRF HTTP translator is listening, /osrf-http-translator is the default location in the Apache example @@ -14327,7 +14351,7 @@ http://localhost/osrf-http-translator Results from an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator.  - + # HTTP response broken up over multiple lines for legibility [{"__c":"osrfMessage","__p": {"threadTrace":0, "payload": @@ -14345,63 +14369,63 @@ http://localhost/osrf-http-translator - + The OpenSRF HTTP Translator returns an array of JSON objects in its response. Each object in the response is an OpenSRF message ("__c":"osrfMessage") with a collection of response parameters ("__p":). - + The OpenSRF message identifier ("threadTrace":0) confirms that this message is in response to the request matching the same identifier. - + The message includes a payload JSON object ("payload":) with an OpenSRF result for the request ("__c":"osrfResult"). - + The result includes a status indicator string ("status":"OK"), the content of the result response - in this case, a single string "raboof" ("content":"raboof") - and an integer status code for the request ("statusCode":200). - + The message also includes the message type ("type":"RESULT") and the message locale ("locale":"en-CA"). - + The second message in the set of results from the response. - + Again, the message identifier confirms that this message is in response to a particular request. 
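Because the translator speaks plain JSON over HTTP, any HTTP client can unpack such a response. A sketch that extracts the result content from a response shaped like the one above (`result_contents` is a hypothetical helper):

```python
import json

# A response shaped like the translator example above, reduced to the
# fields used here.
RESPONSE_JSON = """
[{"__c":"osrfMessage","__p":{"threadTrace":0,
   "payload":{"__c":"osrfResult",
              "__p":{"status":"OK","content":"raboof","statusCode":200}},
   "type":"RESULT","locale":"en-CA"}},
 {"__c":"osrfMessage","__p":{"threadTrace":0,
   "payload":{"__c":"osrfConnectStatus",
              "__p":{"status":"Request Complete","statusCode":205}},
   "type":"RESULT","locale":"en-CA"}}]
"""

def result_contents(messages):
    """Collect the content of each osrfResult payload, skipping status
    messages such as osrfConnectStatus."""
    return [m["__p"]["payload"]["__p"]["content"]
            for m in messages
            if m["__p"]["payload"]["__c"] == "osrfResult"]

contents = result_contents(json.loads(RESPONSE_JSON))
```

Filtering on the `__c` class hint separates actual results from connection status messages, which matters for streaming methods that return many result messages before the final status.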
- + The payload of the message denotes that this message is an OpenSRF connection status message ("__c":"osrfConnectStatus"), with some information about the particular OpenSRF connection that was used for this request. - + The response parameters for an OpenSRF connection status message include a verbose status ("status":"Request Complete") and an integer status code for the connection status (`"statusCode":205). - + The message also includes the message type ("type":"RESULT") and the message locale ("locale":"en-CA"). - Before adding a new public OpenSRF service, ensure that it does + Before adding a new public OpenSRF service, ensure that it does not introduce privilege escalation or unchecked access to data. For example, the Evergreen open-ils.cstore private service is an object-relational mapper that provides read and write access to the entire Evergreen database, so it @@ -14444,7 +14468,7 @@ http://localhost/osrf-http-translator A request for opensrf.simple-text.reverse("foobar"):  - + <message from='router@private.localhost/opensrf.simple-text' to='opensrf@private.localhost/opensrf.simple-text_listener_at_localhost_6275' router_from='opensrf@private.localhost/_karmic_126678.3719_6288' @@ -14466,7 +14490,7 @@ http://localhost/osrf-http-translator A response from opensrf.simple-text.reverse("foobar").  - + <message from='opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285' to='opensrf@private.localhost/_karmic_126678.3719_6288' router_command='' router_class='' osrf_xid='' @@ -14502,63 +14526,63 @@ http://localhost/osrf-http-translator __PACKAGE__->register_method(): - • + • The method member specifies the name of the procedure in this module that is being registered as an OpenSRF method. - • + • The api_name member specifies the invocable name of the OpenSRF method; by convention, the OpenSRF service name is used as the prefix. 
- • + • The optional api_level member can be used for versioning the methods to allow the use of a deprecated API, but in practical use is always 1. - • + • The optional argc member specifies the minimal number of arguments that the method expects. - • + • The optional stream member, if set to any value, specifies that the method supports returning multiple values from a single call to subsequent requests. OpenSRF automatically creates a corresponding method with ".atomic" appended to its name that returns the complete set of results in a single request. Streaming methods are useful if you are returning hundreds of records and want to act on the results as they return. - • + • The optional signature member is a hash that describes the method’s purpose, arguments, and return value. - • + • The desc member of the signature hash describes the method’s purpose. - • + • The params member of the signature hash is an array of hashes in which each array element describes the corresponding method argument in order. - • + • The name member of the argument hash specifies the name of the argument. - • + • The desc member of the argument hash describes the argument’s purpose. - • + • The type member of the argument hash specifies the data type of the argument: for example, string, integer, boolean, number, array, or hash. - • + • The return member of the signature hash is a hash that describes the return value of the method. - • + • The desc member of the return hash describes the return value. - • + • The type member of the return hash specifies the data type of the return value: for example, string, integer, boolean, number, array, or hash. @@ -14573,92 +14597,92 @@ http://localhost/osrf-http-translator Evergreen is currently the primary showcase for the use of OpenSRF as an application architecture. Evergreen 1.6.1 includes the following set of OpenSRF services: - • + • The open-ils.actor service supports common tasks for working with user accounts and libraries. 
- • + • The open-ils.auth service supports authentication of Evergreen users. - • + • The open-ils.booking service supports the management of reservations for bookable items. - • + • The open-ils.cat service supports common cataloging tasks, such as creating, modifying, and merging bibliographic and authority records. - • + • The open-ils.circ service supports circulation tasks such as checking out items and calculating due dates. - • + • The open-ils.collections service supports tasks that assist collections agencies in contacting users with outstanding fines above a certain threshold. - • + • The open-ils.cstore private service supports unrestricted access to Evergreen fieldmapper objects. - • + • The open-ils.ingest private service supports tasks for importing data such as bibliographic and authority records. - • + • The open-ils.pcrud service supports permission-based access to Evergreen fieldmapper objects. - • + • The open-ils.penalty penalty service supports the calculation of penalties for users, such as being blocked from further borrowing, for conditions such as having too many items checked out or too many unpaid fines. - • + • The open-ils.reporter service supports the creation and scheduling of reports. - • + • The open-ils.reporter-store private service supports access to Evergreen fieldmapper objects for the reporting service. - • + • The open-ils.search service supports searching across bibliographic records, authority records, serial records, Z39.50 sources, and ZIP codes. - • + • The open-ils.storage private service supports a deprecated method of providing access to Evergreen fieldmapper objects. Implemented in Perl, this service has largely been replaced by the much faster C-based open-ils.cstore service. - • + • The open-ils.supercat service supports transforms of MARC records into other formats, such as MODS, as well as providing Atom and RSS feeds and SRU access. 
- • + • The open-ils.trigger private service supports event-based triggers for actions such as overdue and holds available notification emails. - • + • The open-ils.vandelay service supports the import and export of batches of bibliographic and authority records. @@ -14692,37 +14716,37 @@ http://localhost/osrf-http-translator Exploring the Database SchemaExploring the Database Schema - The database schema is tied pretty tightly to PostgreSQL. Although PostgreSQL - adheres closely to ANSI SQL standards, the use of schemas, SQL functions + The database schema is tied pretty tightly to PostgreSQL. Although PostgreSQL + adheres closely to ANSI SQL standards, the use of schemas, SQL functions implemented in both plpgsql and plperl, and PostgreSQL’s native full-text search would make it… challenging… to port to other database platforms. A few common PostgreSQL interfaces for poking around the schema and manipulating data are: - • + • - psql (the command line client) + psql (the command line client) - • + • - pgadminIII (a GUI client). + pgadminIII (a GUI client). Or you can read through the source files in Open-ILS/src/sql/Pg. Let’s take a quick tour through the schemas, pointing out some highlights and some key interdependencies: - • + • actor.org_unit → asset.copy_location - • + • actor.usr → actor.card - • + • biblio.record_entry → asset.call_number → asset.copy - • + • config.metabib_field → metabib.*_field_entry @@ -14740,7 +14764,7 @@ http://localhost/osrf-http-translator Evergreen Interface Definition Language (IDL)Evergreen Interface Definition Language (IDL) - + Defines properties and required permissions for Evergreen classes. To reduce network overhead, a given object is identified via a class-hint and serialized as a JSON array of properties (no named properties). 
@@ -14748,7 +14772,7 @@ http://localhost/osrf-http-translator fm_IDL.xml file and makes DRY people happier :) - • + • … oils_persist:readonly tells us, if true, that the data lives in the database, but is pulled from the SELECT statement defined in the <oils_persist:source_definition> child element @@ -14756,7 +14780,7 @@ http://localhost/osrf-http-translator IDL basic example (config.language_map)IDL basic example (config.language_map) - + <class id="clm" controller="open-ils.cstore open-ils.pcrud" oils_obj:fieldmapper="config::language_map" oils_persist:tablename="config.language_map" @@ -14779,34 +14803,34 @@ http://localhost/osrf-http-translator - + The class element defines the attributes and permissions for classes, and relationships between classes. - - • + + • The id attribute on the class element defines the class hint that is used everywhere in Evergreen. - • + • The controller attribute defines the OpenSRF services that provide access to the data for the class objects. - + The oils_obj::fieldmapper attribute defines the name of the class that is generated by OpenILS::Utils::Fieldmapper. - + The oils_persist:tablename attribute defines the name of the table that contains the data for the class objects. - + The reporter interface uses reporter:label attribute values in the source list to provide meaningful class and attribute names. The @@ -14814,7 +14838,7 @@ http://localhost/osrf-http-translator oils_persist:field_safe is true. For example, - + srfsh# request open-ils.fielder open-ils.fielder.clm.atomic \ {"query":{"code":{"=":"eng"}}} @@ -14827,57 +14851,57 @@ Received Data: [ ] - + The fields element defines the list of fields for the class. - - • + + • The oils_persist:primary attribute defines the column that acts as the primary key for the table. - • + • The oils_persist:sequence attribute holds the name of the database sequence. - + Each field element defines one property of the class. 
- - • + + • The name attribute defines the getter/setter method name for the field. - • + • The reporter:label attribute defines the attribute name as used in the reporter interface. - • + • The reporter:selector attribute defines the field used in the reporter filter interface to provide a selectable list. This gives the user a more meaningful access point than the raw numeric ID or abstract code. - • + • The reporter:datatype attribute defines the type of data held by this property for the purposes of the reporter. - + The oils_persist:i18n attribute, when true, means that translated values for the field’s contents may be accessible in different locales. - + - + The permacrud element defines the permissions (if any) required to create, retrieve, update, and delete data for this @@ -14885,32 +14909,32 @@ Received Data: [ for the permissions to be applied. - + Each action requires one or more permission values that the user must possess to perform the action. - • + • If the global_required attribute is true, then the user must have been granted that permission globally (depth = 0) to perform the action. - • + • The context_field attribute denotes the <field> that identifies the org_unit at which the user must have the pertinent permission. - • - + • + An action element may contain a <context_field> element that defines the linked class (identified by the link attribute) and the field in the linked class that identifies the org_unit where the permission must be held. 
- • - + • + If the <context_field> element contains a jump attribute, then it defines a link to a link to a class with a field identifying @@ -14922,40 +14946,40 @@ Received Data: [ Reporter data types and their possible valuesReporter data types and their possible values - • + • bool: Boolean true or false - • + • id: ID of the row in the database - • + • int: integer value - • + • interval: PostgreSQL time interval - • + • link: link to another class, as defined in the <links> element of the class definition - • + • money: currency amount - • + • org_unit: list of org_units - • + • text: text value - • + • timestamp: PostgreSQL timestamp @@ -14968,7 +14992,7 @@ Received Data: [ can contain fields that link to fields in other classes. The <links> element defines which fields link to fields in other classes, and the nature of the relationship: - + <class id="aws" controller="open-ils.cstore" oils_obj:fieldmapper="actor::workstation" oils_persist:tablename="actor.workstation" @@ -14994,16 +15018,16 @@ Received Data: [ </links> </class> - + This field includes an oils_persist:virtual attribute with the value of true, meaning that the linked class circ is a virtual class. - + The <links> element contains 0 or more <link> elements. - + Each <link> element defines the field (field) that links to a different class (class), the relationship (rel_type) between this field and the target @@ -15016,38 +15040,38 @@ Received Data: [ open-ils.cstore data access interfacesopen-ils.cstore data access interfaces - + For each class documented in the IDL, the open-ils.cstore service automatically generates a set of data access methods, based on the oils_persist:tablename class attribute. For example, for the class hint clm, cstore generates the following methods with the config.language_map qualifer: - • + • open-ils.cstore.direct.config.language_map.id_list {"code" { "like": "e%" } } Retrieves a list composed only of the IDs that match the query. 
- • + • open-ils.cstore.direct.config.language_map.retrieve "eng" Retrieves the object that matches a specific ID. - • + • open-ils.cstore.direct.config.language_map.search {"code" : "eng"} Retrieves a list of objects that match the query. - • + • open-ils.cstore.direct.config.language_map.create <_object_> Creates a new object from the passed in object. - • + • open-ils.cstore.direct.config.language_map.update <_object_> Updates the object that has been passed in. - • + • open-ils.cstore.direct.config.language_map.delete "eng" @@ -15056,33 +15080,33 @@ Received Data: [ open-ils.pcrud data access interfacesopen-ils.pcrud data access interfaces - + For each class documented in the IDL, the open-ils.pcrud service automatically generates a set of data access methods, based on the oils_persist:tablename class attribute. For example, for the class hint clm, open-ils.pcrud generates the following methods that parallel the open-ils.cstore interface: - • + • open-ils.pcrud.id_list.clm <_authtoken_>, { "code": { "like": "e%" } } - • + • open-ils.pcrud.retrieve.clm <_authtoken_>, "eng" - • + • open-ils.pcrud.search.clm <_authtoken_>, { "code": "eng" } - • + • open-ils.pcrud.create.clm <_authtoken_>, <_object_> - • + • open-ils.pcrud.update.clm <_authtoken_>, <_object_> - • + • open-ils.pcrud.delete.clm <_authtoken_>, "eng" @@ -15093,27 +15117,27 @@ Received Data: [ Both open-ils.cstore and open-ils.pcrud enable you to control database transactions to ensure that a set of operations either all succeed, or all fail, atomically: - • + • open-ils.cstore.transaction.begin - • + • open-ils.cstore.transaction.commit - • + • open-ils.cstore.transaction.rollback - • + • open-ils.pcrud.transaction.begin - • + • open-ils.pcrud.transaction.commit - • + • open-ils.pcrud.transaction.rollback @@ -15121,27 +15145,27 @@ Received Data: [ At a more granular level, open-ils.cstore and open-ils.pcrud enable you to set database savepoints to ensure that a set of operations either all succeed, or 
all fail, atomically, within a given transaction: - • + • open-ils.cstore.savepoint.begin - • + • open-ils.cstore.savepoint.commit - • + • open-ils.cstore.savepoint.rollback - • + • open-ils.pcrud.savepoint.begin - • + • open-ils.pcrud.savepoint.commit - • + • open-ils.pcrud.savepoint.rollback @@ -15151,19 +15175,19 @@ Received Data: [ In srfsh, you can open a stateful connection using the open command, and then close the stateful connection using the close command - for example: - srfsh# open open-ils.cstore + srfsh# open open-ils.cstore ... perform various transaction-related work srfsh# close open-ils.cstore JSON QueriesJSON Queries - + Beyond simply retrieving objects by their ID using the \*.retrieve methods, you can issue queries against the \*.delete and \*.search methods using JSON to filter results with simple or complex search conditions. For example, to generate a list of barcodes that are held in a copy location that allows holds and is visible in the OPAC: - + srfsh# request open-ils.cstore open-ils.cstore.json_query {"select": {"acp":["barcode"], "acpl":["name"]}, "from": {"acp":"acpl"}, @@ -15182,32 +15206,32 @@ Received Data: { "name":"Stacks" } - + Invoke the json_query service. - + Select the barcode field from the acp class and the name field from the acpl class. - + Join the acp class to the acpl class based on the linked field defined in the IDL. - + Add a where clause to filter the results. We have more than one condition beginning with the same key, so we wrap the conditions inside an array. - + The first condition tests whether the boolean value of the holdable field on the acpl class is true. - + The second condition tests whether the boolean value of the opac_visible field on the acpl class is true. @@ -15238,7 +15262,7 @@ Received Data: { open-ils.cstore.direct.\*.retrieve methods allow you to specify a JSON structure defining the fields you wish to flesh in the returned object. Fleshing fields in objects returned by open-ils.cstore.  
- + srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ { "flesh": 1, @@ -15248,13 +15272,13 @@ srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ } - + The flesh argument is the depth at which objects should be fleshed. For example, to flesh out a field that links to another object that includes a field that links to another object, you would specify a depth of 2. - + The flesh_fields argument contains a list of objects with the fields to flesh for each object. @@ -15264,7 +15288,7 @@ srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ let’s also flesh the call number attached to the copy, and then flesh the bibliographic record attached to the call number. Fleshing fields in fields of objects returned by open-ils.cstore.  - + request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ { "flesh": 2, @@ -15295,71 +15319,71 @@ request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ oils_persist:virtual class attribute to true. So, let’s add an entry to the IDL for the open-ils.resolver.resolve_holdings service: - + And let’s make ResolverResolver.pm return an array composed of our new rhr classes rather than raw JSON objects: - + Once we add the new entry to the IDL and copy the revised ResolverResolver.pm Perl module to /openils/lib/perl5/OpenILS/Application/, we need to: - 1. + 1. Copy the updated IDL to both the /openils/conf/ and /openils/var/web/reports/ directories. The Dojo approach to parsing the IDL uses the IDL stored in the reports directory. - 2. + 2. Restart the Perl services to make the new IDL visible to the services and refresh the open-ils.resolver implementation - 3. + 3. - Rerun /openils/bin/autogen.sh to regenerate the JavaScript versions + Rerun /openils/bin/autogen.sh to regenerate the JavaScript versions of the IDL required by the HTTP translator and gateway. 
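All of the JSON arguments used in this section — the json_query filter and the flesh / flesh_fields specification alike — are ordinary nested data structures, so a client can build them natively and serialize them rather than hand-writing JSON strings. A minimal Python sketch; the field lists mirror the srfsh examples above and should be treated as illustrative:

```python
import json

# json_query filter: barcodes and copy-location names for holdable,
# OPAC-visible locations (mirrors the srfsh json_query example above)
query = {
    "select": {"acp": ["barcode"], "acpl": ["name"]},
    "from": {"acp": "acpl"},
    "where": [
        {"+acpl": "holdable"},
        {"+acpl": "opac_visible"},
    ],
}

# flesh specification: flesh the copy's call number, then the call
# number's bibliographic record (depth 2)
flesh_spec = {
    "flesh": 2,
    "flesh_fields": {"acp": ["call_number"], "acn": ["record"]},
}

# Serialize for transport; the structure round-trips cleanly
payload = json.dumps(query)
assert json.loads(payload) == query
print(json.dumps(flesh_spec))
```

Because the structures are plain dictionaries and lists, conditions can be assembled programmatically — for example, appending extra clauses to the where list — before serialization.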
- We also need to adjust our JavaScript client to use the nifty new + We also need to adjust our JavaScript client to use the nifty new objects that open-ils.resolver.resolve_holdings now returns. - The best approach is to use the support in Evergreen’s Dojo extensions + The best approach is to use the support in Evergreen’s Dojo extensions to generate the JavaScript classes directly from the IDL XML file. Accessing classes defined in the IDL via Fieldmapper.  - + - + Load the Dojo core. - + fieldmapper.AutoIDL reads /openils/var/reports/fm_IDL.xml to generate a list of class properties. - + fieldmapper.dojoData seems to provide a store for Evergreen data accessed via Dojo. - + fieldmapper.Fieldmapper converts the list of class properties into actual classes. - + fieldmapper.standardRequest invokes an OpenSRF method and returns an array of objects. - + The first argument to fieldmapper.standardRequest is an array containing the OpenSRF service name and method name. - + The second argument to fieldmapper.standardRequest is an array containing the arguments to pass to the OpenSRF method. - + As Fieldmapper has instantiated the returned objects based on their class hints, we can invoke getter/setter methods on the objects. @@ -15379,8 +15403,8 @@ request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ Introduction to SQL DatabasesIntroduction to SQL Databases - - IntroductionIntroduction + + IntroductionIntroduction Over time, the SQL database has become the standard method of storing, retrieving, and processing raw data for applications. Ranging from embedded @@ -15399,7 +15423,7 @@ request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ TablesTables - + The table is the cornerstone of a SQL database. Conceptually, a database table is similar to a single sheet in a spreadsheet: every table has one or more columns, with each row in the table containing values for each column. 
Each @@ -15408,7 +15432,7 @@ request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ worry if the INSERT statement is completely unfamiliar, we’ll talk more about the syntax of the insert statement later. actor.usr_note database table.  - + evergreen=# INSERT INTO actor.usr_note (usr, creator, pub, title, value) VALUES (1, 1, TRUE, 'Who is this guy?', 'He''s the administrator!'); @@ -15433,13 +15457,13 @@ evergreen=# select id, usr, creator, pub, title, value from actor.usr_note; schemas as namespaces if you’re a programmer; or you might think of the schema / table / column relationship like the area code / exchange / local number structure of a telephone number. - Table 42.1. Examples: database object namesFull name Schema name Table name Field nameactor.usr_note.titleactorusr_notetitlebiblio.record_entry.marcbibliorecord_entrymarc + Table 42.1. Examples: database object namesFull name Schema name Table name Field nameactor.usr_note.titleactorusr_notetitlebiblio.record_entry.marcbibliorecord_entrymarc The default schema name in PostgreSQL is public, so if you do not specify a schema name when creating or accessing a database object, PostgreSQL will use the public schema. As a result, you might not find the object that you’re looking for if you don’t use the appropriate schema. Example: Creating a table without a specific schema.  - + evergreen=# CREATE TABLE foobar (foo TEXT, bar TEXT); CREATE TABLE evergreen=# \d foobar @@ -15451,7 +15475,7 @@ evergreen=# \d foobar Example: Trying to access a unqualified table outside of the public schema.  - evergreen=# SELECT * FROM usr_note; + evergreen=# SELECT * FROM usr_note; ERROR: relation "usr_note" does not exist LINE 1: SELECT * FROM usr_note; ^ @@ -15459,8 +15483,8 @@ evergreen=# \d foobar Evergreen uses schemas to organize all of its tables with mostly intuitive, if short, schema names. Here’s the current (as of 2010-01-03) list of schemas used by Evergreen: - Table 42.2. 
Evergreen schema namesSchema name DescriptionacqAcquisitionsactionCirculation actionsaction_triggerEvent mechanismsactorEvergreen users and organization unitsassetCall numbers and copiesauditorTrack history of changes to selected tablesauthorityAuthority recordsbiblioBibliographic recordsbookingResource bookingsconfigEvergreen configurable optionscontainerBuckets for records, call numbers, copies, and usersextend_reporterExtra views for report definitionsmetabibMetadata about bibliographic recordsmoneyFines and billsofflineOffline transactionspermissionUser permissionsqueryStored SQL statementsreporterReport definitionssearchSearch functionsserialSerial MFHD recordsstatsConvenient views of circulation and asset statisticsvandelayMARC batch importer and exporter - The term schema has two meanings in the world of SQL databases. We have + Table 42.2. Evergreen schema namesSchema name DescriptionacqAcquisitionsactionCirculation actionsaction_triggerEvent mechanismsactorEvergreen users and organization unitsassetCall numbers and copiesauditorTrack history of changes to selected tablesauthorityAuthority recordsbiblioBibliographic recordsbookingResource bookingsconfigEvergreen configurable optionscontainerBuckets for records, call numbers, copies, and usersextend_reporterExtra views for report definitionsmetabibMetadata about bibliographic recordsmoneyFines and billsofflineOffline transactionspermissionUser permissionsqueryStored SQL statementsreporterReport definitionssearchSearch functionsserialSerial MFHD recordsstatsConvenient views of circulation and asset statisticsvandelayMARC batch importer and exporter + The term schema has two meanings in the world of SQL databases. We have discussed the schema as a conceptual grouping of tables and other database objects within a given namespace; for example, "the actor schema contains the tables and functions related to users and organizational units". 
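The schema.table.column convention shown in Table 42.1 is purely positional, which makes qualified names easy to take apart in scripts that generate reports or documentation. A small sketch, using a hypothetical helper and assuming fully qualified three-part names:

```python
def split_qualified(name: str) -> tuple:
    """Split a fully qualified 'schema.table.column' name into its parts."""
    schema, table, column = name.split(".")
    return schema, table, column

# The two rows from Table 42.1
print(split_qualified("actor.usr_note.title"))      # ('actor', 'usr_note', 'title')
print(split_qualified("biblio.record_entry.marc"))  # ('biblio', 'record_entry', 'marc')
```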
Another common @@ -15470,23 +15494,23 @@ evergreen=# \d foobar ColumnsColumns Each column definition consists of: - • + • a data type - • + • (optionally) a default value to be used whenever a row is inserted that does not contain a specific value - • + • (optionally) one or more constraints on the values beyond data type Although PostgreSQL supports dozens of data types, Evergreen makes our life easier by only using a handful. - Table 42.3. PostgreSQL data types used by EvergreenType name Description LimitsINTEGERMedium integer-2147483648 to +2147483647BIGINTLarge integer-9223372036854775808 to 9223372036854775807SERIALSequential integer1 to 2147483647BIGSERIALLarge sequential integer1 to 9223372036854775807TEXTVariable length character dataUnlimited lengthBOOLBooleanTRUE or FALSETIMESTAMP WITH TIME ZONETimestamp4713 BC to 294276 ADTIMETimeExpressed in HH:MM:SSNUMERIC(precision, scale)DecimalUp to 1000 digits of precision. In Evergreen mostly used for money + Table 42.3. PostgreSQL data types used by EvergreenType name Description LimitsINTEGERMedium integer-2147483648 to +2147483647BIGINTLarge integer-9223372036854775808 to 9223372036854775807SERIALSequential integer1 to 2147483647BIGSERIALLarge sequential integer1 to 9223372036854775807TEXTVariable length character dataUnlimited lengthBOOLBooleanTRUE or FALSETIMESTAMP WITH TIME ZONETimestamp4713 BC to 294276 ADTIMETimeExpressed in HH:MM:SSNUMERIC(precision, scale)DecimalUp to 1000 digits of precision. In Evergreen mostly used for money values, with a precision of 6 and a scale of 2 (####.##). Full details about these data types are available from the data types section of @@ -15510,7 +15534,7 @@ evergreen=# \d foobar Natural primary keys are drawn from the intrinsic properties of the data being modelled. For example, some potential natural primary keys for a table that contains people would be: - Table 42.4. 
Example: Some potential natural primary keys for a table of peopleNatural key Pros ConsFirst name, last name, addressNo two people with the same name would ever live at the same address, right?Lots of columns force data duplication in referencing tablesSSN or driver’s licenseThese are guaranteed to be uniqueLots of people don’t have an SSN or a driver’s license + Table 42.4. Example: Some potential natural primary keys for a table of peopleNatural key Pros ConsFirst name, last name, addressNo two people with the same name would ever live at the same address, right?Lots of columns force data duplication in referencing tablesSSN or driver’s licenseThese are guaranteed to be uniqueLots of people don’t have an SSN or a driver’s license To avoid problems with natural keys, many applications instead define surrogate primary keys. A surrogate primary keys is a column with an autoincrementing integer value added to a table definition that ensures uniqueness. @@ -15526,7 +15550,7 @@ evergreen=# \d foobar contained in the biblio.record_entry table can have call numbers attached to them. Call numbers are contained in the asset.call_number table, and they can have copies attached to them. Copies are contained in the asset.copy table. - Table 42.5. Example: Evergreen’s copy / call number / bibliographic record relationshipsTable Primary key Column with a foreign key Points toasset.copyasset.copy.idasset.copy.call_numberasset.call_number.idasset.call_numberasset.call_number.idasset.call_number.recordbiblio.record_entry.idbiblio.record_entrybiblio.record_entry.id + Table 42.5. 
Example: Evergreen’s copy / call number / bibliographic record relationshipsTable Primary key Column with a foreign key Points toasset.copyasset.copy.idasset.copy.call_numberasset.call_number.idasset.call_numberasset.call_number.idasset.call_number.recordbiblio.record_entry.idbiblio.record_entrybiblio.record_entry.id Check constraintsCheck constraints @@ -15541,7 +15565,7 @@ evergreen=# \d foobar The actor.org_address table is a simple table in the Evergreen schema that we can use as a concrete example of many of the properties of databases that we have discussed so far. - + CREATE TABLE actor.org_address ( id SERIAL PRIMARY KEY, valid BOOL NOT NULL DEFAULT TRUE, @@ -15557,7 +15581,7 @@ CREATE TABLE actor.org_address ( post_code TEXT NOT NULL ); - + The column named id is defined with a special data type of SERIAL; if given no value when a row is inserted into a table, the database automatically @@ -15565,7 +15589,7 @@ CREATE TABLE actor.org_address ( popular data type for a primary key because it is guaranteed to be unique - and indeed, the constraint for this column identifies it as the PRIMARY KEY. - + The data type BOOL defines a boolean value: TRUE or FALSE are the only acceptable values for the column. The constraint NOT NULL instructs the @@ -15573,20 +15597,20 @@ CREATE TABLE actor.org_address ( property DEFAULT TRUE instructs the database to automatically set the value of the column to TRUE if no value is provided. - + The data type TEXT defines a text column of practically unlimited length. As with the previous column, there is a NOT NULL constraint, and a default value of 'MAILING' will result if no other value is supplied. - + The REFERENCES actor.org_unit (id) clause indicates that this column has a foreign key relationship to the actor.org_unit table, and that the value of this column in every row in this table must have a corresponding value in the id column in the referenced table (actor.org_unit). 
- + The column named street2 demonstrates that not all columns have constraints beyond data type. In this case, the column is allowed to be NULL or to contain a @@ -15608,7 +15632,7 @@ CREATE TABLE actor.org_address ( To display the definition of a database object such as a table, issue the command \d _object-name_. For example, to display the definition of the actor.usr_note table: - + $ psql evergreen psql (8.4.1) Type "help" for help. @@ -15635,16 +15659,16 @@ Foreign-key constraints: evergreen=# \q $ - + This is the most basic connection to a PostgreSQL database. You can use a number of other flags to specify user name, hostname, port, and other options. - + The \d command displays the definition of a database object. - + The \q command quits the psql session and returns you to the shell prompt. @@ -15657,7 +15681,7 @@ $ The SELECT statement is the basic tool for retrieving information from a database. The syntax for most SELECT statements is: - SELECT [columns(s)] + SELECT [columns(s)] FROM [table(s)] [WHERE condition(s)] [GROUP BY columns(s)] @@ -15668,7 +15692,7 @@ $ ; For example, to select all of the columns for each row in the actor.usr_address table, issue the following query: - SELECT * + SELECT * FROM actor.usr_address ; @@ -15680,7 +15704,7 @@ $ you want after the SELECT keyword. Separate each column name with a comma. 
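Conceptually, selecting a subset of columns is a projection over the result rows. Modelling rows as dictionaries makes the operation easy to see outside the database; the rows below are hypothetical actor.usr_address data, not real records:

```python
# Hypothetical actor.usr_address rows, modelled as dictionaries
rows = [
    {"id": 1, "city": "Hartford", "county": "Hartford",
     "state": "Connecticut", "post_code": "06103"},
    {"id": 2, "city": "New Haven", "county": "New Haven",
     "state": "Connecticut", "post_code": "06510"},
]

# SELECT * FROM actor.usr_address: every column of every row
all_columns = rows

# SELECT city, county, state FROM actor.usr_address:
# keep only the named columns of each row
projection = [{k: r[k] for k in ("city", "county", "state")} for r in rows]

print(projection[0])
```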
For example, to select just the city, county, and state from the actor.usr_address table, issue the following query: - SELECT city, county, state + SELECT city, county, state FROM actor.usr_address ; @@ -15694,7 +15718,7 @@ $ For example, to sort the rows returned from your actor.usr_address query by city, with county and then zip code as the tie breakers, issue the following query: - + SELECT city, county, state FROM actor.usr_address ORDER BY city, county, post_code @@ -15712,7 +15736,7 @@ SELECT city, county, state For example, to restrict the results returned from your actor.usr_address query to only those rows containing a state value of Connecticut, issue the following query: - + SELECT city, county, state FROM actor.usr_address WHERE state = 'Connecticut' @@ -15724,7 +15748,7 @@ SELECT city, county, state actor.usr_address query to only those rows where the state column contains a value of Connecticut and the city column contains a value of Hartford, issue the following query: - + SELECT city, county, state FROM actor.usr_address WHERE state = 'Connecticut' @@ -15732,13 +15756,13 @@ SELECT city, county, state ORDER BY city, county, post_code ; - To return rows where the state is Connecticut and the city is Hartford or + To return rows where the state is Connecticut and the city is Hartford or New Haven, you must use parentheses to explicitly group the city value conditions together, or else the database will evaluate the OR city = 'New Haven' clause entirely on its own and match all rows where the city column is New Haven, even though the state might not be Connecticut. Trouble with OR.  - + SELECT city, county, state FROM actor.usr_address WHERE state = 'Connecticut' @@ -15750,7 +15774,7 @@ SELECT city, county, state Grouped OR’ed conditions.  
- + SELECT city, county, state FROM actor.usr_address WHERE state = 'Connecticut' @@ -15768,34 +15792,34 @@ SELECT city, county, state WHERE clauses: Comparing two scalar valuesComparing two scalar values - • + • x = y (equal to) - • + • x != y (not equal to) - • + • x < y (less than) - • + • x > y (greater than) - • + • x LIKE y (TEXT value x matches a subset of TEXT y, where y is a string that can contain % as a wildcard for 0 or more characters, and _ as a wildcard for a single character. For example, WHERE 'all you can eat fish and chips and a big stick' LIKE '%fish%stick' would return TRUE) - • + • x ILIKE y (like LIKE, but the comparison ignores upper-case / lower-case) - • + • x IN y (x is in the list of values y, where y can be a list or a SELECT statement that returns a list) @@ -15813,7 +15837,7 @@ SELECT city, county, state To return rows from a table where a given column is not NULL, use the IS NOT NULL comparison operator. Retrieving rows where a column is not NULL.  - + SELECT id, first_given_name, family_name FROM actor.usr WHERE second_given_name IS NOT NULL @@ -15823,7 +15847,7 @@ SELECT id, first_given_name, family_name Similarly, to return rows from a table where a given column is NULL, use the IS NULL comparison operator. Retrieving rows where a column is NULL.  - + SELECT id, first_given_name, second_given_name, family_name FROM actor.usr WHERE second_given_name IS NULL @@ -15839,7 +15863,7 @@ SELECT id, first_given_name, second_given_name, family_name indistinguishable from an empty string; this is the default display method in psql. You can change the behaviour of psql using the pset command: Changing the way NULL values are displayed in psql.  - + evergreen=# \pset null '(null)' Null display is '(null)'. @@ -15868,7 +15892,7 @@ SELECT id, first_given_name, second_given_name, family_name For example, to change the last name of a user in the actor.usr table to L’estat, issue the following SQL: Escaping ' in TEXT values.  
- + UPDATE actor.usr SET family_name = 'L''estat' WHERE profile IN ( @@ -15880,7 +15904,7 @@ UPDATE actor.usr When you retrieve the row from the database, the value is displayed with just a single ' character: - + SELECT id, family_name FROM actor.usr WHERE family_name = 'L''estat' @@ -15899,7 +15923,7 @@ SELECT id, family_name results for a range of values in a single query, rather than requiring you to issue one query per target value. Returning unique results of a single column with GROUP BY.  - + SELECT grp FROM permission.grp_perm_map GROUP BY grp @@ -15923,7 +15947,7 @@ SELECT grp following query shows us which groups have permissions at each depth in the library hierarchy: Returning unique results of multiple columns with GROUP BY.  - + SELECT grp, depth FROM permission.grp_perm_map GROUP BY grp, depth @@ -15954,7 +15978,7 @@ SELECT grp, depth appears in the table. Yes, this is a sneak peek at the use of aggregate functions! Keeners. Counting unique column combinations with GROUP BY.  - + SELECT grp, depth, COUNT(grp) FROM permission.grp_perm_map GROUP BY grp, depth @@ -15984,7 +16008,7 @@ SELECT grp, depth, COUNT(grp) is applied to the results. The following query restricts the results to those rows that have a depth of 0. Using the WHERE clause with GROUP BY.  - + SELECT grp, COUNT(grp) FROM permission.grp_perm_map WHERE depth = 0 @@ -16009,7 +16033,7 @@ SELECT grp, COUNT(grp) the following query restricts the returned rows to those that have more than 5 occurrences of the same value for grp in the table. GROUP BY restricted by a HAVING clause.  - + SELECT grp, COUNT(grp) FROM permission.grp_perm_map GROUP BY grp @@ -16039,7 +16063,7 @@ SELECT grp, COUNT(grp) them, then applying the DISTINCT keyword might be a sign that you are papering over a real problem. Returning unique results of multiple columns with DISTINCT.  
- + SELECT DISTINCT grp, depth FROM permission.grp_perm_map ORDER BY depth, grp @@ -16073,7 +16097,7 @@ SELECT DISTINCT grp, depth example, in the following query we list the five most frequently used circulation modifiers: Using the LIMIT clause to restrict results.  - + SELECT circ_modifier, COUNT(circ_modifier) FROM asset.copy GROUP BY circ_modifier @@ -16098,7 +16122,7 @@ SELECT circ_modifier, COUNT(circ_modifier) In the following example, we use the OFFSET clause to get results 6 through 10 from the same query that we prevously executed. Using the OFFSET clause to return a specific subset of rows.  - + SELECT circ_modifier, COUNT(circ_modifier) FROM asset.copy GROUP BY circ_modifier @@ -16136,7 +16160,7 @@ SELECT circ_modifier, COUNT(circ_modifier) used to convert text values to upper-case, modifies the results in the following set of queries: Using the UPPER() scalar function to convert text values to upper-case.  - + -- First, without the UPPER() function for comparison SELECT shortname, name FROM actor.org_unit @@ -16166,27 +16190,27 @@ SELECT shortname, UPPER(name) There are so many scalar functions in PostgreSQL that we cannot cover them all here, but we can list some of the most commonly used functions: - • + • || - concatenates two text values together - • + • COALESCE() - returns the first non-NULL value from the list of arguments - • + • LOWER() - returns a text value converted to lower-case - • + • REPLACE() - returns a text value after replacing all occurrences of a given text value with a different text value - • + • REGEXP_REPLACE() - returns a text value after being transformed by a regular expression - • + • UPPER() - returns a text value converted to upper-case @@ -16198,23 +16222,23 @@ SELECT shortname, UPPER(name) Aggregate functions return a single value computed from the the complete set of values returned for the specified column. 
- • + • AVG() - • + • COUNT() - • + • MAX() - • + • MIN() - • + • SUM() @@ -16233,7 +16257,7 @@ SELECT shortname, UPPER(name) returned by the main SELECT statement to only those locations that have an opac_visible value of TRUE: Sub-select example.  - + SELECT call_number FROM asset.copy WHERE deleted IS FALSE @@ -16284,7 +16308,7 @@ SELECT call_number we have to fully qualify the column names in our queries with the schema and table names. A simple inner join.  - + SELECT * FROM actor.usr INNER JOIN actor.org_unit ON actor.usr.home_ou = actor.org_unit.id @@ -16326,7 +16350,7 @@ fiscal_calendar | 1 joining the actor.org_unit table to give us access to the user’s home library: Borrower Count by Profile (Adult, Child, etc)/Library.  - + SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree.name) FROM actor.usr INNER JOIN permission.grp_tree @@ -16363,7 +16387,7 @@ SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree. example, we can write the previous INNER JOIN statement example using aliases instead of fully-qualified identifiers: Borrower Count by Profile (using aliases).  - + SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" FROM actor.usr au INNER JOIN permission.grp_tree pgt @@ -16391,19 +16415,19 @@ SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" An outer join returns all of the rows from one or both of the tables participating in the join. - • + • For a LEFT OUTER JOIN, the join returns all of the rows from the left-hand table and the rows matching the join condition from the right-hand table, with NULL values for the rows with no match in the right-hand table. - • + • A RIGHT OUTER JOIN behaves in the same way as a LEFT OUTER JOIN, with the exception that all rows are returned from the right-hand table participating in the join. 
- • + • For a FULL OUTER JOIN, the join returns all the rows from both the left-hand and right-hand tables, with NULL values for the rows with no match in either @@ -16411,7 +16435,7 @@ SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" Base tables for the OUTER JOIN examples.  - + SELECT * FROM aaa; id | stuff @@ -16435,7 +16459,7 @@ SELECT * FROM bbb; Example of a LEFT OUTER JOIN.  - + SELECT * FROM aaa LEFT OUTER JOIN bbb ON aaa.id = bbb.id ; @@ -16450,7 +16474,7 @@ SELECT * FROM aaa Example of a RIGHT OUTER JOIN.  - + SELECT * FROM aaa RIGHT OUTER JOIN bbb ON aaa.id = bbb.id ; @@ -16464,7 +16488,7 @@ SELECT * FROM aaa Example of a FULL OUTER JOIN.  - + SELECT * FROM aaa FULL OUTER JOIN bbb ON aaa.id = bbb.id ; @@ -16494,7 +16518,7 @@ SELECT * FROM aaa of columns with compatible data types: the union, intersection, and difference operators. Base tables for the set operation examples.  - + SELECT * FROM aaa; id | stuff @@ -16524,7 +16548,7 @@ SELECT * FROM bbb; does not return any duplicate rows. To return duplicate rows, use the UNION ALL operator. Example of a UNION set operation.  - + -- The parentheses are not required, but are intended to help -- illustrate the sets participating in the set operation ( @@ -16557,7 +16581,7 @@ ORDER BY 1 both the left-hand and right-hand tables. To return duplicate rows, use the INTERSECT ALL operator. Example of an INTERSECT set operation.  - + ( SELECT id, stuff FROM aaa @@ -16585,7 +16609,7 @@ ORDER BY 1 exist in the right-hand table. You are effectively subtracting the common rows from the left-hand table. Example of an EXCEPT set operation.  - + ( SELECT id, stuff FROM aaa @@ -16633,7 +16657,7 @@ ORDER BY 1 and a SELECT statement on which the view is built. The following example creates a view based on our borrower profile count: Creating a view.  
- + CREATE VIEW actor.borrower_profile_count AS SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" FROM actor.usr au @@ -16655,7 +16679,7 @@ CREATE VIEW actor.borrower_profile_count AS Then we issue a SELECT statement with a WHERE clause to further filter the results. Selecting results from a view.  - + SELECT * FROM actor.borrower_profile_count; Profile | Library | Count @@ -16723,7 +16747,7 @@ SELECT * row in the biblio.record_entry table with every row in the metabib.full_rec view: Query plan for a terrible query.  - + EXPLAIN SELECT * FROM biblio.record_entry FULL OUTER JOIN metabib.full_rec ON 1=1 @@ -16744,7 +16768,7 @@ EXPLAIN SELECT * the left-hand table with every row in the right-hand table and take the saner approach of using an INNER JOIN where the join condition is on the record ID. Query plan for a less terrible query.  - + EXPLAIN SELECT * FROM biblio.record_entry bre INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id; @@ -16763,7 +16787,7 @@ EXPLAIN SELECT * the following example, we will limit the results to deleted records that were modified in the last month. Query plan for a realistic query.  - + EXPLAIN SELECT * FROM biblio.record_entry bre INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id @@ -16797,7 +16821,7 @@ EXPLAIN SELECT * flattened MARC subfields is a fairly common operation, we could create a new index and see if that speeds up our query plan. Query plan with optimized access via a new index.  - + -- This index will take a long time to create on a large database -- of bibliographic records CREATE INDEX bib_record_idx ON metabib.real_full_rec (record); @@ -16828,7 +16852,7 @@ EXPLAIN SELECT * the execution estimate has dropped to 1558330 because the query planner can use the new index (bib_record_idx) rather than scanning the entire table. Success! 
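The speedup the planner found is the classic table-scan versus index-lookup trade: without an index, matching metabib.full_rec rows to a record means examining every row, while an index keyed on record jumps straight to the matching rows. A toy sketch of the idea, using hypothetical data rather than the real tables:

```python
# Hypothetical metabib.full_rec-style rows: (record, tag, value)
full_rec = [(r, "245", f"Title {r}") for r in range(1, 100001)]

# Sequential scan: examine every row to find matches for one record
seq_matches = [row for row in full_rec if row[0] == 54321]

# Index: a one-time structure mapping record -> rows, analogous to
# CREATE INDEX bib_record_idx ON metabib.real_full_rec (record)
index = {}
for row in full_rec:
    index.setdefault(row[0], []).append(row)

idx_matches = index[54321]   # direct lookup, no scan

assert seq_matches == idx_matches
```

The dictionary also illustrates the maintenance cost noted below: every new row must be appended to the index as well as to the table.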
- While indexes can significantly speed up read access to tables for common
 filtering conditions, every time a row is created or updated the corresponding
 indexes also need to be maintained - which can decrease the performance of
 writes to the database. Be careful to keep the balance of read performance
@@ -16848,14 +16872,14 @@ EXPLAIN SELECT *
 (this multiple row syntax requires PostgreSQL 8.2 or higher). For example,
 to insert two rows into the permission.usr_grp_map table:
 Inserting rows into the permission.usr_grp_map table. 
- INSERT INTO permission.usr_grp_map (usr, grp)
+ INSERT INTO permission.usr_grp_map (usr, grp)
VALUES (2, 10), (2, 4)
;
 Of course, as with the rest of SQL, you can replace individual column values
 with one or more sub-selects:
 Inserting rows using sub-selects instead of integers. 
- 
+ 
INSERT INTO permission.usr_grp_map (usr, grp)
 VALUES (
 (SELECT id
 FROM actor.usr
@@ -16885,7 +16909,7 @@ INSERT INTO permission.usr_grp_map (usr, grp)
 the grp column value based on the id column values returned from
 permission.grp_tree:
 Inserting rows via a SELECT statement. 
- 
+ 
INSERT INTO permission.usr_grp_map (usr, grp)
 SELECT 1, id
 FROM permission.grp_tree
@@ -16905,7 +16929,7 @@
INSERT 0 6
 permission.grp_perm_map table where the permission maps to
 UPDATE_ORG_UNIT_CLOSING and the group is anything other than administrators:
 Deleting rows from a table. 
- 
+ 
DELETE FROM permission.grp_perm_map
 WHERE grp IN (
 SELECT id
@@ -16919,16 +16943,16 @@ DELETE FROM permission.grp_perm_map
 ;
 
- There are two main reasons that a DELETE statement may not actually
+ There are two main reasons that a DELETE statement may not actually
 delete rows from a table, even when the rows meet the conditional clause.
- 1. 
If the row contains a value that is the target of a relational constraint, for example, if another table has a foreign key pointing at your target table, you will be prevented from deleting a row with a value corresponding to a row in the dependent table. - 2. + 2. If the table has a rule that substitutes a different action for a DELETE statement, the deletion will not take place. In Evergreen it is common for a @@ -16950,7 +16974,7 @@ DELETE FROM permission.grp_perm_map respective new values, and (optionally) a WHERE clause identifying the row or rows that should be updated. Following is the syntax for the UPDATE statement: - UPDATE [table-name] + UPDATE [table-name] SET [column] TO [new-value] WHERE [condition] ; @@ -16963,7 +16987,7 @@ DELETE FROM permission.grp_perm_map Monthly circulation stats by collection code / libraryMonthly circulation stats by collection code / library Monthly Circulation Stats by Collection Code/Library.  - + SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Location" FROM asset.copy ac INNER JOIN asset.copy_location acl ON ac.location = acl.id @@ -16982,7 +17006,7 @@ SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Loca Monthly circulation stats by borrower stat / libraryMonthly circulation stats by borrower stat / library Monthly Circulation Stats by Borrower Stat/Library.  - + SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry AS "Borrower Stat" FROM action.circulation acirc INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id @@ -17002,7 +17026,7 @@ SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry Monthly intralibrary loan stats by libraryMonthly intralibrary loan stats by library Monthly Intralibrary Loan Stats by Library.  
- + SELECT aou.name AS "Library", COUNT(acirc.id) FROM action.circulation acirc INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id @@ -17022,7 +17046,7 @@ SELECT aou.name AS "Library", COUNT(acirc.id) Monthly borrowers added by profile (adult, child, etc) / libraryMonthly borrowers added by profile (adult, child, etc) / library Monthly Borrowers Added by Profile (Adult, Child, etc)/Library.  - + SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" FROM actor.usr au INNER JOIN permission.grp_tree pgt @@ -17040,7 +17064,7 @@ SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" Borrower count by profile (adult, child, etc) / libraryBorrower count by profile (adult, child, etc) / library Borrower Count by Profile (Adult, Child, etc)/Library.  - + SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" FROM actor.usr au INNER JOIN permission.grp_tree pgt @@ -17058,7 +17082,7 @@ SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" We define a “collection” as a shelving location in Evergreen. Monthly Items Added by Collection/Library.  - + SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) FROM actor.org_unit aou INNER JOIN asset.call_number acn ON acn.owning_lib = aou.id @@ -17081,7 +17105,7 @@ SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) bibliographic record ID and display the title / author information for those records that have more than a given threshold of holds. Hold Purchase Alert by Library.  - + -- Title holds SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_id) FROM @@ -17137,7 +17161,7 @@ SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_ Then we issue an UPDATE statement to set the home library for patrons with a physical address with a city that matches the city names in our staging table. Update borrower records with a different home library.  
- + CREATE SCHEMA staging; CREATE TABLE staging.city_home_ou_map (city TEXT, ou_shortname TEXT, FOREIGN KEY (ou_shortname) REFERENCES actor.org_unit (shortname)); @@ -17185,7 +17209,7 @@ WHERE ( Nevertheless, the need to encode a query in a JSON string adds complications, because the client needs to know how to build the right JSON. JSON queries are also somewhat limiting -- they can't do all of the things that you can do with raw SQL. - The IDLThe IDL + The IDLThe IDL A JSON query does not refer to tables and columns. Instead, it refers to classes and fields, which the IDL maps to the corresponding database entities. @@ -17202,27 +17226,27 @@ WHERE ( You should also be familiar with JSON. However it is worth defining a couple of terms that have other meanings in other contexts: - •An "object" is a JSON object, i.e. a comma-separated list of name:value pairs, enclosed in curly braces, like this: - { "a":"frobozz", "b":24, "c":null } - •An "array" is a JSON array, i.e. a comma-separated list of values, enclosed in square brackets, like this: - [ "Goober", 629, null, false, "glub" ] + •An "object" is a JSON object, i.e. a comma-separated list of name:value pairs, enclosed in curly braces, like this: + { "a":"frobozz", "b":24, "c":null } + •An "array" is a JSON array, i.e. a comma-separated list of values, enclosed in square brackets, like this: + [ "Goober", 629, null, false, "glub" ] - The ExamplesThe Examples + The ExamplesThe Examples The test_json_query utility generated the SQL for all of the sample queries in this tutorial. Newlines and indentation were then inserted manually for readability. All examples involve the actor.org_unit table, sometimes in combination with a few related tables. The queries themselves are designed to illustrate the syntax, not to do anything useful at the application level. For example, it's not meaningful to take the square root of an org_unit id, except to illustrate how to code a function call. 
The examples are like department store mannequins -- they have no brains, they're only for display.
 The simplest kind of query defines nothing but a FROM clause. For example:
- 
+ 
{
 "from":"aou"
}
 In this minimal example we select from only one table. Later we will see how to join multiple tables.
 Since we don't supply a SELECT clause, json_query constructs a default SELECT clause for us, including all the available columns. The resulting SQL looks like this:
- 
+ 
SELECT
 "aou".billing_address AS "billing_address",
 "aou".holds_address AS "holds_address",
@@ -17240,9 +17264,9 @@
FROM
 actor.org_unit AS "aou" ;
 
- Default SELECT ClausesDefault SELECT Clauses
+ Default SELECT ClausesDefault SELECT Clauses
 
- 
+ 
 The default SELECT clause includes every column that the IDL defines as a non-virtual field for the class in question.
 If a column is present in the database but not defined in the IDL, json_query doesn't know about it. In the case of the example
 shown above, all the columns are defined in the IDL, so they all show up in the default SELECT clause.
@@ -17253,28 +17277,28 @@
FROM
 In practice, the sequencing of columns in the SELECT clause is not significant. The result set is returned to the client program
 in the form of a data structure, which the client program can navigate however it chooses.
 
- Other LessonsOther Lessons
+ Other LessonsOther Lessons
 There are other ways to get a default SELECT clause. However, default SELECT clauses are a distraction at this point, because most
 of the time you'll specify your own SELECT clause explicitly, as we will discuss later.
 Let's consider some more important aspects of this simple example -- more important because they apply to more complex queries as well.
 
- • 
+ • 
 The entire JSON query is an object. In this simple case the object includes only one entry, for the FROM clause. Typically you'll
 also have entries for the SELECT clause and the WHERE clause, and possibly for HAVING, ORDER BY, LIMIT, or OFFSET clauses.
 
There is no separate entry for a GROUP BY clause, which you can specify by other means. - • + • Although all the other entries are optional, you must include an entry for the FROM clause. You cannot, for example, do a SELECT USER the way you can in psql. - • + • Every column is qualified by an alias for the table. This alias is always the class name for the table, as defined in the IDL. - • + • Every column is aliased with the column name. There is a way to choose a different column alias (not shown here). - The SELECT ClauseThe SELECT Clause + The SELECT ClauseThe SELECT Clause The following variation also produces a default SELECT clause: - + { "from":"aou", "select": { @@ -17283,7 +17307,7 @@ FROM } ...and so does this one: - + { "select": { "aou":null @@ -17295,7 +17319,7 @@ FROM with a key of “select”. The value associated with this key is another JSON object, whose keys are class names. (These two examples also illustrate another point: unlike SQL, a JSON query doesn't care whether the FROM clause or the SELECT clause comes first.) Usually you don't want the default SELECT clause. Here's how to select only some of the columns: - + { "from":"aou", "select": { @@ -17306,7 +17330,7 @@ FROM The value associated with the class name is an array of column names. If you select columns from multiple tables (not shown here), you'll need a separate entry for each table, and a separate column list for each entry. The previous example results in the following SQL: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -17314,26 +17338,26 @@ FROM actor.org_unit AS "aou" ; - Fancier SELECT ClausesFancier SELECT Clauses + Fancier SELECT ClausesFancier SELECT Clauses The previous example featured an array of column names. More generally, it featured an array of field specifications, and one kind of field specification is a column name. The other kind is a JSON object, with some combination of the following keys: - • + • “column” -- the column name (required). 
- • + • “alias” -- used to define a column alias, which otherwise defaults to the column name. - • + • “aggregate” -- takes a value of true or false. Don't worry about this one yet. It concerns the use of GROUP BY clauses, which we will examine later. - • + • “transform” -- the name of an SQL function to be called. - • + • “result_field” -- used with "transform"; specifies an output column of a function that returns multiple columns at a time. - • + • “params” -- used with "transform"; provides a list of parameters for the function. They may be strings, numbers, or nulls. This example assigns a different column alias: - + { "from":"aou", "select": { @@ -17353,7 +17377,7 @@ FROM In this case, changing the column alias doesn't accomplish much. But if we were joining to the actor.org_unit_type table, which also has a "name" column, we could use different aliases to distinguish them. The following example uses a function to raise a column to upper case: - + { "from":"aou", "select": { @@ -17371,7 +17395,7 @@ FROM actor.org_unit AS "aou" ; Here we take a substring of the name, using the params element to pass parameters: - + { "from":"aou", "select": { @@ -17395,7 +17419,7 @@ FROM which is always the first parameter. They are always passed as strings, i.e. enclosed in quotes, even if the JSON expresses them as numbers. PostgreSQL will ordinarily coerce them to the right type. However if the function name is overloaded to accept different types, PostgreSQL may invoke a function other than the one intended. Finally we call a fictitious function "frobozz" that returns multiple columns, where we want only one of them: - + { "from":"aou", "select": { @@ -17418,7 +17442,7 @@ FROM The frobozz function doesn't actually exist, but json_query doesn't know that. The query won't fail until json_query tries to execute it in the database. 
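Because a JSON query is plain data, client code normally assembles these field specifications programmatically instead of hand-writing JSON strings. A minimal Python sketch that builds and serializes a "transform" field specification like the one above (the alias name is illustrative, and nothing here talks to a database; it only produces the JSON a client would hand to json_query):

```python
import json

# A field specification as described above: select the name column,
# run it through upper(), and give the result a column alias.
# The alias "upper_name" is an illustrative choice, not from the tutorial.
field_spec = {
    "column": "name",
    "alias": "upper_name",
    "transform": "upper",
}

query = {
    "from": "aou",
    "select": {"aou": ["id", field_spec]},
}

# This serialized string is what a client would pass to json_query.
print(json.dumps(query, sort_keys=True))
```

Building the query as a native data structure and serializing it at the last moment avoids quoting and escaping mistakes that creep in when JSON is concatenated by hand.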
- Things You Can't DoThings You Can't Do + Things You Can't DoThings You Can't Do You can do some things in a SELECT clause with raw SQL (with psql, for example) that you can't do with a JSON query. Some of them matter and some of them don't. When you do a JOIN, you can't arrange the selected columns in any arbitrary sequence, because all of the columns from a given table must be grouped together. @@ -17428,7 +17452,7 @@ FROM to them (and it has to be the first parameter). You can't use a CASE expression. Instead, the client application can do the equivalent branching for itself. You can't select a subquery. In raw SQL you can do something like the following: - + SELECT id, name, @@ -17443,10 +17467,10 @@ FROM This contrived example is not very realistic. Normally you would use a JOIN in this case, and that's what you should do in a JSON query. Other cases may not be so easy to solve. - The WHERE ClauseThe WHERE Clause + The WHERE ClauseThe WHERE Clause Most queries need a WHERE clause, as in this simple example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17460,7 +17484,7 @@ FROM In this case, we use a special shortcut for expressing an equality condition. The column name is on the left of the colon, and the value to which we are equating it is on the right. Here's the resulting SQL: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -17473,10 +17497,10 @@ WHERE If you want to compare a column to NULL, put “null” (without quotation marks) to the right of the colon instead of a literal value. The resulting SQL will include “IS NULL” instead of an equals sign. - Other Kinds of ComparisonsOther Kinds of Comparisons + Other Kinds of ComparisonsOther Kinds of Comparisons Here's the same query (which generates the same SQL) without the special shortcut: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17488,7 +17512,7 @@ WHERE We still have an entry whose key is the column name, but this time the associated value is another JSON object. 
It must contain exactly one entry, with the comparison operator on the left of the colon, and the value to be compared on the right. The same syntax works for other kinds of comparison operators. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17498,7 +17522,7 @@ WHERE } ...turns into: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -17509,7 +17533,7 @@ WHERE The condition '“=”:null' turns into IS NULL. Any other operator used with “null” turns into IS NOT NULL. You can use most of the comparison operators recognized by PostgreSQL: - + = <> != < > <= >= ~ ~* !~ !~* @@ -17518,13 +17542,13 @@ WHERE The only ones you can't use are “is distinct from” and “is not distinct from”. - Custom ComparisonsCustom Comparisons + Custom ComparisonsCustom Comparisons Here's a dirty little secret: json_query doesn't really pay much attention to the operator you supply. It merely checks to make sure that the operator doesn't contain any semicolons or white space, in order to prevent certain kinds of SQL injection. It also allows "similar to" as a special exception. As a result, you can slip an operator of your own devising into the SQL, so long as it doesn't contain any semicolons or white space, and doesn't create invalid syntax. Here's a contrived and rather silly example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17534,7 +17558,7 @@ WHERE } ...which results in the following SQL: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -17545,10 +17569,10 @@ WHERE It's hard to come up with a realistic case where this hack would be useful, but it could happen. 
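The validation rule described here is simple enough to model directly. A Python sketch of the documented check (an illustration of the stated rule, not json_query's actual implementation):

```python
def operator_is_acceptable(op):
    """Model the rule described above: an operator must contain no
    semicolons and no white space; "similar to" is the sole exception.
    Note that this rule also excludes "is distinct from" and
    "is not distinct from", since both contain spaces."""
    if op.lower() == "similar to":
        return True
    return ";" not in op and not any(ch.isspace() for ch in op)

print(operator_is_acceptable(">="))                 # True
print(operator_is_acceptable("similar to"))         # True (special exception)
print(operator_is_acceptable("; drop table foo"))   # False
print(operator_is_acceptable("is distinct from"))   # False
```

The space/semicolon rule explains both halves of the behavior above: why home-made operators slip through, and why the two "distinct from" operators cannot be used.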
- Comparing One Column to AnotherComparing One Column to Another + Comparing One Column to AnotherComparing One Column to Another Here's how to put another column on the right hand side of a comparison: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17560,7 +17584,7 @@ WHERE This syntax is similar to the previous examples, except that instead of comparing to a literal value, we compare to an object. This object has only a single entry, whose key is a table alias preceded by a leading plus sign. The associated value is the name of the column. Here's the resulting SQL: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -17576,10 +17600,10 @@ WHERE The leading plus sign, combined with a table alias, can be used in other situations to designate the table to which a column belongs. We shall defer a discussion of this usage to the section on joins. - Testing Boolean ColumnsTesting Boolean Columns + Testing Boolean ColumnsTesting Boolean Columns In SQL, there are several ways to test a boolean column such as actor.org_unit.opac_visible. The most obvious way is to compare it to true or false: - + SELECT id FROM @@ -17589,7 +17613,7 @@ WHERE In a JSON query this approach doesn't work. If you try it, the "= true" test will turn into IS NULL. Don't do that. Instead, use a leading plus sign, as described in the preceding section, to treat the boolean column as a stand-alone condition: - + { "from":"aou", "select": { "aou":[ "id" ] }, @@ -17599,7 +17623,7 @@ WHERE } Result: - + SELECT "aou".id AS "id" FROM @@ -17608,7 +17632,7 @@ WHERE "aou".opac_visible ; If you need to test for falsity, then write a test for truth and negate it with the "-not" operator. 
We will discuss the "-not" operator later, but here's a preview: - + { "from":"aou", "select": { "aou":[ "id" ] }, @@ -17627,7 +17651,7 @@ WHERE NOT ( "aou".opac_visible ); You can also compare a boolean column directly to a more complex condition: - + { "from":"aou", "select": { "aou":[ "id" ] }, @@ -17639,7 +17663,7 @@ WHERE } Here we compare a boolean column, not to a literal value, but to a boolean expression. The resulting SQL looks a little goofy, but it works: - + SELECT "aou".id AS "id" FROM @@ -17652,10 +17676,10 @@ WHERE In this case we compare the boolean column to a single simple condition. However you can include additional complications -- multiple conditions, IN lists, BETWEEN clauses, and other features as described below. - Multiple ConditionsMultiple Conditions + Multiple ConditionsMultiple Conditions If you need multiple conditions, just add them to the "where" object, separated by commas: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17666,7 +17690,7 @@ WHERE } The generated SQL connects the conditions with AND: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -17678,10 +17702,10 @@ WHERE Later we will see how to use OR instead of AND. - Using ArraysUsing Arrays + Using ArraysUsing Arrays Here's a puzzler. Suppose you need two conditions for the same column. How do you code them in the same WHERE clause? For example, suppose you want something like this: - + SELECT id, name @@ -17692,7 +17716,7 @@ WHERE AND parent_ou <> 7; You might try a WHERE clause like this: - + "where": { "parent_ou":{ ">":3 }, "parent_ou":{ "<>":7 } @@ -17700,7 +17724,7 @@ WHERE Nope. Won't work. According to JSON rules, two entries in the same object can't have the same key. After slapping yourself in the forehead, you try something a little smarter: - + "where": { "parent_ou": { ">":3, @@ -17710,7 +17734,7 @@ WHERE Nice try, but that doesn't work either. Maybe it ought to work -- at least it's legal JSON -- but, no. 
Here's what works:
- 
+ 
{
 "from":"aou",
 "select": { "aou":[ "id", "name" ] },
@@ -17721,7 +17745,7 @@
}
 We wrapped the two conditions into two separate JSON objects, and then wrapped those objects together into a JSON array.
 The resulting SQL looks like this:
- 
+ 
SELECT
 "aou".id AS "id",
 "aou".name AS "name"
FROM
@@ -17734,7 +17758,7 @@
AND
 That's not quite what we were hoping for, because the extra parentheses are so ugly. But they're harmless. This will do.
 If you're in the mood, you can use arrays to add as many parentheses as you like, even if there is only one condition inside:
- 
+ 
{
 "from":"aou",
 "select": { "aou":[ "id", "name" ] },
@@ -17747,7 +17771,7 @@
}
 ...yields:
- 
+ 
SELECT
 "aou".id AS "id",
 "aou".name AS "name"
FROM
 actor.org_unit AS "aou"
WHERE
 ( ( ( ( ( ( "aou".parent_ou > 3 ) ) ) ) ) );
 
- How to ORHow to OR
+ How to ORHow to OR
 By default, json_query combines conditions with AND. When you need OR, here's how to do it:
- 
+ 
{
 "from":"aou",
 "select": { "aou":[ "id", "name" ] },
@@ -17775,7 +17799,7 @@
}
 We use “-or” as the key, with the conditions to be ORed in an associated object. The leading minus sign is there to make sure that
 the operator isn't confused with a column name. Later we'll see some other operators with leading minus signs. In a couple of spots we
 even use plus signs.
 Here are the results from the above example:
- 
+ 
SELECT
 "aou".id AS "id",
 "aou".name AS "name"
FROM
 actor.org_unit AS "aou"
WHERE
@@ -17789,7 +17813,7 @@
WHERE
 The conditions paired with “-or” are linked by OR and enclosed in parentheses.
 Here's how to do the same thing using an array, except that it produces an extra layer of parentheses:
- 
+ 
{
 "from":"aou",
 "select": { "aou":[ "id", "name" ] },
@@ -17814,17 +17838,17 @@
WHERE
 It's possible, though not very useful, to have only a single condition subject to the “-or” operator. In that case, the condition
 appears by itself, since there's nothing to OR it to. This trick is another way to add an extraneous layer of parentheses.
 
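When OR conditions are collected at run time, the array form is the natural one to generate, since each condition stays in its own object and duplicate column names cannot collide. A short Python sketch that assembles an “-or” query like the one above (the column values are illustrative):

```python
import json

# Conditions to be ORed, one JSON object per condition. Using the
# array form means repeated column names would also be safe here.
conditions = [{"id": 2}, {"parent_ou": 3}]

query = {
    "from": "aou",
    "select": {"aou": ["id", "name"]},
    "where": {"-or": conditions},
}

# Serialized form a client would pass to json_query.
print(json.dumps(query))
```

Appending each condition to a list and wrapping the list under "-or" at the end sidesteps the duplicate-key problem entirely.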
- Another way to ANDAnother way to AND + Another way to ANDAnother way to AND You can also use the “-and” operator. It works just like “-or”, except that it combines conditions with AND instead of OR. Since AND is the default, we don't usually need a separate operator for it, but it's available. In rare cases, nothing else will do -- you can't include two conditions in the same list because of the duplicate key problem, but you can't combine them with arrays either. In particular, you might need to combine them within an expression that you're comparing to a boolean column (see the subsection above on Testing Boolean Columns). - Negation with NOTNegation with NOT + Negation with NOTNegation with NOT The “-not” operator negates a condition or set of conditions. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17853,11 +17877,11 @@ WHERE In most cases you can achieve the same result by other means. However the “-not” operator is the only way to represent NOT BETWEEN (to be discussed later). - EXISTS with SubqueriesEXISTS with Subqueries + EXISTS with SubqueriesEXISTS with Subqueries Two other operators carry a leading minus sign: “-exists” and its negation “-not-exists”. These operators apply to subqueries, which have the same format as a full query. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17888,7 +17912,7 @@ EXISTS This kind of subquery is of limited use, because its WHERE clause doesn't have anything to do with the main query. It just shuts down the main query altogether if it isn't satisfied. More typical is a correlated subquery, whose WHERE clause refers to a row from the main query. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17904,7 +17928,7 @@ EXISTS } Note the use of “+aou” to qualify the id column in the inner WHERE clause. 
- + SELECT "aou".id AS "id", "aou".name AS "name" @@ -17920,10 +17944,10 @@ WHERE This latter example illustrates the syntax, but in practice, it would probably be more natural to use an IN clause with a subquery (to be discussed later). - BETWEEN ClausesBETWEEN Clauses + BETWEEN ClausesBETWEEN Clauses Here's how to express a BETWEEN clause: - + { "from":"aou", "select": { "aou":[ "id" ] }, @@ -17937,7 +17961,7 @@ WHERE The range bounds must be either numbers or string literals. Although SQL allows them to be null, a null doesn't make sense in this context, because a null never matches anything. Consequently json_query doesn't allow them. The resulting SQL is just what you would expect: - + SELECT "aou".id AS "id" FROM @@ -17946,10 +17970,10 @@ WHERE parent_ou BETWEEN '3' AND '7'; - IN and NOT IN ListsIN and NOT IN Lists + IN and NOT IN ListsIN and NOT IN Lists There are two ways to code an IN list. One way is simply to include the list of values in an array: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17960,7 +17984,7 @@ WHERE As with a BETWEEN clause, the values in the array must be numbers or string literals. Nulls aren't allowed. Here's the resulting SQL, which again is just what you would expect: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -17970,7 +17994,7 @@ WHERE "aou".parent_ou IN (3, 5, 7); The other way is similar to the syntax shown above for a BETWEEN clause, except that the array may include any non-zero number of values: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -17983,11 +18007,11 @@ WHERE For a NOT IN list, you can use the latter format, using the “not in” operator instead of “in”. Alternatively, you can use either format together with the “-not” operator. 
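IN lists are a common place to interpolate run-time values, and the constraints noted above (no nulls, at least one value) are easy to enforce while building the condition. A Python sketch using the first format shown above, with a hypothetical helper name in_condition and illustrative ids:

```python
import json

def in_condition(column, values):
    """Build the array form of an IN condition shown above.
    json_query does not allow nulls in an IN list, so strip them;
    an empty list would make no sense, so refuse it."""
    cleaned = [v for v in values if v is not None]
    if not cleaned:
        raise ValueError("IN list must contain at least one value")
    return {column: cleaned}

query = {
    "from": "aou",
    "select": {"aou": ["id", "name"]},
    "where": in_condition("parent_ou", [3, 5, 7]),
}
print(json.dumps(query["where"]))  # prints {"parent_ou": [3, 5, 7]}
```

Filtering out None before serializing mirrors the rule that nulls never match anything and are rejected by json_query in this context.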
- IN and NOT IN Clauses with SubqueriesIN and NOT IN Clauses with Subqueries
+ IN and NOT IN Clauses with SubqueriesIN and NOT IN Clauses with Subqueries
 For an IN clause with a subquery, the syntax is similar to the second of the two formats for an IN list (see the previous
 subsection). The "in" or "not in" operator is paired, not with an array of values, but with an object representing the subquery.
 For example:
- 
+ 
{
 "from":"aou",
 "select": { "aou":[ "id", "name" ] },
@@ -18003,7 +18027,7 @@
WHERE
}
 The results:
- 
+ 
SELECT
 "aou".id AS "id",
 "aou".name AS "name"
@@ -18023,10 +18047,10 @@
WHERE
 In SQL the subquery may select multiple columns, but in a JSON query it can select only a single column.
 For a NOT IN clause with a subquery, use the “not in” operator instead of “in”.
 
- Comparing to a FunctionComparing to a Function
+ Comparing to a FunctionComparing to a Function
 Here's how to compare a column to a function call:
- 
+ 
{
 "from":"aou",
 "select": { "aou":[ "id", "name" ] },
@@ -18037,7 +18061,7 @@
WHERE
}
 A comparison operator (“>” in this case) is paired with an array. The first entry in the array must be a string giving the name of the
 function. The remaining entries, if any, are the parameters. They may be strings, numbers, or nulls.
 The resulting SQL for this example:
- 
+ 
SELECT
 "aou".id AS "id",
 "aou".name AS "name"
@@ -18049,12 +18073,12 @@
WHERE
 All parameters are passed as quoted strings -- even if, as in this case, they are really numbers.
 This syntax is somewhat limited in that the function parameters must be constants (hence the use of a silly example).
 
- Putting a Function Call on the LeftPutting a Function Call on the Left
+ Putting a Function Call on the LeftPutting a Function Call on the Left
 In the discussion of the SELECT clause, we saw how you could transform the value of a selected column by passing it to a function.
 In the WHERE clause, you can use similar syntax to transform the value of a column before comparing it to something else.
For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -18070,7 +18094,7 @@ WHERE The "transform" entry gives the name of the function that we will use on the left side of the comparison. The "value" entry designates the value on the right side of the comparison. - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -18080,7 +18104,7 @@ WHERE upper("aou".name ) = 'CARTER BRANCH' ; As in the SELECT clause, you can pass literal values or nulls to the function as additional parameters by using an array tagged as “params”: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -18110,11 +18134,11 @@ WHERE complicated than it needs to be. - Putting Function Calls on Both SidesPutting Function Calls on Both Sides + Putting Function Calls on Both SidesPutting Function Calls on Both Sides If you want to compare one function call to another, you can use the same syntax shown in the previous subsection -- except that the “value” entry carries an array instead of a literal value. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -18137,19 +18161,19 @@ WHERE The format for the right side function is similar to what we saw earlier, in the subsection Comparing to a Function. Note that there are two different formats for defining function calls: - • + • For a function call to the left of the comparison, the function name is tagged as “transform”. The first parameter is always the relevant column name; additional parameters, if any, are in an array tagged as "params". The entry for “result_field”, if present, specifies a subcolumn. - • + • For a function call to the right of the comparison, the function name is the first entry in an array, together with any parameters. There's no way to specify a subcolumn. - Comparing a Function to a ConditionComparing a Function to a Condition + Comparing a Function to a ConditionComparing a Function to a Condition So far we have seen two kinds of data for the “value” tag. 
A string or number translates to a literal value, and an array translates to a function call. The third possibility is a JSON object, which translates to a condition. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -18165,7 +18189,7 @@ WHERE The function tagged as “transform” must return boolean, or else json_query will generate invalid SQL. The function used here, “is_prime”, is fictitious. - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -18181,7 +18205,7 @@ WHERE In the example above we compared the boolean to a simple condition. However the expression on the right may include multiple conditions, IN lists, subqueries, and whatever other complications are necessary. - Things You Can't DoThings You Can't Do + Things You Can't DoThings You Can't Do The WHERE clause is subject to some of the same limitations as the SELECT clause. However, in the WHERE clause these limitations are more limiting, because the client program can't compensate by doing some of the work for itself. @@ -18193,13 +18217,13 @@ WHERE As noted earlier: you can't use the comparison operators “is distinct from” or “is not distinct from”. Also as noted earlier: a subquery in an IN clause cannot select more than one column. - JOIN clausesJOIN clauses + JOIN clausesJOIN clauses Until now, our examples have selected from only one table at a time. As a result, the FROM clause has been very simple -- just a single string containing the class name of the relevant table. When the FROM clause joins multiple tables, the corresponding JSON naturally gets more complicated. SQL provides two ways to define a join. One way is to list both tables in the FROM clause, and put the join conditions in the WHERE clause: - + SELECT aou.id, aout.name @@ -18210,7 +18234,7 @@ WHERE aout.id = aou.ou_type; The other way is to use an explicit JOIN clause: - + SELECT aou.id, aout.name @@ -18220,7 +18244,7 @@ FROM ON ( aout.id = aou.ou_type ); JSON queries use only the second of these methods. 
The following example expresses the same query in JSON: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -18234,7 +18258,7 @@ FROM that information consists entirely of a string containing the class name of the other table. So where is the join condition? It's in the IDL. Upon reading the IDL, json_query knows that actor.org_unit has a foreign key pointing to actor.org_unit_type, and builds a join condition accordingly: - + SELECT "aou".id AS "id", "aout".name AS "name" @@ -18244,7 +18268,7 @@ FROM ON ( "aout".id = "aou".ou_type ) ; In this case the core table is the child table, and the joined table is the parent table. We could just as well have written it the other way around: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -18261,14 +18285,14 @@ FROM ON ( "aou".ou_type = "aout".id ) ; - Specifying The Join Columns ExplicitlySpecifying The Join Columns Explicitly + Specifying The Join Columns ExplicitlySpecifying The Join Columns Explicitly While it's convenient to let json_query pick the join columns, it doesn't always work. For example, the actor.org_unit table has four different address ids, for four different kinds of addresses. Each of them is a foreign key to the actor.org_address table. Json_query can't guess which one you want if you don't tell it. (Actually it will try to guess. It will pick the first matching link that it finds in the IDL, which may or may not be the one you want.) Here's how to define exactly which columns you want for the join: - + { "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, "from": { @@ -18285,15 +18309,15 @@ FROM class name, and the associated data is another layer of JSON object containing the attributes of the join. Later we'll encounter other kinds of join attributes. For now, the only attributes that we're looking at are the ones that identify the join columns: “fkey” and “field”. 
The hard part is remembering which is which: - • + • “fkey” identifies the join column from the left table; - • + • “field” identifies the join column from the right table. When there are only two tables involved, the core table is on the left, and the non-core table is on the right. In more complex queries neither table may be the core table. Here is the result of the preceding JSON: - + SELECT "aou".id AS "id", "aoa".street1 AS "street1" @@ -18303,7 +18327,7 @@ FROM ON ( "aoa".id = "aou".holds_address ) ; In this example the child table is on the left and the parent table is on the right. We can swap the tables if we swap the join columns as well: - + { "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, "from": { @@ -18327,11 +18351,11 @@ FROM When you specify both of the join columns, json_query assumes that you know what you're doing. It doesn't check the IDL to confirm that the join makes sense. The burden is on you to avoid absurdities. - Specifying Only One Join ColumnSpecifying Only One Join Column + Specifying Only One Join ColumnSpecifying Only One Join Column We just saw how to specify both ends of a join. It turns out that there's a shortcut -- most of the time you only need to specify one end. Consider the following variation on the previous example: - + { "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, "from": { @@ -18350,11 +18374,11 @@ FROM column of actor.org_address. As noted earlier, there are four different foreign keys from actor.org_unit to actor.org_address, and json_query would have no way to guess which one we wanted. - Joining to Multiple TablesJoining to Multiple Tables + Joining to Multiple TablesJoining to Multiple Tables So far we have joined only two tables at a time. What if we need to join one table to two different tables? 
Here's an example: - + { "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, "from": { @@ -18372,7 +18396,7 @@ FROM For the second join, to actor.org_address, we have to specify at least the join column in the child table, as discussed earlier. We could also have specified the join column from the parent table, but we don't have to, so we didn't. Here is the resulting SQL: - + SELECT "aou".id AS "id", "aout".depth AS "depth", @@ -18387,11 +18411,11 @@ FROM Since there can be only one core table, the outermost object in the FROM clause can have only one entry, whose key is the class name of the core table. The next level has one entry for every table that's joined to the core table. - Nested JoinsNested Joins + Nested JoinsNested Joins Let's look at that last query again. It joins three tables, and the core table is the one in the middle. Can we make one of the end tables the core table instead? Yes, we can: - + { "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, "from": { @@ -18408,7 +18432,7 @@ FROM The “join” attribute introduces another level of join. In this case "aou" is the left table for the nested join, and the right table for the original join. Here are the results: - + SELECT "aou".id AS "id", "aout".depth AS "depth", @@ -18421,11 +18445,11 @@ FROM ON ( "aout".id = "aou".ou_type ) ; - Outer JoinsOuter Joins + Outer JoinsOuter Joins By default, json_query constructs an inner join. 
If you need an outer join, you can add the join type as an attribute of the join: Yes, we can: - + { "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, "from": { @@ -18439,7 +18463,7 @@ FROM } Here is the resulting SQL for this example: - + SELECT "aou".id AS "id", "aoa".street1 AS "street1" @@ -18449,12 +18473,12 @@ FROM ON ( "aou".mailing_address = "aoa".id ) ; - Referring to Joined Tables in the WHERE ClauseReferring to Joined Tables in the WHERE Clause + Referring to Joined Tables in the WHERE ClauseReferring to Joined Tables in the WHERE Clause In the WHERE clause of the generated SQL, every column name is qualified by a table alias, which is always the corresponding class name. If a column belongs to the core table, this qualification happens by default. If it belongs to a joined table, the JSON must specify what class name to use for an alias. For example: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -18467,7 +18491,7 @@ FROM Note the peculiar operator “+aou” -- a plus sign followed by the relevant class name. This operator tells json_query to apply the specified class to the condition that follows. The result: - + SELECT "aou".id AS "id", "aout".name AS "name" @@ -18479,7 +18503,7 @@ WHERE ( "aou".parent_ou = 2 ); The plus-class operator may apply to multiple conditions: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -18510,7 +18534,7 @@ WHERE special gyrations to apply the right table alias. In a more realistic case, however, you might need to apply conditions to both tables. Just swapping the tables wouldn't solve the problem. You can also use a plus-class operator to compare columns from two different tables: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -18536,10 +18560,10 @@ WHERE Please don't expect that query to make any sense. It doesn't. But it illustrates the syntax. 
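The plus-class conditions above are plain JSON, so any client can assemble them programmatically. The following is a minimal sketch in Python (an illustration only — json_query accepts the JSON from any producer) of building a WHERE clause that uses the plus-class operator to qualify a condition with the joined class; the classes and the literal value mirror the example above:

```python
import json

# Sketch: a query whose WHERE clause applies the "+aou" plus-class
# operator, telling json_query to qualify the enclosed condition
# with the "aou" table alias rather than the default class.
query = {
    "select": {"aou": ["id"], "aout": ["name"]},
    "from": {"aou": "aout"},
    "where": {
        "+aou": {"parent_ou": 2}   # rendered as ( "aou".parent_ou = 2 )
    },
}

# Serialize for submission to the json_query service.
print(json.dumps(query, indent=2))
```

Building the object in code rather than as a literal string avoids quoting mistakes in the nested braces, which are easy to make by hand.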
- Join FiltersJoin Filters + Join FiltersJoin Filters While the above approach certainly works, the special syntax needed is goofy and awkward. A somewhat cleaner solution is to include a condition in the JOIN clause: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -18563,7 +18587,7 @@ FROM By default, json_query uses AND to combine the “filter” condition with the original join condition. If you need OR, you can use the “filter_op” attribute to say so: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -18592,18 +18616,18 @@ FROM Remember, though, that it all gets combined with the the original join condition with an AND, or with an OR if you so specify. If you're not careful, the result may be a confusing mixture of AND and OR at the same level. - Joining to a SubqueryJoining to a Subquery + Joining to a SubqueryJoining to a Subquery In SQL you can put a subquery in a FROM clause, and select from it as if it were a table. A JSON query has no way to do that directly. The IDL, however, can define a class as a subquery instead of as a table. When you SELECT from it, json_query inserts the corresponding subquery into the FROM clause. For example: - + { "select":{ "iatc":[ "id", "dest", "copy_status" ] }, "from": "iatc" } There's nothing special-looking about this JSON, but json_query expands it as follows: - + SELECT "iatc".id AS "id", "iatc".dest AS "dest", @@ -18624,13 +18648,13 @@ FROM The “iatc” class is like a view, except that it's defined in the IDL instead of the database. In this case it provides a way to do a join that would otherwise be impossible through a JSON query, because it joins the same table in two different ways (see the next subsection). - Things You Can't DoThings You Can't Do + Things You Can't DoThings You Can't Do In a JOIN, as with other SQL constructs, there are some things that you can't do with a JSON query. 
In particular, you can't specify a table alias, because the table alias is always the class name. As a result: - • + • You can't join a table to itself. For example, you can't join actor.org_unit to itself in order to select the name of the parent for every org_unit. - • + • You can't join to the same table in more than one way. For example, you can't join actor.org_unit to actor.org_address through four different foreign keys, to get four kinds of addresses in a single query. @@ -18642,7 +18666,7 @@ FROM condition, or to omit any join condition in order to obtain a Cartesian product. If necessary, you can devise such unconventional joins by combining the normal join conditions with join filters. For example, here's how to get a Cartesian product: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -18672,11 +18696,11 @@ FROM Yes, it's ugly, but at least you're not likely to do it by accident. - Selecting from FunctionsSelecting from Functions + Selecting from FunctionsSelecting from Functions In SQL, you can put a function call in the FROM clause. The function may return multiple columns and multiple rows. Within the query, the function behaves like a table. A JSON query can also select from a function: - + { "from": [ "actor.org_unit_ancestors", 5 ] } @@ -18684,7 +18708,7 @@ FROM The data associated with “from” is an array instead of a string or an object. The first element in the array specifies the name of the function. Subsequent elements, if any, supply the parameters of the function; they must be literal values or nulls. Here is the resulting query: - + SELECT * FROM actor.org_unit_ancestors( '5' ) AS "actor.org_unit_ancestors" ; @@ -18693,10 +18717,10 @@ FROM another function. If you try to supply a SELECT list or a WHERE clause, json_query will ignore it. The generated query will always select every column, via a wild card asterisk, from every row. 
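Because the FROM-as-array form is so compact, it is easy to generate from client code. Here is a small sketch (Python used purely as a JSON producer) of the function-call FROM clause shown above, with the function name first and its literal parameter following:

```python
import json

# Sketch: selecting from a stored function instead of a table.
# The first array element names the function; subsequent elements
# are its literal parameters, per the description above.
query = {"from": ["actor.org_unit_ancestors", 5]}

# json_query renders this as:
#   SELECT * FROM actor.org_unit_ancestors( '5' ) ...
print(json.dumps(query))
```

Remember the limitation noted above: any "select" or "where" entries added alongside this "from" array are ignored, so there is no point in constructing them.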
- The ORDER BY ClauseThe ORDER BY Clause + The ORDER BY ClauseThe ORDER BY Clause In most cases you can encode an ORDER BY clause as either an array or an object. Let's take a simple example and try it both ways. First the array: - + { "select":{ "aou":[ "name" ] }, "from": "aou", @@ -18706,7 +18730,7 @@ FROM } Now the object: - + { "select":{ "aou":[ "name" ] }, "from": "aou", @@ -18716,7 +18740,7 @@ FROM } The results are identical from either version: - + SELECT "aou".name AS "name" FROM @@ -18727,17 +18751,17 @@ ORDER BY The array format is more verbose, but as we shall see, it is also more flexible. It can do anything the object format can do, plus some things that the object format can't do. - ORDER BY as an ArrayORDER BY as an Array + ORDER BY as an ArrayORDER BY as an Array In the array format, each element of the array is an object defining one of the sort fields. Each such object must include at least two tags: - • + • The “class” tag provides the name of the class, which must be either the core class or a joined class. - • + • The “field” tag provides the field name, corresponding to one of the columns of the class. If you want to sort by multiple fields, just include a separate object for each field. If you want to sort a field in descending order, add a “direction” tag: - + { "select":{ "aou":[ "name" ] }, "from": "aou", @@ -18759,7 +18783,7 @@ ORDER BY upper("aou".name ); If you need additional parameters for the function, you can use the “params” tag to pass them: - + { "select":{ "aou":[ "name" ] }, "from": "aou", @@ -18774,7 +18798,7 @@ ORDER BY } The additional parameters appear as elements in an array. They may be numbers, strings, or nulls. - + SELECT "aou".name AS "name" FROM @@ -18786,11 +18810,11 @@ ORDER BY If the function returns multiple columns, you can use the “result_field” tag to indicate which one you want (not shown). 
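To tie the array-format tags together, here is a hedged sketch (Python as a JSON producer; the combination of tags is illustrative, mirroring the examples above) of a single sort field that uses "class", "field", "direction", and a transforming function with extra parameters:

```python
import json

# Sketch: ORDER BY in the array format, one object per sort field.
query = {
    "select": {"aou": ["name"]},
    "from": "aou",
    "order_by": [
        {
            "class": "aou",        # core or joined class
            "field": "name",       # column within that class
            "transform": "substr", # function applied to the sort key
            "params": [1, 8],      # extra parameters for the function
            "direction": "desc",   # anything starting with D/d descends
        }
    ],
}

print(json.dumps(query, indent=2))
```

Each additional sort field is simply another object appended to the "order_by" array, in sort-priority order.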
- ORDER BY as an ObjectORDER BY as an Object + ORDER BY as an ObjectORDER BY as an Object When you encode the ORDER BY clause as an object, the keys of the object are class names. Each class must be either the core class or a joined class. The data for each class can be either an array or another layer of object. Here's an example with one of each: - + { "select":{ "aout":"id", "aou":[ "name" ] }, "from": { "aou":"aout" }, @@ -18807,7 +18831,7 @@ ORDER BY In this example, we use the “direction"” tag to specify that the name field be sorted in descending order. This tag works the same way here as described earlier. If the associated string starts with "D" or "d", the sort will be descending; otherwise it will be ascending. Here is the resulting SQL: - + SELECT "aou".name AS "name" FROM @@ -18818,7 +18842,7 @@ ORDER BY "aout".id, "aou".name DESC; - + { "select":{ "aou":[ "name", "id" ] }, "from": "aou", @@ -18838,7 +18862,7 @@ ORDER BY substr("aou".name,'1','8' ); - Things You Can't DoThings You Can't Do + Things You Can't DoThings You Can't Do If you encode the ORDER BY clause as an object, you may encounter a couple of restrictions. Because the key of such an object is the class name, all the fields from a given class must be grouped together. You can't sort by a column from one table, followed by @@ -18848,7 +18872,7 @@ ORDER BY there are situations where it can be useful, provided that the column is passed to a transforming function. For example, you might want a case-insensitive sort, except that for any given letter you want lower case to sort first. For example, you want “diBona” to sort before “Dibona”. Here's a way to do that, coding the ORDER BY clause as an array: - + { "select":{ "au":[ "family_name", "id" ] }, "from": "au", @@ -18868,12 +18892,12 @@ ORDER BY Such a sort is not possible where the ORDER BY clause is coded as an object. 
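The two-key sort described above — a case-insensitive comparison first, then the raw column as a tiebreaker — can be sketched as follows (Python as a JSON producer; the structure mirrors the "diBona"/"Dibona" example):

```python
import json

# Sketch: sorting the same column twice in the array format,
# which the object format cannot express.
query = {
    "select": {"au": ["family_name", "id"]},
    "from": "au",
    "order_by": [
        # Primary key: case-insensitive comparison via upper().
        {"class": "au", "field": "family_name", "transform": "upper"},
        # Tiebreaker: the untransformed column, so that names that
        # match case-insensitively are ordered by the raw value.
        {"class": "au", "field": "family_name"},
    ],
}

print(json.dumps(query, indent=2))
```

Whether the tiebreaker puts lower case first depends on the database's collation, as with any raw text comparison.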
- The GROUP BY ClauseThe GROUP BY Clause + The GROUP BY ClauseThe GROUP BY Clause A JSON query has no separate construct to define a GROUP BY clause. Instead, the necessary information is distributed across the SELECT clause. However, the way it works is a bit backwards from what you might expect, so pay attention. Here's an example: - + { "select": { "aou": [ @@ -18886,7 +18910,7 @@ ORDER BY The “transform” tag is there just to give us an excuse to do a GROUP BY. What's important to notice is the “aggregate” tag. Here's the resulting SQL: - + SELECT "aou".parent_ou AS "parent_ou", max("aou".name ) AS "name" @@ -18902,12 +18926,12 @@ GROUP BY it appears only with a value of false, then there is no GROUP BY clause. If you really want to include every field in the GROUP BY clause, don't use “aggregate”. Use the “distinct” tag, as described in the next section. - The DISTINCT ClauseThe DISTINCT Clause + The DISTINCT ClauseThe DISTINCT Clause JSON queries don't generate DISTINCT clauses. However, they can generate GROUP BY clauses that include every item from the SELECT clause. The effect is the same as applying DISTINCT to the entire SELECT clause. For example: - + { "select": { "aou": [ @@ -18920,7 +18944,7 @@ GROUP BY } Note the “distinct” entry at the top level of the query object, with a value of “true”. - + SELECT "aou".parent_ou AS "parent_ou", "aou".ou_type AS "ou_type" @@ -18931,12 +18955,12 @@ GROUP BY The generated GROUP BY clause references every column in the SELECT clause by number. - The HAVING ClauseThe HAVING Clause + The HAVING ClauseThe HAVING Clause For a HAVING clause, add a “having” entry at the top level of the query object. For the associated data, you can use all the same syntax that you can use for a WHERE clause. Here's a simple example: - + { "select": { "aou": [ @@ -18960,7 +18984,7 @@ GROUP BY } We use the “aggregate” tag in the SELECT clause to give us a GROUP BY to go with the HAVING. 
Results: - + SELECT "aou".parent_ou AS "parent_ou", count("aou".id ) AS "id_count" @@ -18974,10 +18998,10 @@ HAVING In raw SQL we could have referred to “count( 1 )”. But since JSON queries cannot encode arbitrary expressions, we applied the count function to a column that cannot be null. - The LIMIT and OFFSET ClausesThe LIMIT and OFFSET Clauses + The LIMIT and OFFSET ClausesThe LIMIT and OFFSET Clauses To add an LIMIT or OFFSET clause, add an entry to the top level of a query object. For example: - + { "select": { "aou": [ "id", "name" ] @@ -18990,7 +19014,7 @@ HAVING The data associated with “offset” and “limit” may be either a number or a string, but if it's a string, it should have a number inside. Result: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -19011,17 +19035,17 @@ OFFSET 7; Using SuperCatUsing SuperCat> - + SuperCat allows Evergreen record and information retrieval from a web browser using a based on a number of open web standards and formats. The following record types are supported: - •isbn•metarecord•record - Return a list of ISBNs for related recordsReturn a list of ISBNs for related records + •isbn•metarecord•record + Return a list of ISBNs for related recordsReturn a list of ISBNs for related records - + Similar to the OCLC xISBN service, Evergreen can return a list of related records based on its oISBN algorithm: http://<hostname>/opac/extras/osibn/<ISBN> For example, http://dev.gapines.org/opac/extras/oisbn/0439136350 returns: - + <idlist metarecord="302670"> <isbn record="250060">0790783525</isbn> <isbn record="20717">0736691316</isbn> @@ -19038,13 +19062,13 @@ OFFSET 7; </idlist> - Return recordsReturn records + Return recordsReturn records - + SuperCat can return records and metarecords in many different formats (see the section called “Supported formats” http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<bib-ID> For example, http://dev.gapines.org/opac/extras/supercat/retrieve/mods/record/555 returns: - + 
<mods:modsCollection version="3.0"> <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/ http://www.loc.gov/standards/mods/mods.xsd"> <titleInfo> @@ -19084,9 +19108,9 @@ OFFSET 7; </mods:modsCollection> - Return a feed of recently edited or created recordsReturn a feed of recently edited or created records + Return a feed of recently edited or created recordsReturn a feed of recently edited or created records - + SuperCat can return feeds of recently edited or created authority and bibliographic records: http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/[authority|biblio]/[import|edit]/<limit>/<date> The limit records imported or edited following the supplied date will be returned. If you do not supply a date, then the most recent limit records will be returned. @@ -19094,12 +19118,12 @@ OFFSET 7; Feed-type can be one of atom, html, htmlholdings, marcxml, mods, mods3, or rss2. For example, http://dev.gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01 - Browse recordsBrowse records + Browse recordsBrowse records SuperCat can browse records in HTML and XML formats: http://<hostname>/opac/extras/supercat/browse/<format>/call_number/<org_unit>/<call_number> For example, http://dev.gapines.org/opac/extras/browse/xml/call_number/-/GV returns: - + <hold:volumes xmlns:hold='http://open-ils.org/spec/holdings/v1'> <hold:volume id="tag:open-ils.org,2008:asset-call_number/130607" lib="FRRLS-FA" label="GUTCHEON BETH"> <act:owning_lib id="tag:open-ils.org,2008:actor-org_unit/111" name="Fayette County Public Library"/> @@ -19124,11 +19148,11 @@ OFFSET 7; Supported formatsSupported formats - + SuperCat maintains a list of supported formats for records and metarecords: http://<hostname>/opac/extras/supercat/formats/<record-type> For example, http://dev.gapines.org/opac/extras/supercat/formats/record returns: - + <formats> <format> <name>opac</name> @@ -19144,17 +19168,17 @@ OFFSET 7; Adding new SuperCat FormatsAdding new SuperCat Formats - + Adding 
SuperCat formats requires experience editing XSL files and familiarity with XML and Perl. SuperCat web services are based on the OpenSRF service, >open-ils.supercat. Developers are able to add new formats by adding the xsl stylesheet for the format. By default, the location of the stylesheets is /openils/var/xsl/. You must also add the feed to the perl modules openils/lib/perl5/OpenILS/WWW/SuperCat/feed.pm and openils/lib/perl5/OpenILS/WWW/SuperCat.pm. An Evergreen restart is required for the feed to be activated. - Use an existing xsl stylesheet and Perl module entry as a template for your new format. + Use an existing xsl stylesheet and Perl module entry as a template for your new format. Customizing SuperCat FormatsCustomizing SuperCat Formats - + Editing SuperCat formats requires experience editing XSL files and familiarity with XML.. It is possible to customize existing supercat formats using XSL stylesheets. You are able to change the content to be displayed and the design of the pages. In order to change the display of a specific format, edit the corresponding XSL file(s) for the particular format. The default location for the XSL stylesheets is @@ -19170,7 +19194,7 @@ OFFSET 7; Report errors in this documentation using Launchpad. Part VIII. Appendices Report any errors in this documentation using Launchpad. - Part VIII. AppendicesTable of ContentsA. Evergreen Installation Checklist 45. Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema reporter Schema search Schema serial Schema stats Schema vandelay B. About this Documentation About the Documentation Interest Group (DIG) How to Participate C. Getting More Information Glossary Index + Part VIII. AppendicesTable of ContentsA. Evergreen Installation Checklist 45. 
Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema reporter Schema search Schema serial Schema stats Schema vandelay B. About this Documentation About the Documentation Interest Group (DIG) How to Participate C. Getting More Information Glossary Index Appendix A. Evergreen Installation ChecklistAppendix A. Evergreen Installation Checklist @@ -19179,63 +19203,63 @@ OFFSET 7; Report any errors in this documentation using Launchpad. Appendix A. Evergreen Installation ChecklistAppendix A. Evergreen Installation ChecklistAbstractThis appendix is a checklist of things to do to istall and configure Evergreen. It will refer to the necessary chapter with the specific instructions for each item. - a. - Install OpenSRF - b. + a. + Install OpenSRF + b. Install Evergreen server software - c. - Install Evergreen staff client - d. + c. + Install Evergreen staff client + d. Establish a back up strategy for Evergreen data and files - e. - Configure PostgreSQL for better performance - f. - Configure Evergreen error logging - g. - Set up organizational unit types - h. - Set up organizational units - i. - Customize localization and languages (optional) - j. - Add circ modifiers - k. - Configure copy statuses - l. - Add cataloguing templates - m. - Add user groups and assign permissions - n. + e. + Configure PostgreSQL for better performance + f. + Configure Evergreen error logging + g. + Set up organizational unit types + h. + Set up organizational units + i. + Customize localization and languages (optional) + j. + Add circ modifiers + k. + Configure copy statuses + l. + Add cataloguing templates + m. + Add user groups and assign permissions + n. Adjust various Local Administration Settings - o. 
- Adjust circulation policies and penalty threshholds for groups - p. + o. + Adjust circulation policies and penalty threshholds for groups + p. Add staff users - q. - Customize OPAC as needed - r. - Import data - s. + q. + Customize OPAC as needed + r. + Import data + s. Start the reporter service and set up reports - + - t. - Set up email notifications for holds and overdue items - u. - Set up action triggers - v. - Set up Z39.50 server (optional) - w. - Adjust search relevancy settings if required (optional) - x. - Install SIP server (optional) - for communications with automated devices such as self check stations, autmated sorters and other devices using SIP + t. + Set up email notifications for holds and overdue items + u. + Set up action triggers + v. + Set up Z39.50 server (optional) + w. + Adjust search relevancy settings if required (optional) + x. + Install SIP server (optional) - for communications with automated devices such as self check stations, autmated sorters and other devices using SIP Chapter 45. Database SchemaChapter 45. Database Schema Report errors in this documentation using Launchpad. Chapter 45. Database Schema Report any errors in this documentation using Launchpad. - Chapter 45. Database SchemaChapter 45. Database SchemaThis is the schema for the Evergreen database.Schema acqSchema acqcurrency_typecurrency_typeFieldData TypeConstraints and Referencescodetext + Chapter 45. Database SchemaChapter 45. 
Database SchemaThis is the schema for the Evergreen database.Schema acqSchema acqcurrency_typecurrency_typeFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -19258,12 +19282,12 @@ OFFSET 7; Tables referencing acq.exchange_rate via Foreign Key Constraints - •acq.exchange_rate•acq.fund•acq.fund_debit•acq.funding_source•acq.provider + •acq.exchange_rate•acq.fund•acq.fund_debit•acq.funding_source•acq.provider - distribution_formuladistribution_formulaFieldData TypeConstraints and Referencesidserial + distribution_formuladistribution_formulaFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19325,12 +19349,12 @@ OFFSET 7; Tables referencing acq.distribution_formula_entry via Foreign Key Constraints - •acq.distribution_formula_entry + •acq.distribution_formula_entry - distribution_formula_entrydistribution_formula_entryFieldData TypeConstraints and Referencesidserial + distribution_formula_entrydistribution_formula_entryFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19413,13 +19437,13 @@ OFFSET 7; - Constraints on distribution_formula_entryacqdfe_must_be_somewhereCHECK (((owning_lib IS NOT NULL) OR (location IS NOT NULL))) + Constraints on distribution_formula_entryacqdfe_must_be_somewhereCHECK (((owning_lib IS NOT NULL) OR (location IS NOT NULL))) - exchange_rateexchange_rateFieldData TypeConstraints and Referencesidserial + exchange_rateexchange_rateFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19487,7 +19511,7 @@ OFFSET 7; - fiscal_calendarfiscal_calendarFieldData TypeConstraints and Referencesidserial + fiscal_calendarfiscal_calendarFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19512,12 +19536,12 @@ OFFSET 7; Tables referencing acq.fiscal_year via Foreign Key Constraints - •acq.fiscal_year + •acq.fiscal_year - fiscal_yearfiscal_yearFieldData TypeConstraints and Referencesidserial + fiscal_yearfiscal_yearFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19601,7 +19625,7 
@@ OFFSET 7; - fundfundFieldData TypeConstraints and Referencesidserial + fundfundFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19716,12 +19740,12 @@ OFFSET 7; Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.fund_debit•acq.fund_tag_map•acq.lineitem_detail + •acq.fund_allocation•acq.fund_debit•acq.fund_tag_map•acq.lineitem_detail - fund_allocationfund_allocationFieldData TypeConstraints and Referencesidserial + fund_allocationfund_allocationFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19802,13 +19826,13 @@ OFFSET 7; - Constraints on fund_allocationallocation_amount_or_percentCHECK ((((percent IS NULL) AND (amount IS NOT NULL)) OR ((percent IS NOT NULL) AND (amount IS NULL))))fund_allocation_percent_checkCHECK (((percent IS NULL) OR ((percent >= 0.0) AND (percent <= 100.0)))) + Constraints on fund_allocationallocation_amount_or_percentCHECK ((((percent IS NULL) AND (amount IS NOT NULL)) OR ((percent IS NOT NULL) AND (amount IS NULL))))fund_allocation_percent_checkCHECK (((percent IS NULL) OR ((percent >= 0.0) AND (percent <= 100.0)))) - fund_allocation_totalfund_allocation_totalFieldData TypeConstraints and Referencesfundinteger + fund_allocation_totalfund_allocation_totalFieldData TypeConstraints and Referencesfundinteger @@ -19823,7 +19847,7 @@ OFFSET 7; - fund_combined_balancefund_combined_balanceFieldData TypeConstraints and Referencesfundinteger + fund_combined_balancefund_combined_balanceFieldData TypeConstraints and Referencesfundinteger @@ -19838,7 +19862,7 @@ OFFSET 7; - fund_debitfund_debitFieldData TypeConstraints and Referencesidserial + fund_debitfund_debitFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19935,12 +19959,12 @@ OFFSET 7; Tables referencing acq.lineitem_detail via Foreign Key Constraints - •acq.lineitem_detail + •acq.lineitem_detail - fund_debit_totalfund_debit_totalFieldData TypeConstraints and Referencesfundinteger + 
fund_debit_totalfund_debit_totalFieldData TypeConstraints and Referencesfundinteger @@ -19960,7 +19984,7 @@ OFFSET 7; - fund_encumbrance_totalfund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger + fund_encumbrance_totalfund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger @@ -19975,7 +19999,7 @@ OFFSET 7; - fund_spent_balancefund_spent_balanceFieldData TypeConstraints and Referencesfundinteger + fund_spent_balancefund_spent_balanceFieldData TypeConstraints and Referencesfundinteger @@ -19990,7 +20014,7 @@ OFFSET 7; - fund_spent_totalfund_spent_totalFieldData TypeConstraints and Referencesfundinteger + fund_spent_totalfund_spent_totalFieldData TypeConstraints and Referencesfundinteger @@ -20005,7 +20029,7 @@ OFFSET 7; - fund_tagfund_tagFieldData TypeConstraints and Referencesidserial + fund_tagfund_tagFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20060,12 +20084,12 @@ OFFSET 7; Tables referencing acq.fund_tag_map via Foreign Key Constraints - •acq.fund_tag_map + •acq.fund_tag_map - fund_tag_mapfund_tag_mapFieldData TypeConstraints and Referencesidserial + fund_tag_mapfund_tag_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20124,7 +20148,7 @@ OFFSET 7; - funding_sourcefunding_sourceFieldData TypeConstraints and Referencesidserial + funding_sourcefunding_sourceFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20205,12 +20229,12 @@ OFFSET 7; Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.funding_source_credit + •acq.fund_allocation•acq.funding_source_credit - funding_source_allocation_totalfunding_source_allocation_totalFieldData TypeConstraints and Referencesfunding_sourceinteger + funding_source_allocation_totalfunding_source_allocation_totalFieldData TypeConstraints and Referencesfunding_sourceinteger @@ -20225,7 +20249,7 @@ OFFSET 7; - funding_source_balancefunding_source_balanceFieldData TypeConstraints and 
Referencesfunding_sourceinteger + funding_source_balancefunding_source_balanceFieldData TypeConstraints and Referencesfunding_sourceinteger @@ -20240,7 +20264,7 @@ OFFSET 7; - funding_source_creditfunding_source_creditFieldData TypeConstraints and Referencesidserial + funding_source_creditfunding_source_creditFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20283,7 +20307,7 @@ OFFSET 7; - funding_source_credit_totalfunding_source_credit_totalFieldData TypeConstraints and Referencesfunding_sourceinteger + funding_source_credit_totalfunding_source_credit_totalFieldData TypeConstraints and Referencesfunding_sourceinteger @@ -20298,7 +20322,7 @@ OFFSET 7; - lineitemlineitemFieldData TypeConstraints and Referencesidbigserial + lineitemlineitemFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -20454,19 +20478,19 @@ OFFSET 7; - Constraints on lineitempicklist_or_poCHECK (((picklist IS NOT NULL) OR (purchase_order IS NOT NULL))) + Constraints on lineitempicklist_or_poCHECK (((picklist IS NOT NULL) OR (purchase_order IS NOT NULL))) Tables referencing acq.lineitem_attr via Foreign Key Constraints - •acq.lineitem_attr•acq.lineitem_detail•acq.lineitem_note + •acq.lineitem_attr•acq.lineitem_detail•acq.lineitem_note - lineitem_attrlineitem_attrFieldData TypeConstraints and Referencesidbigserial + lineitem_attrlineitem_attrFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -20525,7 +20549,7 @@ OFFSET 7; - lineitem_attr_definitionlineitem_attr_definitionFieldData TypeConstraints and Referencesidbigserial + lineitem_attr_definitionlineitem_attr_definitionFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -20574,7 +20598,7 @@ OFFSET 7; - lineitem_detaillineitem_detailFieldData TypeConstraints and Referencesidbigserial + lineitem_detaillineitem_detailFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -20702,7 +20726,7 @@ OFFSET 7; - 
lineitem_generated_attr_definitionlineitem_generated_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_generated_attr_definitionlineitem_generated_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -20760,7 +20784,7 @@ OFFSET 7; - lineitem_local_attr_definitionlineitem_local_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_local_attr_definitionlineitem_local_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -20811,7 +20835,7 @@ OFFSET 7; - lineitem_marc_attr_definitionlineitem_marc_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_marc_attr_definitionlineitem_marc_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -20869,7 +20893,7 @@ OFFSET 7; - lineitem_notelineitem_noteFieldData TypeConstraints and Referencesidserial + lineitem_notelineitem_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20953,7 +20977,7 @@ OFFSET 7; - lineitem_provider_attr_definitionlineitem_provider_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_provider_attr_definitionlineitem_provider_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -21025,7 +21049,7 @@ OFFSET 7; - lineitem_usr_attr_definitionlineitem_usr_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_usr_attr_definitionlineitem_usr_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -21090,7 +21114,7 @@ OFFSET 7; - picklistpicklistFieldData TypeConstraints and Referencesidserial + picklistpicklistFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -21205,12 +21229,12 @@ OFFSET 7; Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem + •acq.lineitem - po_notepo_noteFieldData TypeConstraints and Referencesidserial + po_notepo_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -21294,7 +21318,7 @@ 
OFFSET 7; - providerproviderFieldData TypeConstraints and Referencesidserial + providerproviderFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -21391,12 +21415,12 @@ OFFSET 7; Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem•acq.lineitem_provider_attr_definition•acq.provider_address•acq.provider_contact•acq.provider_holding_subfield_map•acq.purchase_order + •acq.lineitem•acq.lineitem_provider_attr_definition•acq.provider_address•acq.provider_contact•acq.provider_holding_subfield_map•acq.purchase_order - provider_addressprovider_addressFieldData TypeConstraints and Referencesidserial + provider_addressprovider_addressFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -21486,7 +21510,7 @@ OFFSET 7; - provider_contactprovider_contactFieldData TypeConstraints and Referencesidserial + provider_contactprovider_contactFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -21538,12 +21562,12 @@ OFFSET 7; Tables referencing acq.provider_contact_address via Foreign Key Constraints - •acq.provider_contact_address + •acq.provider_contact_address - provider_contact_addressprovider_contact_addressFieldData TypeConstraints and Referencesidserial + provider_contact_addressprovider_contact_addressFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -21633,7 +21657,7 @@ OFFSET 7; - provider_holding_subfield_mapprovider_holding_subfield_mapFieldData TypeConstraints and Referencesidserial + provider_holding_subfield_mapprovider_holding_subfield_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -21694,7 +21718,7 @@ OFFSET 7; - purchase_orderpurchase_orderFieldData TypeConstraints and Referencesidserial + purchase_orderpurchase_orderFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -21821,12 +21845,12 @@ OFFSET 7; Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem•acq.po_note + •acq.lineitem•acq.po_note - Schema actionSchema 
actionaged_circulationaged_circulationFieldData TypeConstraints and Referencesusr_post_codetext + Schema actionSchema actionaged_circulationaged_circulationFieldData TypeConstraints and Referencesusr_post_codetext @@ -22048,7 +22072,7 @@ OFFSET 7; - all_circulationall_circulationFieldData TypeConstraints and Referencesidbigint + all_circulationall_circulationFieldData TypeConstraints and Referencesidbigint @@ -22218,7 +22242,7 @@ OFFSET 7; - billable_circulationsbillable_circulationsFieldData TypeConstraints and Referencesidbigint + billable_circulationsbillable_circulationsFieldData TypeConstraints and Referencesidbigint @@ -22353,7 +22377,7 @@ OFFSET 7; - circulationcirculationFieldData TypeConstraints and Referencesidbigint + circulationcirculationFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -22556,13 +22580,13 @@ OFFSET 7; - Constraints on circulationcirculation_stop_fines_checkCHECK ((stop_fines = ANY (ARRAY['CHECKIN'::text, 'CLAIMSRETURNED'::text, 'LOST'::text, 'MAXFINES'::text, 'RENEW'::text, 'LONGOVERDUE'::text]))) + Constraints on circulationcirculation_stop_fines_checkCHECK ((stop_fines = ANY (ARRAY['CHECKIN'::text, 'CLAIMSRETURNED'::text, 'LOST'::text, 'MAXFINES'::text, 'RENEW'::text, 'LONGOVERDUE'::text]))) - hold_copy_maphold_copy_mapFieldData TypeConstraints and Referencesidserial + hold_copy_maphold_copy_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -22623,7 +22647,7 @@ OFFSET 7; - hold_notificationhold_notificationFieldData TypeConstraints and Referencesidserial + hold_notificationhold_notificationFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -22687,7 +22711,7 @@ OFFSET 7; - hold_requesthold_requestFieldData TypeConstraints and Referencesidserial + hold_requesthold_requestFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -22921,19 +22945,19 @@ OFFSET 7; - Constraints on hold_requesthold_request_hold_type_checkCHECK ((hold_type = ANY (ARRAY['M'::text, 'T'::text, 'V'::text, 
'C'::text]))) + Constraints on hold_requesthold_request_hold_type_checkCHECK ((hold_type = ANY (ARRAY['M'::text, 'T'::text, 'V'::text, 'C'::text]))) Tables referencing action.hold_copy_map via Foreign Key Constraints - •action.hold_copy_map•action.hold_notification•action.hold_transit_copy + •action.hold_copy_map•action.hold_notification•action.hold_transit_copy - hold_request_cancel_causehold_request_cancel_causeFieldData TypeConstraints and Referencesidserial + hold_request_cancel_causehold_request_cancel_causeFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -22963,12 +22987,12 @@ OFFSET 7; Tables referencing action.hold_request via Foreign Key Constraints - •action.hold_request + •action.hold_request - hold_transit_copyhold_transit_copyFieldData TypeConstraints and Referencesidinteger + hold_transit_copyhold_transit_copyFieldData TypeConstraints and Referencesidinteger PRIMARY KEY @@ -23058,7 +23082,7 @@ OFFSET 7; - in_house_usein_house_useFieldData TypeConstraints and Referencesidserial + in_house_usein_house_useFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -23126,7 +23150,7 @@ OFFSET 7; - non_cat_in_house_usenon_cat_in_house_useFieldData TypeConstraints and Referencesidserial + non_cat_in_house_usenon_cat_in_house_useFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -23194,7 +23218,7 @@ OFFSET 7; - non_cataloged_circulationnon_cataloged_circulationFieldData TypeConstraints and Referencesidserial + non_cataloged_circulationnon_cataloged_circulationFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -23276,7 +23300,7 @@ OFFSET 7; - open_circulationopen_circulationFieldData TypeConstraints and Referencesidbigint + open_circulationopen_circulationFieldData TypeConstraints and Referencesidbigint @@ -23411,7 +23435,7 @@ OFFSET 7; - reservation_transit_copyreservation_transit_copyFieldData TypeConstraints and Referencesidinteger + reservation_transit_copyreservation_transit_copyFieldData TypeConstraints and 
Referencesidinteger PRIMARY KEY @@ -23501,7 +23525,7 @@ OFFSET 7; - surveysurveyFieldData TypeConstraints and Referencesidserial + surveysurveyFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -23601,12 +23625,12 @@ OFFSET 7; Tables referencing action.survey_question via Foreign Key Constraints - •action.survey_question•action.survey_response + •action.survey_question•action.survey_response - survey_answersurvey_answerFieldData TypeConstraints and Referencesidserial + survey_answersurvey_answerFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -23645,12 +23669,12 @@ OFFSET 7; Tables referencing action.survey_response via Foreign Key Constraints - •action.survey_response + •action.survey_response - survey_questionsurvey_questionFieldData TypeConstraints and Referencesidserial + survey_questionsurvey_questionFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -23689,12 +23713,12 @@ OFFSET 7; Tables referencing action.survey_answer via Foreign Key Constraints - •action.survey_answer•action.survey_response + •action.survey_answer•action.survey_response - survey_responsesurvey_responseFieldData TypeConstraints and Referencesidbigserial + survey_responsesurvey_responseFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -23777,7 +23801,7 @@ OFFSET 7; - transit_copytransit_copyFieldData TypeConstraints and Referencesidserial + transit_copytransit_copyFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -23882,12 +23906,12 @@ OFFSET 7; Tables referencing action.transit_copy via Foreign Key Constraints - •action.transit_copy + •action.transit_copy - unfulfilled_hold_listunfulfilled_hold_listFieldData TypeConstraints and Referencesidbigserial + unfulfilled_hold_listunfulfilled_hold_listFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -23934,7 +23958,7 @@ OFFSET 7; - Schema action_triggerSchema action_triggercleanupcleanupFieldData TypeConstraints and Referencesmoduletext + Schema 
action_triggerSchema action_triggercleanupcleanupFieldData TypeConstraints and Referencesmoduletext PRIMARY KEY @@ -23957,12 +23981,12 @@ OFFSET 7; Tables referencing action_trigger.event_definition via Foreign Key Constraints - •action_trigger.event_definition + •action_trigger.event_definition - collectorcollectorFieldData TypeConstraints and Referencesmoduletext + collectorcollectorFieldData TypeConstraints and Referencesmoduletext PRIMARY KEY @@ -23985,12 +24009,12 @@ OFFSET 7; Tables referencing action_trigger.environment via Foreign Key Constraints - •action_trigger.environment + •action_trigger.environment - environmentenvironmentFieldData TypeConstraints and Referencesidserial + environmentenvironmentFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -24057,13 +24081,13 @@ OFFSET 7; - Constraints on environmentenvironment_label_checkCHECK ((label <> ALL (ARRAY['result'::text, 'target'::text, 'event'::text]))) + Constraints on environmentenvironment_label_checkCHECK ((label <> ALL (ARRAY['result'::text, 'target'::text, 'event'::text]))) - eventeventFieldData TypeConstraints and Referencesidbigserial + eventeventFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -24166,13 +24190,13 @@ OFFSET 7; - Constraints on eventevent_state_checkCHECK ((state = ANY (ARRAY['pending'::text, 'invalid'::text, 'found'::text, 'collecting'::text, 'collected'::text, 'validating'::text, 'valid'::text, 'reacting'::text, 'reacted'::text, 'cleaning'::text, 'complete'::text, 'error'::text]))) + Constraints on eventevent_state_checkCHECK ((state = ANY (ARRAY['pending'::text, 'invalid'::text, 'found'::text, 'collecting'::text, 'collected'::text, 'validating'::text, 'valid'::text, 'reacting'::text, 'reacted'::text, 'cleaning'::text, 'complete'::text, 'error'::text]))) - event_definitionevent_definitionFieldData TypeConstraints and Referencesidserial + event_definitionevent_definitionFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -24379,12 
+24403,12 @@ OFFSET 7; Tables referencing action_trigger.environment via Foreign Key Constraints - •action_trigger.environment•action_trigger.event•action_trigger.event_params + •action_trigger.environment•action_trigger.event•action_trigger.event_params - event_outputevent_outputFieldData TypeConstraints and Referencesidbigserial + event_outputevent_outputFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -24427,12 +24451,12 @@ OFFSET 7; Tables referencing action_trigger.event via Foreign Key Constraints - •action_trigger.event + •action_trigger.event - event_paramsevent_paramsFieldData TypeConstraints and Referencesidbigserial + event_paramsevent_paramsFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -24493,7 +24517,7 @@ OFFSET 7; - hookhookFieldData TypeConstraints and Referenceskeytext + hookhookFieldData TypeConstraints and Referenceskeytext PRIMARY KEY @@ -24532,12 +24556,12 @@ OFFSET 7; Tables referencing action_trigger.event_definition via Foreign Key Constraints - •action_trigger.event_definition + •action_trigger.event_definition - reactorreactorFieldData TypeConstraints and Referencesmoduletext + reactorreactorFieldData TypeConstraints and Referencesmoduletext PRIMARY KEY @@ -24560,12 +24584,12 @@ OFFSET 7; Tables referencing action_trigger.event_definition via Foreign Key Constraints - •action_trigger.event_definition + •action_trigger.event_definition - validatorvalidatorFieldData TypeConstraints and Referencesmoduletext + validatorvalidatorFieldData TypeConstraints and Referencesmoduletext PRIMARY KEY @@ -24588,12 +24612,12 @@ OFFSET 7; Tables referencing action_trigger.event_definition via Foreign Key Constraints - •action_trigger.event_definition + •action_trigger.event_definition - Schema actorSchema actorcardcardFieldData TypeConstraints and Referencesidserial + Schema actorSchema actorcardcardFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -24647,7 +24671,7 @@ OFFSET 7; - 
hours_of_operationhours_of_operationFieldData TypeConstraints and Referencesidinteger + hours_of_operationhours_of_operationFieldData TypeConstraints and Referencesidinteger @@ -24797,7 +24821,7 @@ OFFSET 7; - org_addressorg_addressFieldData TypeConstraints and Referencesidserial + org_addressorg_addressFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -24892,12 +24916,12 @@ OFFSET 7; Tables referencing actor.org_unit via Foreign Key Constraints - •actor.org_unit + •actor.org_unit - org_lassoorg_lassoFieldData TypeConstraints and Referencesidserial + org_lassoorg_lassoFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -24927,12 +24951,12 @@ OFFSET 7; Tables referencing actor.org_lasso_map via Foreign Key Constraints - •actor.org_lasso_map + •actor.org_lasso_map - org_lasso_maporg_lasso_mapFieldData TypeConstraints and Referencesidserial + org_lasso_maporg_lasso_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -24977,7 +25001,7 @@ OFFSET 7; - org_unitorg_unitFieldData TypeConstraints and Referencesidserial + org_unitorg_unitFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -25116,12 +25140,12 @@ OFFSET 7; Tables referencing acq.distribution_formula via Foreign Key Constraints - 
•acq.distribution_formula•acq.distribution_formula_entry•acq.fund•acq.fund_tag•acq.funding_source•acq.lineitem_detail•acq.picklist•acq.provider•acq.purchase_order•action.circulation•action.hold_request•action.in_house_use•action.non_cat_in_house_use•action.non_cataloged_circulation•action.survey•action.transit_copy•action_trigger.event_definition•actor.hours_of_operation•actor.org_address•actor.org_lasso_map•actor.org_unit•actor.org_unit_closed•actor.org_unit_setting•actor.stat_cat•actor.stat_cat_entry•actor.usr•actor.usr_org_unit_opt_in•actor.usr_standing_penalty•actor.workstation•asset.call_number•asset.copy•asset.copy_location•asset.copy_transparency•asset.stat_cat•asset.stat_cat_entry•booking.reservation•booking.resource•booking.resource_attr•booking.resource_attr_value•booking.resource_type•config.billing_type•config.circ_matrix_matchpoint•config.hold_matrix_matchpoint•config.idl_field_doc•money.collections_tracker•permission.grp_penalty_threshold•permission.usr_work_ou_map•reporter.output_folder•reporter.report_folder•reporter.template_folder•serial.record_entry•vandelay.import_bib_trash_fields•vandelay.import_item_attr_definition + 
•acq.distribution_formula•acq.distribution_formula_entry•acq.fund•acq.fund_tag•acq.funding_source•acq.lineitem_detail•acq.picklist•acq.provider•acq.purchase_order•action.circulation•action.hold_request•action.in_house_use•action.non_cat_in_house_use•action.non_cataloged_circulation•action.survey•action.transit_copy•action_trigger.event_definition•actor.hours_of_operation•actor.org_address•actor.org_lasso_map•actor.org_unit•actor.org_unit_closed•actor.org_unit_setting•actor.stat_cat•actor.stat_cat_entry•actor.usr•actor.usr_org_unit_opt_in•actor.usr_standing_penalty•actor.workstation•asset.call_number•asset.copy•asset.copy_location•asset.copy_transparency•asset.stat_cat•asset.stat_cat_entry•booking.reservation•booking.resource•booking.resource_attr•booking.resource_attr_value•booking.resource_type•config.billing_type•config.circ_matrix_matchpoint•config.hold_matrix_matchpoint•config.idl_field_doc•money.collections_tracker•permission.grp_penalty_threshold•permission.usr_work_ou_map•reporter.output_folder•reporter.report_folder•reporter.template_folder•serial.record_entry•vandelay.import_bib_trash_fields•vandelay.import_item_attr_definition - org_unit_closedorg_unit_closedFieldData TypeConstraints and Referencesidserial + org_unit_closedorg_unit_closedFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -25171,7 +25195,7 @@ OFFSET 7; - org_unit_proximityorg_unit_proximityFieldData TypeConstraints and Referencesidbigserial + org_unit_proximityorg_unit_proximityFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -25203,7 +25227,7 @@ OFFSET 7; - org_unit_settingorg_unit_settingFieldData TypeConstraints and Referencesidbigserial + org_unit_settingorg_unit_settingFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -25264,7 +25288,7 @@ OFFSET 7; - org_unit_typeorg_unit_typeFieldData TypeConstraints and Referencesidserial + org_unit_typeorg_unit_typeFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -25333,12 +25357,12 @@ 
OFFSET 7; Tables referencing actor.org_unit via Foreign Key Constraints - •actor.org_unit•actor.org_unit_type•config.hold_matrix_matchpoint + •actor.org_unit•actor.org_unit_type•config.hold_matrix_matchpoint - stat_catstat_catFieldData TypeConstraints and Referencesidserial + stat_catstat_catFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -25402,12 +25426,12 @@ OFFSET 7; Tables referencing actor.stat_cat_entry via Foreign Key Constraints - •actor.stat_cat_entry•actor.stat_cat_entry_usr_map + •actor.stat_cat_entry•actor.stat_cat_entry_usr_map - stat_cat_entrystat_cat_entryFieldData TypeConstraints and Referencesidserial + stat_cat_entrystat_cat_entryFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -25483,7 +25507,7 @@ OFFSET 7; - stat_cat_entry_usr_mapstat_cat_entry_usr_mapFieldData TypeConstraints and Referencesidbigserial + stat_cat_entry_usr_mapstat_cat_entry_usr_mapFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -25551,7 +25575,7 @@ OFFSET 7; - usrusrFieldData TypeConstraints and Referencesidserial + usrusrFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -25895,12 +25919,12 @@ OFFSET 7; Tables referencing acq.fund_allocation via Foreign Key Constraints - 
•acq.fund_allocation•acq.lineitem•acq.lineitem_note•acq.lineitem_usr_attr_definition•acq.picklist•acq.po_note•acq.purchase_order•action.circulation•action.hold_notification•action.hold_request•action.in_house_use•action.non_cat_in_house_use•action.non_cataloged_circulation•actor.card•actor.stat_cat_entry_usr_map•actor.usr_address•actor.usr_note•actor.usr_org_unit_opt_in•actor.usr_password_reset•actor.usr_setting•actor.usr_standing_penalty•asset.call_number•asset.call_number_note•asset.copy•asset.copy_note•biblio.record_entry•biblio.record_note•booking.reservation•container.biblio_record_entry_bucket•container.call_number_bucket•container.copy_bucket•container.user_bucket•container.user_bucket_item•money.billable_xact•money.collections_tracker•permission.usr_grp_map•permission.usr_object_perm_map•permission.usr_perm_map•permission.usr_work_ou_map•reporter.output_folder•reporter.report•reporter.report_folder•reporter.schedule•reporter.template•reporter.template_folder•vandelay.queue + •acq.fund_allocation•acq.lineitem•acq.lineitem_note•acq.lineitem_usr_attr_definition•acq.picklist•acq.po_note•acq.purchase_order•action.circulation•action.hold_notification•action.hold_request•action.in_house_use•action.non_cat_in_house_use•action.non_cataloged_circulation•actor.card•actor.stat_cat_entry_usr_map•actor.usr_address•actor.usr_note•actor.usr_org_unit_opt_in•actor.usr_password_reset•actor.usr_setting•actor.usr_standing_penalty•asset.call_number•asset.call_number_note•asset.copy•asset.copy_note•biblio.record_entry•biblio.record_note•booking.reservation•container.biblio_record_entry_bucket•container.call_number_bucket•container.copy_bucket•container.user_bucket•container.user_bucket_item•money.billable_xact•money.collections_tracker•permission.usr_grp_map•permission.usr_object_perm_map•permission.usr_perm_map•permission.usr_work_ou_map•reporter.output_folder•reporter.report•reporter.report_folder•reporter.schedule•reporter.template•reporter.template_folder•vandelay.queue - 
usr_addressusr_addressFieldData TypeConstraints and Referencesidserial + usr_addressusr_addressFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -26025,12 +26049,12 @@ OFFSET 7; Tables referencing actor.usr via Foreign Key Constraints - •actor.usr•actor.usr_address + •actor.usr•actor.usr_address - usr_noteusr_noteFieldData TypeConstraints and Referencesidbigserial + usr_noteusr_noteFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -26105,7 +26129,7 @@ OFFSET 7; - usr_org_unit_opt_inusr_org_unit_opt_inFieldData TypeConstraints and Referencesidserial + usr_org_unit_opt_inusr_org_unit_opt_inFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -26203,7 +26227,7 @@ OFFSET 7; - usr_password_resetusr_password_resetFieldData TypeConstraints and Referencesidserial + usr_password_resetusr_password_resetFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -26259,7 +26283,7 @@ OFFSET 7; - usr_settingusr_settingFieldData TypeConstraints and Referencesidbigserial + usr_settingusr_settingFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -26320,7 +26344,7 @@ OFFSET 7; - usr_standing_penaltyusr_standing_penaltyFieldData TypeConstraints and Referencesidserial + usr_standing_penaltyusr_standing_penaltyFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -26408,7 +26432,7 @@ OFFSET 7; - workstationworkstationFieldData TypeConstraints and Referencesidserial + workstationworkstationFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -26454,12 +26478,12 @@ OFFSET 7; Tables referencing actor.usr_org_unit_opt_in via Foreign Key Constraints - •actor.usr_org_unit_opt_in•money.bnm_desk_payment + •actor.usr_org_unit_opt_in•money.bnm_desk_payment - Schema assetSchema assetcall_numbercall_numberFieldData TypeConstraints and Referencesidbigserial + Schema assetSchema assetcall_numbercall_numberFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -26563,12 +26587,12 @@ OFFSET 7; Tables 
referencing asset.call_number_note via Foreign Key Constraints - •asset.call_number_note•asset.copy•asset.uri_call_number_map•container.call_number_bucket_item•serial.subscription + •asset.call_number_note•asset.copy•asset.uri_call_number_map•container.call_number_bucket_item•serial.subscription - call_number_notecall_number_noteFieldData TypeConstraints and Referencesidbigserial + call_number_notecall_number_noteFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -26643,7 +26667,7 @@ OFFSET 7; - copycopyFieldData TypeConstraints and Referencesidbigserial + copycopyFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -26889,19 +26913,19 @@ OFFSET 7; - Constraints on copycopy_fine_level_checkCHECK ((fine_level = ANY (ARRAY[1, 2, 3])))copy_loan_duration_checkCHECK ((loan_duration = ANY (ARRAY[1, 2, 3]))) + Constraints on copycopy_fine_level_checkCHECK ((fine_level = ANY (ARRAY[1, 2, 3])))copy_ loan_ duration_ checkCHECK ((loan_duration = ANY (ARRAY[1, 2, 3]))) Tables referencing acq.lineitem_detail via Foreign Key Constraints - •acq.lineitem_detail•action.circulation•action.hold_copy_map•action.hold_request•action.hold_transit_copy•action.in_house_use•action.transit_copy•asset.copy_note•asset.copy_transparency_map•asset.stat_cat_entry_copy_map•container.copy_bucket_item•extend_reporter.legacy_circ_count•serial.issuance + •acq.lineitem_detail•action.circulation•action.hold_copy_map•action.hold_request•action.hold_transit_copy•action.in_house_use•action.transit_copy•asset.copy_note•asset.copy_transparency_map•asset.stat_cat_entry_copy_map•container.copy_bucket_item•extend_reporter.legacy_circ_count•serial.issuance - copy_locationcopy_locationFieldData TypeConstraints and Referencesidserial + copy_locationcopy_locationFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -26992,12 +27016,12 @@ OFFSET 7; Tables referencing acq.distribution_formula_entry via Foreign Key Constraints - 
•acq.distribution_formula_entry•acq.lineitem_detail•asset.copy•serial.issuance + •acq.distribution_formula_entry•acq.lineitem_detail•asset.copy•serial.issuance - copy_notecopy_noteFieldData TypeConstraints and Referencesidbigserial + copy_notecopy_noteFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -27072,7 +27096,7 @@ OFFSET 7; - copy_transparencycopy_transparencyFieldData TypeConstraints and Referencesidserial + copy_transparencycopy_transparencyFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -27186,19 +27210,19 @@ OFFSET 7; - Constraints on copy_transparencycopy_transparency_fine_level_checkCHECK ((fine_level = ANY (ARRAY[1, 2, 3])))copy_transparency_loan_duration_checkCHECK ((loan_duration = ANY (ARRAY[1, 2, 3]))) + Constraints on copy_transparencycopy_ transparency_ fine_ level_ checkCHECK ((fine_level = ANY (ARRAY[1, 2, 3])))copy_transparency_loan_duration_checkCHECK ((loan_duration = ANY (ARRAY[1, 2, 3]))) Tables referencing asset.copy_transparency_map via Foreign Key Constraints - •asset.copy_transparency_map + •asset.copy_transparency_map - copy_transparency_mapcopy_transparency_mapFieldData TypeConstraints and Referencesidbigserial + copy_transparency_mapcopy_transparency_mapFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -27250,7 +27274,7 @@ OFFSET 7; - stat_catstat_catFieldData TypeConstraints and Referencesidserial + stat_catstat_catFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -27314,12 +27338,12 @@ OFFSET 7; Tables referencing asset.stat_cat_entry via Foreign Key Constraints - •asset.stat_cat_entry•asset.stat_cat_entry_copy_map + •asset.stat_cat_entry•asset.stat_cat_entry_copy_map - stat_cat_entrystat_cat_entryFieldData TypeConstraints and Referencesidserial + stat_cat_entrystat_cat_entryFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -27396,12 +27420,12 @@ OFFSET 7; Tables referencing asset.stat_cat_entry_copy_map via Foreign Key Constraints - 
•asset.stat_cat_entry_copy_map + •asset.stat_cat_entry_copy_map - stat_cat_entry_copy_mapstat_cat_entry_copy_mapFieldData TypeConstraints and Referencesidbigserial + stat_cat_entry_copy_mapstat_cat_entry_copy_mapFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -27476,7 +27500,7 @@ OFFSET 7; - stat_cat_entry_transparency_mapstat_cat_entry_transparency_mapFieldData TypeConstraints and Referencesidbigserial + stat_cat_entry_transparency_mapstat_cat_entry_transparency_mapFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -27530,7 +27554,7 @@ OFFSET 7; - uriuriFieldData TypeConstraints and Referencesidserial + uriuriFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -27574,12 +27598,12 @@ OFFSET 7; Tables referencing asset.uri_call_number_map via Foreign Key Constraints - •asset.uri_call_number_map•serial.subscription + •asset.uri_call_number_map•serial.subscription - uri_call_number_mapuri_call_number_mapFieldData TypeConstraints and Referencesidbigserial + uri_call_number_mapuri_call_number_mapFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -27640,7 +27664,7 @@ OFFSET 7; - Schema auditorSchema auditoractor_org_unit_historyactor_org_unit_historyFieldData TypeConstraints and Referencesaudit_idbigint + Schema auditorSchema auditoractor_org_unit_historyactor_org_unit_historyFieldData TypeConstraints and Referencesaudit_idbigint PRIMARY KEY @@ -27741,7 +27765,7 @@ OFFSET 7; - actor_org_unit_lifecycleactor_org_unit_lifecycleFieldData TypeConstraints and References?column?bigint + actor_org_unit_lifecycleactor_org_unit_lifecycleFieldData TypeConstraints and References?column?bigint @@ -27821,7 +27845,7 @@ OFFSET 7; - actor_usr_address_historyactor_usr_address_historyFieldData TypeConstraints and Referencesaudit_idbigint + actor_usr_address_historyactor_usr_address_historyFieldData TypeConstraints and Referencesaudit_idbigint PRIMARY KEY @@ -27944,7 +27968,7 @@ OFFSET 7; - 
[Regenerated schema reference tables (generated `temp.fo` PDF output): mechanically rebuilt table markup documenting fields, data types, constraints, and foreign-key references for the actor, asset, authority, biblio, booking, config, container, extend_reporter, metabib, money, offline, permission, public, and reporter schemas. The paired -/+ hunk lines are textually identical; the changes affect only the formatting-object markup.]
TypeConstraints and Referencesidinteger + overdue_reportsoverdue_reportsFieldData TypeConstraints and Referencesidinteger @@ -37505,7 +37529,7 @@ OFFSET 7; - pending_reportspending_reportsFieldData TypeConstraints and Referencesidinteger + pending_reportspending_reportsFieldData TypeConstraints and Referencesidinteger @@ -37535,7 +37559,7 @@ OFFSET 7; - reportreportFieldData TypeConstraints and Referencesidserial + reportreportFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -37643,12 +37667,12 @@ OFFSET 7; Tables referencing reporter.schedule via Foreign Key Constraints - •reporter.schedule + •reporter.schedule - report_folderreport_folderFieldData TypeConstraints and Referencesidserial + report_folderreport_folderFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -37729,12 +37753,12 @@ OFFSET 7; Tables referencing reporter.report via Foreign Key Constraints - •reporter.report•reporter.report_folder + •reporter.report•reporter.report_folder - schedulescheduleFieldData TypeConstraints and Referencesidserial + schedulescheduleFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -37881,7 +37905,7 @@ OFFSET 7; - simple_recordsimple_recordFieldData TypeConstraints and Referencesidbigint + simple_recordsimple_recordFieldData TypeConstraints and Referencesidbigint @@ -37996,7 +38020,7 @@ OFFSET 7; - super_simple_recordsuper_simple_recordFieldData TypeConstraints and Referencesidbigint + super_simple_recordsuper_simple_recordFieldData TypeConstraints and Referencesidbigint @@ -38056,7 +38080,7 @@ OFFSET 7; - templatetemplateFieldData TypeConstraints and Referencesidserial + templatetemplateFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -38132,12 +38156,12 @@ OFFSET 7; Tables referencing reporter.report via Foreign Key Constraints - •reporter.report + •reporter.report - template_foldertemplate_folderFieldData TypeConstraints and Referencesidserial + template_foldertemplate_folderFieldData TypeConstraints and 
Referencesidserial PRIMARY KEY @@ -38218,12 +38242,12 @@ OFFSET 7; Tables referencing reporter.template via Foreign Key Constraints - •reporter.template•reporter.template_folder + •reporter.template•reporter.template_folder - xact_billing_totalsxact_billing_totalsFieldData TypeConstraints and Referencesxactbigint + xact_billing_totalsxact_billing_totalsFieldData TypeConstraints and Referencesxactbigint @@ -38248,7 +38272,7 @@ OFFSET 7; - xact_paid_totalsxact_paid_totalsFieldData TypeConstraints and Referencesxactbigint + xact_paid_totalsxact_paid_totalsFieldData TypeConstraints and Referencesxactbigint @@ -38273,7 +38297,7 @@ OFFSET 7; - Schema searchSchema searchrelevance_adjustmentrelevance_adjustmentFieldData TypeConstraints and Referencesidserial + Schema searchSchema searchrelevance_adjustmentrelevance_adjustmentFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -38327,13 +38351,13 @@ OFFSET 7; - Constraints on relevance_adjustmentrelevance_adjustment _bump_type_checkCHECK ((bump_type = ANY (ARRAY['word_order'::text, 'first_word'::text, 'full_match'::text]))) + Constraints on relevance_adjustmentrelevance_adjustment _bump_type_checkCHECK ((bump_type = ANY (ARRAY['word_order'::text, 'first_word'::text, 'full_match'::text]))) - Schema serialSchema serialbib_summarybib_summaryFieldData TypeConstraints and Referencesidserial + Schema serialSchema serialbib_summarybib_summaryFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -38383,7 +38407,7 @@ OFFSET 7; - binding_unitbinding_unitFieldData TypeConstraints and Referencesidserial + binding_unitbinding_unitFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -38438,12 +38462,12 @@ OFFSET 7; Tables referencing serial.issuance via Foreign Key Constraints - •serial.issuance + •serial.issuance - index_summaryindex_summaryFieldData TypeConstraints and Referencesidserial + index_summaryindex_summaryFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -38493,7 +38517,7 @@ 
OFFSET 7; - issuanceissuanceFieldData TypeConstraints and Referencesidserial + issuanceissuanceFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -38565,7 +38589,7 @@ OFFSET 7; - record_entryrecord_entryFieldData TypeConstraints and Referencesidbigserial + record_entryrecord_entryFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -38683,7 +38707,7 @@ OFFSET 7; - subscriptionsubscriptionFieldData TypeConstraints and Referencesidserial + subscriptionsubscriptionFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -38737,12 +38761,12 @@ OFFSET 7; Tables referencing serial.bib_summary via Foreign Key Constraints - •serial.bib_summary•serial.binding_unit•serial.index_summary•serial.issuance•serial.sup_summary + •serial.bib_summary•serial.binding_unit•serial.index_summary•serial.issuance•serial.sup_summary - sup_summarysup_summaryFieldData TypeConstraints and Referencesidserial + sup_summarysup_summaryFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -38792,7 +38816,7 @@ OFFSET 7; - Schema statsSchema statsfleshed_call_numberfleshed_call_numberFieldData TypeConstraints and Referencesidbigint + Schema statsSchema statsfleshed_call_numberfleshed_call_numberFieldData TypeConstraints and Referencesidbigint @@ -38877,7 +38901,7 @@ OFFSET 7; - fleshed_circulationfleshed_circulationFieldData TypeConstraints and Referencesidbigint + fleshed_circulationfleshed_circulationFieldData TypeConstraints and Referencesidbigint @@ -39057,7 +39081,7 @@ OFFSET 7; - fleshed_copyfleshed_copyFieldData TypeConstraints and Referencesidbigint + fleshed_copyfleshed_copyFieldData TypeConstraints and Referencesidbigint @@ -39242,7 +39266,7 @@ OFFSET 7; - Schema vandelaySchema vandelayauthority_attr_definitionauthority_attr_definitionFieldData TypeConstraints and Referencesidserial + Schema vandelaySchema vandelayauthority_attr_definitionauthority_attr_definitionFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -39304,12 +39328,12 @@ 
OFFSET 7; Tables referencing vandelay.queued_authority_record_attr via Foreign Key Constraints - •vandelay.queued_authority_record_attr + •vandelay.queued_authority_record_attr - authority_matchauthority_matchFieldData TypeConstraints and Referencesidbigserial + authority_matchauthority_matchFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -39362,7 +39386,7 @@ OFFSET 7; - authority_queueauthority_queueFieldData TypeConstraints and Referencesidbigint + authority_queueauthority_queueFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -39435,19 +39459,19 @@ OFFSET 7; - Constraints on authority_queueauthority_queue_ queue_type_checkCHECK ((queue_type = 'authority'::text))queue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) + Constraints on authority_queueauthority_queue_ queue_type_checkCHECK ((queue_type = 'authority'::text))queue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) Tables referencing vandelay.queued_authority_record via Foreign Key Constraints - •vandelay.queued_authority_record + •vandelay.queued_authority_record - bib_attr_definitionbib_attr_definitionFieldData TypeConstraints and Referencesidserial + bib_attr_definitionbib_attr_definitionFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -39509,12 +39533,12 @@ OFFSET 7; Tables referencing vandelay.queued_bib_record_attr via Foreign Key Constraints - •vandelay.queued_bib_record_attr + •vandelay.queued_bib_record_attr - bib_matchbib_matchFieldData TypeConstraints and Referencesidbigserial + bib_matchbib_matchFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -39572,13 +39596,13 @@ OFFSET 7; - Constraints on bib_matchbib_match_field_type_checkCHECK ((field_type = ANY (ARRAY['isbn'::text, 'tcn_value'::text, 'id'::text]))) + Constraints on bib_matchbib_match_field_type_checkCHECK ((field_type = ANY (ARRAY['isbn'::text, 'tcn_value'::text, 'id'::text]))) - 
bib_queuebib_queueFieldData TypeConstraints and Referencesidbigint + bib_queuebib_queueFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -39663,19 +39687,19 @@ OFFSET 7; - Constraints on bib_queuebib_queue_queue_type_checkCHECK ((queue_type = 'bib'::text))queue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) + Constraints on bib_queuebib_queue_queue_type_checkCHECK ((queue_type = 'bib'::text))queue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) Tables referencing vandelay.queued_bib_record via Foreign Key Constraints - •vandelay.queued_bib_record + •vandelay.queued_bib_record - import_bib_trash_fieldsimport_bib_trash_fieldsFieldData TypeConstraints and Referencesidbigserial + import_bib_trash_fieldsimport_bib_trash_fieldsFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -39729,7 +39753,7 @@ OFFSET 7; - import_itemimport_itemFieldData TypeConstraints and Referencesidbigserial + import_itemimport_itemFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -39869,7 +39893,7 @@ OFFSET 7; - import_item_attr_definitionimport_item_attr_definitionFieldData TypeConstraints and Referencesidbigserial + import_item_attr_definitionimport_item_attr_definitionFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -40045,12 +40069,12 @@ OFFSET 7; Tables referencing vandelay.bib_queue via Foreign Key Constraints - •vandelay.bib_queue•vandelay.import_item + •vandelay.bib_queue•vandelay.import_item - queuequeueFieldData TypeConstraints and Referencesidbigserial + queuequeueFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -40128,13 +40152,13 @@ OFFSET 7; - Constraints on queuequeue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) + Constraints on queuequeue_queue_type_checkCHECK ((queue_type = ANY (ARRAY['bib'::text, 'authority'::text]))) - queued_authority_recordqueued_authority_recordFieldData TypeConstraints 
and Referencesidbigint + queued_authority_recordqueued_authority_recordFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -40207,19 +40231,19 @@ OFFSET 7; - Constraints on queued_authority_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) + Constraints on queued_authority_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) Tables referencing vandelay.authority_match via Foreign Key Constraints - •vandelay.authority_match•vandelay.queued_authority_record_attr + •vandelay.authority_match•vandelay.queued_authority_record_attr - queued_authority_record_attrqueued_authority_record_attrFieldData TypeConstraints and Referencesidbigserial + queued_authority_record_attrqueued_authority_record_attrFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -40272,12 +40296,12 @@ OFFSET 7; Tables referencing vandelay.authority_match via Foreign Key Constraints - •vandelay.authority_match + •vandelay.authority_match - queued_bib_recordqueued_bib_recordFieldData TypeConstraints and Referencesidbigint + queued_bib_recordqueued_bib_recordFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -40362,19 +40386,19 @@ OFFSET 7; - Constraints on queued_bib_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) + Constraints on queued_bib_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) Tables referencing vandelay.bib_match via Foreign Key Constraints - •vandelay.bib_match•vandelay.import_item•vandelay.queued_bib_record_attr + •vandelay.bib_match•vandelay.import_item•vandelay.queued_bib_record_attr - queued_bib_record_attrqueued_bib_record_attrFieldData TypeConstraints and Referencesidbigserial + queued_bib_record_attrqueued_bib_record_attrFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -40427,12 +40451,12 @@ OFFSET 7; Tables referencing vandelay.bib_match 
via Foreign Key Constraints - •vandelay.bib_match + •vandelay.bib_match - queued_recordqueued_recordFieldData TypeConstraints and Referencesidbigserial + queued_recordqueued_recordFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -40477,7 +40501,7 @@ OFFSET 7; - Constraints on queued_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) + Constraints on queued_recordqueued_record_purpose_checkCHECK ((purpose = ANY (ARRAY['import'::text, 'overlay'::text]))) @@ -40494,33 +40518,33 @@ OFFSET 7; The Evergreen DIG was established in May 2009 at the first Evergreen International Conference, where members of the Evergreen community committed to developing single-source, standards-based documentation for Evergreen. Since then, the DIG has been actively working toward that goal. - Table B.1. Evergreen DIG ParticipantsNameOrganizationJeremy BuhlerSITKAPaula Burton King County Library SystemMatt CarlsonKing County Library System Sarah ChildsHussey-Mayfield Memorial Public LibraryAnton ChuppinNova Scotia Provincial LibraryMarlene ColemanBeaufort Country LibraryKaren CollierKent County Public Library Shannon DineenSITKAGeorge DuimovichNRCan LibraryJennifer DurhamStatesboro Regional Library SystemJennifer FinneyFlorence County LibraryLynn FloydAnderson County LibrarySally FortinEquinox SoftwareTina JiSITKACatherine LemmerIndiana State LibraryRoma MatottPioneer Library SystemAndrea NeimanKent County Public LibraryKevin PischkeWilliam Jessup University Tara RobertsonN/ARod SchiffmanAlpha-G ConsultingSteve SheppardOpenBen ShumBibliomationRobert SoulliereMohawk CollegeLindsay StrattonPioneer Library SystemJenny TurnerPALSRepke de VriesInternational Institute for Social HistoryD. Ceabron WilliamsFlint River Regional Library System Tigran ZargaryanFundamental Scientific Library of the National Academy of Sciences - Table B.2. 
Past DIG ParticipantsNameOrganizationPaul Weiss Consultant/Sage Library SystemKaren SchneiderEquinox Software + Table B.1. Evergreen DIG ParticipantsNameOrganizationJeremy BuhlerSITKAPaula Burton King County Library SystemMatt CarlsonKing County Library System Sarah ChildsHussey-Mayfield Memorial Public LibraryAnton ChuppinNova Scotia Provincial LibraryMarlene ColemanBeaufort County LibraryKaren CollierKent County Public Library Shannon DineenSITKAGeorge DuimovichNRCan LibraryJennifer DurhamStatesboro Regional Library SystemJennifer FinneyFlorence County LibraryLynn FloydAnderson County LibrarySally FortinEquinox SoftwareTina JiSITKACatherine LemmerIndiana State LibraryRoma MatottPioneer Library SystemAndrea NeimanKent County Public LibraryKevin PischkeWilliam Jessup University Tara RobertsonN/ARod SchiffmanAlpha-G ConsultingSteve SheppardOpenBen ShumBibliomationRobert SoulliereMohawk CollegeLindsay StrattonPioneer Library SystemJenny TurnerPALSRepke de VriesInternational Institute for Social HistoryD. Ceabron WilliamsFlint River Regional Library System Tigran ZargaryanFundamental Scientific Library of the National Academy of Sciences + Table B.2.
Past DIG ParticipantsNameOrganizationPaul Weiss Consultant/Sage Library SystemKaren SchneiderEquinox Software Special thanks goes to: - •Jeremy Buhler and SITKA for providing DocBook style sheets, style guides and large portions of content for this documentation.•Dan Scott from Laurentian University for providing large portions of content and many helpful tips.•Mike Rylander, Grace Dunbar, Galen Charleton, Jason Etheridge, Bill Erickson, Joe Atzberger, Scott McKellar and all the other folks at Equinox Software for + •Jeremy Buhler and SITKA for providing DocBook style sheets, style guides and large portions of content for this documentation.•Dan Scott from Laurentian University for providing large portions of content and many helpful tips.•Mike Rylander, Grace Dunbar, Galen Charlton, Jason Etheridge, Bill Erickson, Joe Atzberger, Scott McKellar and all the other folks at Equinox Software for contributing large portions of content on the wiki. There have been many others who have contributed their time to the Book of Evergreen project. Without their contributions to this community-driven project, this documentation would not be possible. How to ParticipateHow to Participate - + Contributing to documentation is an excellent way to support Evergreen, even if you are new to documentation. In fact, beginners often have a distinct advantage over the experts, more easily spotting the places where documentation is lacking or where it is unclear. We welcome your contribution with planning, writing, editing, testing, translating to DocBook, and other tasks. Whatever your background or experience, we are keen to have your help! What you can do: - •Join the Evergreen documentation listserv: list.georgialibraries.org/mailman/listinfo/open-ils-documentation . This is the primary way we communicate with each other.
- Please send an email introducing yourself to the list.•Add yourself to the participant list - if you have an Evergreen DokuWiki account, or send a request to <docs@evergreen-ils.org>.•Check out the documentation outline to see which areas need work, - and let the DIG list know in which areas you would like to work.•Review the documentation and report any error or make suggestion using Launchpad. - Volunteer RolesVolunteer Roles + •Join the Evergreen documentation listserv: list.georgialibraries.org/mailman/listinfo/open-ils-documentation . This is the primary way we communicate with each other. + Please send an email introducing yourself to the list.•Add yourself to the participant list + if you have an Evergreen DokuWiki account, or send a request to <docs@evergreen-ils.org>.•Check out the documentation outline to see which areas need work, + and let the DIG list know in which areas you would like to work.•Review the documentation and report any errors or make suggestions using Launchpad. + Volunteer RolesVolunteer Roles We are now looking for people to help produce the documentation. If you are interested in participating, email the DIG facilitators at <docs@evergreen-ils.org> or post on the documentation mailing list. We're looking for volunteers to work on the following: - •Writing – Produce the documentation (“from scratch,” and/or revised from existing materials). We're open to receiving content in any formats, such as - Word or Open Office, but of course, would be most delighted with DocBook xml format.•Testing – Compare the documents with the functions they describe and ensuring that the procedures accomplish the desired results.
Even if you are not - officially in the DIG, we would appreciate any suggestions you may have for Evergreen documentation.•XML conversion – Convert existing documentation to DocBook format.•Editorial review – Ensuring the documentation is clear and follows Evergreen DIG style - guide conventions.•Style and Design – Edit the DocBook style sheets or post style tips and suggestions on the DIG list. + •Writing – Produce the documentation (“from scratch,” and/or revised from existing materials). We're open to receiving content in any formats, such as + Word or Open Office, but of course, would be most delighted with DocBook XML format.•Testing – Compare the documents with the functions they describe and ensure that the procedures accomplish the desired results. Even if you are not + officially in the DIG, we would appreciate any suggestions you may have for Evergreen documentation.•XML conversion – Convert existing documentation to DocBook format.•Editorial review – Ensure the documentation is clear and follows Evergreen DIG style + guide conventions.•Style and Design – Edit the DocBook style sheets or post style tips and suggestions on the DIG list. @@ -40532,11 +40556,11 @@ OFFSET 7; This documentation is just one way to learn about Evergreen and find solutions to Evergreen challenges. Below is a list of many other resources to help you find answers to almost any question you might have. - Evergreen Wiki - Loads of information and the main portal to the Evergreen community. + Evergreen Wiki - Loads of information and the main portal to the Evergreen community. Evergreen mailing lists - These are excellent for initiating questions. There are several lists including: - •General list - General inquiries regarding Evergreen. If unsure about - which list to use, this is a good stating point.•Developer list - Technical questions should be asked here including - questions regarding installation.
As well, patches can be submitted using this list and developer communication also takes place here. •DIG list - This list is used for questions and + •General list - General inquiries regarding Evergreen. If unsure about + which list to use, this is a good starting point.•Developer list - Technical questions should be asked here including + questions regarding installation. As well, patches can be submitted using this list and developer communication also takes place here. •DIG list - This list is used for questions and feedback regarding this documentation, the Documentation Interest Group and other documentation-related ideas and issues. Evergreen Blog - Great for getting general news and updates about Evergreen. It is also an interesting historical read @@ -40544,10 +40568,10 @@ OFFSET 7; Evergreen IRC channel - Allows live chat. Many developers hang out here and will try to field technical questions. This is often the quickest way to get a solution to a specific problem. Just remember that while the channel is open 24/7, there are times when no one is available in the channel. The most active times for the IRC channel seem to be weekday afternoons (Eastern Standard Time). There is also an archive of logs from the chat sessions available on the - IRC page. + IRC page. Evergreen-related community blogs - Evergreen-related blog entries from the community. Resource Sharing Cooperative of Evergreen Libraries (RSCEL) - Provides some technical documents and a means for the - Evergreen community to collaborate with other libraries. + Evergreen community to collaborate with other libraries. List of current Evergreen libraries - Locate other libraries who are using Evergreen. @@ -40558,36 +40582,36 @@ OFFSET 7; GlossaryGlossary In this section we expand acronyms, define terms, and generally try to explain concepts used by Evergreen software.
- AApacheOpen-source web server software used to serve both static + AApacheOpen-source web server software used to serve both static content and dynamic web pages in a secure and reliable way. More information is available at - http://apache.org.BBookbagsBookbags are lists of items that can be used for any number of + http://apache.org.BBookbagsBookbags are lists of items that can be used for any number of purposes. For example, to keep track of what books you have read, books you would like to read, to maintain a class reading list, to maintain a reading list for a book club, to keep a list of books you would like for your birthday. There are an unlimited number of - uses.CCentOSA popular open-source operating system based on Red Hat + uses.CCentOSA popular open-source operating system based on Red Hat Enterprise Linux (also known as "RHEL") and often used for web servers. More information is available at - http://www.centos.org.Closure CompilerA suite of open-source tools used to build web applications with + http://www.centos.org.Closure CompilerA suite of open-source tools used to build web applications with Javascript; originally developed by Google. It is used to create special builds of the Evergreen Staff Client. More information is available at - http://code.google.com/closure/compiler/.CPANAn open-source archive of software modules written in + http://code.google.com/closure/compiler/.CPANAn open-source archive of software modules written in Perl. More information is available at - http://www.cpan.org.See Also Perl.DDebianOne of the most popular open-source operating system using the + http://www.cpan.org.See Also Perl.DDebianOne of the most popular open-source operating systems using the Linux kernel that provides over 25000 useful precompiled software packages. Also known as Debian GNU/Linux.
More information is available at - http://www.debian.org.Domain nameA unique set of case-insensitive, alphanumeric strings + http://www.debian.org.Domain nameA unique set of case-insensitive, alphanumeric strings separated by periods that are used to name organizations, web sites and addresses on the Internet (e.g.: www.esilibrary.com). Domain names can be reserved via third-party registration services, and can be associated with a - unique IP address or suite of IP addresses.See Also IP Address.EejabberdAn open-source Jabber/XMPP instant messaging server that is + unique IP address or suite of IP addresses.See Also IP Address.EejabberdAn open-source Jabber/XMPP instant messaging server that is used for client-server message passing within Evergreen. It runs under popular operating systems (e.g., Mac OSX, @@ -40597,108 +40621,108 @@ OFFSET 7; services for a Jabber domain across an extendable cluster of cheap, easily-replaced machine nodes. More information is available at - http://www.ejabberd.im.See Also Jabber, XMPP.FGGentooA popular open-source operating system built on the + http://www.ejabberd.im.See Also Jabber, XMPP.FGGentooA popular open-source operating system built on the Linux kernel. More information is available at - http://www.gentoo.org.HIIP Address(Internet Protocol address) A numerical label consisting of + http://www.gentoo.org.HIIP Address(Internet Protocol address) A numerical label consisting of four numbers separated by periods (e.g., "192.168.1.15") assigned to individual members of networked computing systems. It uniquely identifies each system on the network and allows controlled communication between such systems. 
The numerical label scheme must adhere to a strictly defined naming convention that is currently defined and overseen by the Internet Corporation for Assigned Names - and Numbers ("ICANN").Item/copy BucketsVirtual “containers” to use in batch processing + and Numbers ("ICANN").Item/copy BucketsVirtual “containers” to use in batch processing of item or copy records. They can be used to perform various - cataloging/holdings maintenance tasks in batch.JJabberThe communications protocol used for client-server message + cataloging/holdings maintenance tasks in batch.JJabberThe communications protocol used for client-server message passing within Evergreen. Now known as XMPP (eXtensible Messaging and - Presence Protocol), it was originally named "Jabber".See Also XMPP, ejabberd.KLMMARCThe MARC formats are standards for the representation and + Presence Protocol), it was originally named "Jabber".See Also XMPP, ejabberd.KLMMARCThe MARC formats are standards for the representation and communication of bibliographic and related information in - machine-readable form.MARCXMLFramework for working with MARC data in a XML environment.McCoyAn open-source application that allows add-on authors to + machine-readable form.MARCXMLFramework for working with MARC data in an XML environment.McCoyAn open-source application that allows add-on authors to provide secure updates to their users. It is used to create special builds of the Evergreen Staff Client. More information is available at - http://developer.mozilla.org/en/McCoy.memcachedA general-purpose distributed memory caching system, usually + http://developer.mozilla.org/en/McCoy.memcachedA general-purpose distributed memory caching system, usually with a client-server architecture spread over multiple computing systems.
It reduces the number of times a data source (e.g., a database) must be directly accessed by temporarily caching data in memory, therefore dramatically speeding up database-driven web - applications.NNetwork addressAlso known as an IP address (Internet Protocol address).See Also IP Address.nsisAn open-source software tool used to create Windows + applications.NNetwork addressAlso known as an IP address (Internet Protocol address).See Also IP Address.nsisAn open-source software tool used to create Windows installers. It is used to create special builds of the Evergreen Staff Client. More information is available at - http://nsis.sourceforge.net.OOPACThe "Online Public Access Catalog"; an online database of a + http://nsis.sourceforge.net.OOPACThe "Online Public Access Catalog"; an online database of a library's holdings; used to find resources in their collections; possibly searchable by keyword, title, author, subject or call - number.OpenSRFThe "Open Scalable Request Framework" (pronounced 'open surf') + number.OpenSRFThe "Open Scalable Request Framework" (pronounced 'open surf') is a stateful, decentralized service architecture that allows developers to create applications for Evergreen with a minimum of - knowledge of its structure.PPerlThe high-level scripting language in which most of the business logic of Evergreen is written.See Also CPAN.PKIPublic Key Infrastructure (PKI) describes the schemes needed - to generate and maintain digital SSL Certificates.See Also ???TITLE???.PostgreSQLA popular open-source object-relational database management - system that underpins Evergreen software.PuTTYA popular open-source telnet/ssh client for the Windows and + knowledge of its structure.PPerlThe high-level scripting language in which most of the business logic of Evergreen is written.See Also CPAN.PKIPublic Key Infrastructure (PKI) describes the schemes needed + to generate and maintain digital SSL Certificates.See Also SSL Certificate.PostgreSQLA popular open-source 
object-relational database management + system that underpins Evergreen software.PuTTYA popular open-source telnet/ssh client for the Windows and Unix platforms. As used in Evergreen, a handy utility used to create an SSH Tunnel for connecting Staff Clients to Evergreen servers over insecure networks. More information is available at - http://www.chiark.greenend.org.uk/~sgtatham/putty/.See Also SSH tunnel.QRResource HackerAn open-source utility used to view, modify, rename, add, + http://www.chiark.greenend.org.uk/~sgtatham/putty/.See Also SSH tunnel.QRResource HackerAn open-source utility used to view, modify, rename, add, delete and extract resources in 32bit Windows executables. It is used to create special builds of the Evergreen Staff Client. More information is available at - Resource HackerRHELAlso known as "Red Hat Enterprises + Resource HackerRHELAlso known as "Red Hat Enterprise Linux". An official Linux distribution that is targeted at the commercial market. It is the basis of other popular Linux distributions, e.g., CentOS. More information is available at - http://www.redhat.com.SSIPSIP (Standard Interchange Protocol) is a communications + http://www.redhat.com.SSIPSIP (Standard Interchange Protocol) is a communications protocol used within Evergreen for transferring data to and from other third party devices, such as RFID and barcode scanners that handle patron and library material information. Version 2.0 (also known as "SIP2") is the current standard. It was originally - developed by the 3M Corporation.srfshA command language interpreter (shell) that executes commands + developed by the 3M Corporation.srfshA command language interpreter (shell) that executes commands read from the standard input. It is used to test the Open Service - Request Framework (OpenSRF).SRUSRU (Search & Retrieve URL Service) is a search protocol + Request Framework (OpenSRF).SRUSRU (Search & Retrieve URL Service) is a search protocol used in web search and retrieval.
It expresses queries in Contextual Query Language (CQL) and transmits them as a URL, returning XML data - as if it were a web page.See Also SRW.SRWSRW (Search & Retrieve Web Service), also known as "SRU + as if it were a web page.See Also SRW.SRWSRW (Search & Retrieve Web Service), also known as "SRU via HTTP SOAP", is a search protocol used in web search and retrieval. It uses a SOAP interface and expresses both the query and - result as XML data streams.See Also SRU.SSHAn encrypted network protocol using public-key cryptography + result as XML data streams.See Also SRU.SSHAn encrypted network protocol using public-key cryptography that allows secure communications between systems on an insecure network. Typically used to access shell accounts but also supports tunneling, forwarding TCP ports and X11 connections, and - transferring files.SSH proxyAs used in Evergreen, a method of allowing one or more Staff + transferring files.SSH proxyAs used in Evergreen, a method of allowing one or more Staff Clients to communicate with one or more Evergreen servers over an insecure network by sending data through a secure SSH tunnel. It also buffers and caches all data travelling to and from Staff - Clients to speed up access to resources on Evergreen servers.See Also SSH, tunneling, SSH tunnel.SSH tunnelAn encrypted data channel existing over an SSH network + Clients to speed up access to resources on Evergreen servers.See Also SSH, tunneling, SSH tunnel.SSH tunnelAn encrypted data channel existing over an SSH network connection. Used to securely transfer unencrypted data streams over - insecure networks.See Also SSH, tunneling.SSL CertificateAs used in Evergreen, it is a method of ensuring that Staff + insecure networks.See Also SSH, tunneling.SSL CertificateAs used in Evergreen, it is a method of ensuring that Staff Clients are able to connect to legitimate Evergreen servers.In general, it is a special electronic document used to guarantee authenticity of a digital message. 
Also known as a "public key", or "identity" or "digital" certificate. It combines an identity (of a person or an organization) and a unique public key to form a so-called digital signature, and is used to verify that the public key does, in fact, belong with that particular - identity.See Also PKI.TtunnelingAs used in Evergreen, it is a method of allowing Staff Clients + identity.See Also PKI.TtunnelingAs used in Evergreen, it is a method of allowing Staff Clients to securely connect to legitimate Evergreen servers.In general, it is a method of encapsulating data provided in one network protocol (the "delivery" protocol), within data in a different network protocol (the "tunneling" protocol). Used to provide a secure path and secure communications through an insecure or incompatible network. Can be used to bypass firewalls by communicating via a protocol the firewall normally blocks, but - "wrapped" inside a protocol that the firewall does not block.See Also SSH tunnel.UUbuntuA popular open-source operating system using the + "wrapped" inside a protocol that the firewall does not block.See Also SSH tunnel.UUbuntuA popular open-source operating system using the Linux kernel that was originally based on the Debian GNU/Linux operating system. More information is available at - http://www.ubuntu.com.See Also Debian.VVirtual PCA popular commercial package of virtualization software that + http://www.ubuntu.com.See Also Debian.VVirtual PCA popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It is installed on a Windows "host" operating system and allows other "guest" (typically including Linux and Windows) operating systems - to be loaded and executed.See Also Virtualization.VirtualBoxA popular commercial package of virtualization software that + to be loaded and executed.See Also Virtualization.VirtualBoxA popular commercial package of virtualization software that emulates the x86 microprocessor architecture.
It can be installed on Linux, Mac OS X, @@ -40707,7 +40731,7 @@ OFFSET 7; systems and allows other "guest" (typically including Linux and Windows) operating systems - to be loaded and executed.See Also Virtualization.VirtualizationA method of executing software in a special environment that + to be loaded and executed.See Also Virtualization.VirtualizationA method of executing software in a special environment that is partitioned or separated from the real underlying hardware and software resources. In typical usage, it allows a host operating system to encapsulate or emulate @@ -40716,7 +40740,7 @@ OFFSET 7; hosting environment. As used in Evergreen, it enables a copy of the Linux operating system running Evergreen software to execute within a - Windows environment.See Also VirtualBox, Virtual PC, VMware.VMwareA popular commercial package of virtualization software that + Windows environment.See Also VirtualBox, Virtual PC, VMware.VMwareA popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It can be installed on Linux, Mac OS X, @@ -40725,55 +40749,55 @@ OFFSET 7; and allows other "guest" (typically including Linux and Windows) operating systems - to be loaded and executed.See Also Virtualization.Volume BucketsVirtual “containers” to use in batch processing + to be loaded and executed.See Also Virtualization.Volume BucketsVirtual “containers” to use in batch processing of multiple volumes. They can be used to perform various - cataloging/holdings maintenance tasks in batch.WWineA popular open-source application that allows + cataloging/holdings maintenance tasks in batch.WWineA popular open-source application that allows Linux and Unix systems to run Windows executables. 
More information is available at - http://www.winehq.org/.XXMLThe eXtensible Markup Language, a subset of SGML; a set of + http://www.winehq.org/.XXMLThe eXtensible Markup Language, a subset of SGML; a set of rules for encoding information in a way that is both human- and machine-readable. It is primarily used to define documents but can also be used to define arbitrary data structures. It was originally - defined by the World Wide Web Consortium (W3C).XMPPThe open-standard communications protocol (based on XML) used + defined by the World Wide Web Consortium (W3C).XMPPThe open-standard communications protocol (based on XML) used for client-server message passing within Evergreen. It supports the concept of a consistent domain of message types that flow between software applications, possibly on different operating systems and architectures. More information is available - at http://xmpp.org.See Also Jabber, ejabberd.xpathThe XML Path Language, a query language based on a tree + at http://xmpp.org.See Also Jabber, ejabberd.xpathThe XML Path Language, a query language based on a tree representation of an XML document. It is used to programmatically select nodes from an XML document and to do minor computation involving strings, numbers and Boolean values. It allows you to identify parts of the XML document tree, to navigate around the tree, and to uniquely select nodes. The current version is "XPath 2.0". It was originally defined by the World Wide Web Consortium - (W3C).XULThe XML User Interface Language, a specialized interface + (W3C).XULThe XML User Interface Language, a specialized interface language that allows building cross-platform applications that drive Mozilla-based browsers such as Firefox.
More information is available at - https://developer.mozilla.org/en/XUL.xulrunnerA specialized run-time application environment that provides + https://developer.mozilla.org/en/XUL.xulrunnerA specialized run-time application environment that provides support for installing, upgrading and uninstalling XUL applications. It operates with Mozilla-based applications such as the Firefox browser. More information is available at - https://developer.mozilla.org/en/XULRunner.See Also XUL.YYAZA programmers’ toolkit supporting the development of - Z39.50 / SRW / SRU clients and servers.See Also SRU, SRW, Z39.50.yaz-clientA Z39.50/SRU client for connecting to YAZ servers. + https://developer.mozilla.org/en/XULRunner.See Also XUL.YYAZA programmers’ toolkit supporting the development of + Z39.50 / SRW / SRU clients and servers.See Also SRU, SRW, Z39.50.yaz-clientA Z39.50/SRU client for connecting to YAZ servers. More information is available at - http://www.indexdata.com/yaz/doc/yaz-client.htmlSee Also SRU.ZZ39.50An international standard client–server protocol for + http://www.indexdata.com/yaz/doc/yaz-client.htmlSee Also SRU.ZZ39.50An international standard client–server protocol for communication between computer systems, primarily library and information related systems.See Also SRU. IndexIndex Report errors in this documentation using Launchpad. Index Report any errors in this documentation using Launchpad. 
- IndexIndex + IndexIndex -Aaction triggers, , , , , creating, event definitions, hooks, processing, reactors, runner, validators, added contentSyndetics, advanced searchgroup formats and editions, limit to available, MARC expert search, quick search, search filter, search library, sort criteria, ANSI, Apache, , , logs, Apache modules, authority record ingest, autogen, , , , Automated Circulation System, Automated Material Handling (AMH), , Bbarcode scanner, BibTemplate, , , billing types, adding, deleting, editing, bills and payments, adding new grocery bills, bill history, forgiving bills, making change, making payments, refunds, voiding bills, bookbags, booking reservationcancelling, capturing items, creating, creating a pull list, making a cataloged item bookable, non-bibliographic items, picking up, returning, setting booking permissions, boolean, AND operator, BRE JSON, Ccash reports, catalogingtemplates, cataloguingadding and editing items, adding bibliographic records, adding holdings, , buckets, removing records, copy alerts, copy buckets, copy notes, creating new MARC records, deleting records, editing MARC records, electronic resources, importing records, locating records, MARC editor, merging bibliographic records, overlaying MARC records, record buckets, batch edit, shared buckets, templates, , uploading MARC files, circulationbackdated check in, barring patrons, check in, circulating items, claimed returned, cloning users, damaged items, due dates, extending account expiration date, in-house use, item alerts, item status, lost items, lost library cards, merging patron records, missing items, non-cataloged items, patron alerts, patron notes, patron records, pre-cataloged items, registering new patrons, renewelEditing an item's due date, resetting patron password, updating patron information, circulation modifiers, , adding, deleting, editing, closed dates editor, due dates, fines, comma delimited files, configuration filesApache, , , eg.conf, 
eg_vhost.conf, fm_IDL.xml, oils_sip.xml, , , oils_web.xml, opensrf.xml, , , , , opensrf_core.xml, , , , , startup.pl, copy buckets, (see also item buckets)copy locations editor, copy stat cats, copy status, , CQL, CSS3, , cstore, Ddatabases, backing up, PostgreSQL, , , , , , , , , , , , Digital Geospatial Metadata (CSDGM), directoriesbacking up, DocBook, Documentation Interest Group (DIG), , Dojo toolkit, , domain name, Dublin Core, , Eejabberd, , Ejabberd, environment variablePATH, PERL5LIB, Evergreen Interface Definition Language (IDL), action element, class element, context_field element, field element, fields element, permacrud element, Evergreen software dependencies, FFGDC, field documentationadministering field documentation, patron field documentation, Fieldmapper, firewall, formatsaudiobooks, books, electronic resources, large print, music, video, GGNU General Public License, Google Books, group penalty thresholds, creating local penalty thresholds, Hhardwareclustering, holds, canceling, capturing, FIFO, , levels, managing, placing holds, pull list, retargeting, shelf list, transferring, viewing holds, Holds, hours of operationdue dates policy, overdue fines policy, setting, , HTTPtranslator, IIn-Databasehold testing, record merging, in-databasecirculation, IP Address, IRC chat, item buckets, (see also copy buckets)JJabber, jabber, (see also XMPP)Java, JavaScript, , , , JSON, , SELECT clauses, Llabelspocket, spine, languages, disabling a localization, enabling a localization, library addresses setting, , library settings editor, Linux, CentOS, commands, Debian, , , , , , Gentoo, RHEL, Ubuntu, , , , , , Wine, localization and languages, logs, Apache, gateway.log, Log Rotate, logging levels, managing, osrfsys.log, router, srfsh.log, Mmagnetic media, mailing lists, , MARC, , fixed field 008, MARC leader, MARCXML, , , memcached, , , migratingadding metarecords, batch updating patrons, bibliograohic records, exporting bibliographic recordsMARC, holdings, import 
into PostgreSQL, importing authority records, importing bibliographic records, importing data, patrons, MODS, , My Accountaccount summary, bookbags, fines, first login, items on hold, logging in, password reset, preferences, Nnetwork address, (see also ip address)non-catalogued type editor, noticesoverdue, notifications, hold, overdueactivating action triggers, creating using action triggers, creating using the configuration file, preduecreating using action triggers, creating using the configuration file, Ooffline mode, offline transactions, Check In, (see also circulation)check out, (see also circulation)create a session, exceptions, in house use, (see also circulation)patron registration, (see also circulation)process transactions, renew, (see also circulation)upload workstation transactions to a session, uploading, OPAC, , added content, Google Books, customizing, changing the color scheme, details page, text and labels, testing, OpenSRF, , , , , , Communication Flows, configure, download, installation, services, , , organization unitsadding, deleting, editing, volumes - and copies, Organization Unitsopac visible, organizational unit types, , adding, deleting, editing, organizational units, , Ppatron stat cats, pcrud, penalty threshholds, Perl, , CPAN, , , permissions, , group, , user, PKI, PostgreSQL, printers, proximity map, Python, , RRAID, receipt template editor, , record buckets, RefWorks, reportsstarting, , Resource Sharing Cooperative of Evergreen Libraries (RSCEL), rsync, SSAN, searchauthor, keyword, series, subject, title, search box, search methodologyorder of results, stemming, truncation, search relevancy, , combining index weighting and match point weighting, indexed-field weighting, keyword search adjusting, match point weighting, search resultsavailability, caching, group formats and editions, limit to available, related subjects, authors, and series, sort criteria, , searching parameterslocation, request types, sorting criteria, security, self 
check, overdue, SelfCheck, , Simple2ZOOM, SIP, , , , , , slot formatter, slot marker, sql, , tables, srfsh, , , , srfsh (command-line OpenSRF client), SRU, , SRW, SSH, proxy, Putty, tunneling, SSL, certificates, staff accounts, permissions, , working location, staff client, assigning workstation names, automatic updates, activating the update server, building, building with clients, building, advanced build options, building on the server, copy/paste, customizing, , labels and messages, fonts, installing, Mac OS, Windows, keyboard shortcuts, logging into, navigating, printer settings, runninglinux, running through an SSH tunnel, sounds, testing, using wine to install on Linux, XULRunner, staging table, statistical categories editor, SuperCat, formats, adding, customizing, ISBNs, recent records, records, surveys, syslog, , syslog-NG, Ttelnet, transit items, aborting, cancelling, receiving, rlist, tunneling, (see also SSH tunneling)UunAPI, Unicode, VVandelay, Version Control Systemgit, Subversion, , virtual image, virtualization, virtualization softwareVirtual PC, VirtualBox, VMware, volume buckets, Wweb serverApache, , , , , wiki, XXML, XMPP, , , (see also jabber)xpath, xUL, XULRunner, removing previous versions, Yyaz, , , ZZ39.50, , , , , , , importing records, Unicode, Zotero, +Aaction triggers, , , , , creating, event definitions, hooks, processing, reactors, runner, validators, added contentSyndetics, advanced searchgroup formats and editions, limit to available, MARC expert search, quick search, search filter, search library, sort criteria, ANSI, Apache, , , logs, Apache modules, authority record ingest, autogen, , , , Automated Circulation System, Automated Material Handling (AMH), , Bbarcode scanner, BibTemplate, , , billing types, adding, deleting, editing, bills and payments, adding new grocery bills, bill history, forgiving bills, making change, making payments, refunds, voiding bills, bookbags, booking reservationcancelling, capturing items, creating, 
creating a pull list, making a cataloged item bookable, non-bibliographic items, picking up, returning, setting booking permissions, boolean, AND operator, BRE JSON, Ccash reports, catalogingtemplates, cataloguingadding and editing items, adding bibliographic records, adding holdings, , buckets, removing records, copy alerts, copy buckets, copy notes, creating new MARC records, deleting records, editing MARC records, electronic resources, importing records, locating records, MARC editor, merging bibliographic records, overlaying MARC records, record buckets, batch edit, shared buckets, templates, , uploading MARC files, circulationbackdated check in, barring patrons, check in, circulating items, claimed returned, cloning users, damaged items, due dates, extending account expiration date, in-house use, item alerts, item status, lost items, lost library cards, merging patron records, missing items, non-cataloged items, patron alerts, patron notes, patron records, pre-cataloged items, registering new patrons, renewalEditing an item's due date, resetting patron password, updating patron information, circulation modifiers, , adding, deleting, editing, closed dates editor, due dates, fines, comma delimited files, configuration filesApache, , , eg.conf, eg_vhost.conf, fm_IDL.xml, oils_sip.xml, , , oils_web.xml, opensrf.xml, , , , , opensrf_core.xml, , , , , startup.pl, copy buckets, (see also item buckets)copy locations editor, copy stat cats, copy status, , CQL, CSS3, , cstore, Ddatabases, backing up, PostgreSQL, , , , , , , , , , , , Digital Geospatial Metadata (CSDGM), directoriesbacking up, DocBook, Documentation Interest Group (DIG), , Dojo toolkit, , domain name, Dublin Core, , Eejabberd, , Ejabberd, environment variablePATH, PERL5LIB, Evergreen Interface Definition Language (IDL), action element, class element, context_field element, field element, fields element, permacrud element, Evergreen software dependencies, FFGDC, field documentationadministering field
documentation, patron field documentation, Fieldmapper, firewall, formatsaudiobooks, books, electronic resources, large print, music, video, GGNU General Public License, Google Books, group penalty thresholds, creating local penalty thresholds, Hhardwareclustering, holds, canceling, capturing, FIFO, , levels, managing, placing holds, pull list, retargeting, shelf list, transferring, viewing holds, Holds, hours of operationdue dates policy, overdue fines policy, setting, , HTTPtranslator, IIn-Databasehold testing, record merging, in-databasecirculation, IP Address, IRC chat, item buckets, (see also copy buckets)JJabber, jabber, (see also XMPP)Java, JavaScript, , , , JSON, , SELECT clauses, Llabelspocket, spine, languages, disabling a localization, enabling a localization, library addresses setting, , library settings editor, Linux, CentOS, commands, Debian, , , , , , Gentoo, RHEL, Ubuntu, , , , , , Wine, localization and languages, logs, Apache, gateway.log, Log Rotate, logging levels, managing, osrfsys.log, router, srfsh.log, Mmagnetic media, mailing lists, , MARC, , fixed field 008, MARC leader, MARCXML, , , memcached, , , migratingadding metarecords, batch updating patrons, bibliographic records, exporting bibliographic recordsMARC, holdings, import into PostgreSQL, importing authority records, importing bibliographic records, importing data, patrons, MODS, , My Accountaccount summary, bookbags, fines, first login, items on hold, logging in, password reset, preferences, Nnetwork address, (see also ip address)non-catalogued type editor, noticesoverdue, notifications, hold, overdueactivating action triggers, creating using action triggers, creating using the configuration file, preduecreating using action triggers, creating using the configuration file, Ooffline mode, offline transactions, Check In, (see also circulation)check out, (see also circulation)create a session, exceptions, in house use, (see also circulation)patron registration, (see also
circulation)process transactions, renew, (see also circulation)upload workstation transactions to a session, uploading, OPAC, , added content, Google Books, customizing, changing the color scheme, details page, text and labels, testing, OpenSRF, , , , , , Communication Flows, configure, download, installation, services, , , organization unitsadding, deleting, editing, volumes + and copies, Organization Unitsopac visible, organizational unit types, , adding, deleting, editing, organizational units, , Ppatron stat cats, pcrud, penalty thresholds, Perl, , CPAN, , , permissions, , group, , user, PKI, PostgreSQL, printers, proximity map, Python, , RRAID, receipt template editor, , record buckets, RefWorks, reportsstarting, , Resource Sharing Cooperative of Evergreen Libraries (RSCEL), rsync, SSAN, searchauthor, keyword, series, subject, title, search box, search methodologyorder of results, stemming, truncation, search relevancy, , combining index weighting and match point weighting, indexed-field weighting, keyword search adjusting, match point weighting, search resultsavailability, caching, group formats and editions, limit to available, related subjects, authors, and series, sort criteria, , searching parameterslocation, request types, sorting criteria, security, self check, overdue, SelfCheck, , Simple2ZOOM, SIP, , , , , , slot formatter, slot marker, sql, , tables, srfsh, , , , srfsh (command-line OpenSRF client), SRU, , SRW, SSH, proxy, Putty, tunneling, SSL, certificates, staff accounts, permissions, , working location, staff client, assigning workstation names, automatic updates, activating the update server, building, building with clients, building, advanced build options, building on the server, copy/paste, customizing, , labels and messages, fonts, installing, Mac OS, Windows, keyboard shortcuts, logging into, navigating, printer settings, runninglinux, running through an SSH tunnel, sounds, testing, using wine to install on Linux, XULRunner, staging table,
statistical categories editor, SuperCat, formats, adding, customizing, ISBNs, recent records, records, surveys, syslog, , syslog-NG, Ttelnet, transit items, aborting, cancelling, receiving, rlist, tunneling, (see also SSH tunneling)UunAPI, Unicode, VVandelay, Version Control Systemgit, Subversion, , virtual image, virtualization, virtualization softwareVirtual PC, VirtualBox, VMware, volume buckets, Wweb serverApache, , , , , wiki, XXML, XMPP, , , (see also jabber)xpath, XUL, XULRunner, removing previous versions, Yyaz, , , ZZ39.50, , , , , , , importing records, Unicode, Zotero, diff --git a/2.0/admin/admin-lsa_2.0.xml b/2.0/admin/admin-lsa_2.0.xml index 7971281327..4a59bccb59 100644 --- a/2.0/admin/admin-lsa_2.0.xml +++ b/2.0/admin/admin-lsa_2.0.xml @@ -581,8 +581,7 @@ Closed Dates Editor closed dates editor - These dates are in addition to your regular weekly closed days (see ).    Both regular closed days and those entered in the + These dates are in addition to your regular weekly closed days. Both regular closed days and those entered in the Closed Dates Editor affect due dates and fines: @@ -881,9 +880,8 @@ Default Item Price - Replacement charge for lost items if price is unset in the - Copy Editor - . Does not apply if item price is set to $0 + Replacement charge for lost items if price is unset in the Copy Editor. + Does not apply if item price is set to $0 Number (dollars) @@ -910,7 +908,7 @@ GUI: Above-Tab Button Bar - If true the staff client button bar + If true the staff client button bar appears by default on all workstations registered to your library; staff can override this setting at each login. diff --git a/2.0/pdf/temp.fo b/2.0/pdf/temp.fo index 1b54299857..d87c8b3393 100644 --- a/2.0/pdf/temp.fo +++ b/2.0/pdf/temp.fo @@ -1,24 +1,24 @@ -Evergreen 2.0 Documentation - Draft VersionDocumentation Interest GroupDocBook XSL Stylesheets with Apache FOPEvergreen 2.0 DocumentationTable of ContentsPart I. IntroductionChapter 1. About EvergreenChapter 2.
2.0 Feature ListPart II. Public Access CatalogPart III. Core Staff TasksChapter 3. Using the Booking ModuleCreating a Booking ReservationCancelling a ReservationCreating a Pull ListCapturing Items for ReservationsPicking Up ReservationsReturning ReservationsChapter 4. The Acquisitions Module (from GPLS)Brief RecordsCancel/suspend acquisitionsClaim itemsExport Single Attribute ListFundsInvoice acquisitionsLine ItemsLoad Bib Records and Items Into the CatalogPatron RequestsPurchase OrdersReceivingSearchingSelection ListsView/Place OrdersChapter 5. Acquisitions Module Processes - KCLSOrderingReceiving Print MaterialsReceiving Non-print MaterialsChapter 6. The Serials ModuleSerial Control View, Alternate Serial Control View, and MFHD Records: A SummaryCopy Templates for SerialsAlternate Serial Control ViewSerial Control ViewMFHD RecordChapter 7. Alternate Serial ControlPart IV. AdministrationChapter 8. Server-side Installation of Evergreen SoftwareInstalling Server-Side SoftwareInstalling OpenSRF 1.6.3 On Ubuntu or DebianInstalling Evergreen 2.0 On Ubuntu or DebianStarting EvergreenTesting Your Evergreen InstallationPost-Installation ChoresRemove temporary Apache configuration changesConfigure a permanent SSL key(OPTIONAL) IP-Redirection(OPTIONAL) Set Up Support For ReportsChapter 9. Upgrading Evergreen to 2.0Backing Up DataUpgrading OpenSRF to 1.6.3Upgrade Evergreen from 1.6.1 to 2.0Restart Evergreen and TestUpgrading PostgreSQL from 8.2 to 8.4 (if required)Chapter 10. Migrating DataMigrating Bibliographic RecordsMigrating Bibliographic Records Using the ESI Migration ToolsAdding Copies to Bibliographic RecordsMigrating Patron DataRestoring your Evergreen Database to an Empty StateExporting Bibliographic Records into MARC filesImporting Authority RecordsChapter 11. Server Operations and MaintenanceStarting, Stopping and RestartingBacking UpSecurityManaging Log FilesInstalling PostgreSQL from SourceConfiguring PostgreSQLChapter 12. 
SIP ServerInstalling the SIP ServerSIP CommunicationChapter 13. SRU and Z39.50 ServerTesting SRU with yaz-clientSetting up Z39.50 server supportChapter 14. Troubleshooting System ErrorsChapter 15. Action TriggersEvent DefinitionsHooksReactorsValidatorsProcessing Action TriggersChapter 16. Booking Module AdministrationMake a Cataloged Item Bookable in AdvanceMake a Cataloged Item Bookable On the FlyCreate a Bookable Status for Non-Bibliographic ItemsSetting Booking PermissionsChapter 17. Administration Functions in the Acquisitions ModuleCurrency TypesExchange RatesFunding SourcesFund TagsFundsProvidersEDIClaimingInvoice menusInvoice payment methodDistribution FormulasLine item featuresLine Item MARC Attribute DefinitionsCancel/Suspend reasonsAcquisitions Permissions in the Admin modulePart V. ReportsPart VI. Third Party System IntegrationPart VII. DevelopmentChapter 18. Evergreen File Structure and Configuration FilesEvergreen Directory StructureEvergreen Configuration FilesChapter 19. Customizing the Staff ClientChanging Colors and ImagesChanging Labels and MessagesChanging the Search SkinChapter 20. Customizing the OPACChange the Color Schemecustomizing Opac Text and LabelsLogo ImagesAdded ContentCustomizing the Results PageCustomizing the Details PageBibTemplateCustomizing the SlimpacIntegrating a Evergreen Search Form on a Web PageChapter 21. OpenSRFIntroducing OpenSRFWriting an OpenSRF ServiceOpenSRF Communication FlowsEvergreen-specific OpenSRF servicesChapter 22. Evergreen Data Models and AccessExploring the Database SchemaDatabase access methodsEvergreen Interface Definition Language (IDL)open-ils.cstore data access interfacesopen-ils.pcrud data access interfacesTransaction and savepoint controlAdding an IDL entry for ResolverResolverChapter 23. 
Introduction to SQL for Evergreen AdministratorsIntroduction to SQL DatabasesBasic SQL queriesAdvanced SQL queriesUnderstanding query performance with EXPLAINInserting, updating, and deleting dataQuery requestsChapter 24. JSON QueriesChapter 25. SuperCatUsing SuperCatAdding new SuperCat FormatsCustomizing SuperCat FormatsPart VIII. AppendicesChapter 26. Database SchemaSchema acqSchema actionSchema action_triggerSchema actorSchema assetSchema auditorSchema authoritySchema biblioSchema bookingSchema configSchema containerSchema extend_reporterSchema metabibSchema moneySchema offlineSchema permissionSchema publicSchema querySchema reporterSchema searchSchema serialSchema stagingSchema statsSchema vandelayAppendix A. About this DocumentationAbout the Documentation Interest Group (DIG)How to ParticipateAppendix B. Getting More InformationGlossaryIndex +Evergreen 2.0 Documentation - Draft VersionDocumentation Interest GroupDocBook XSL Stylesheets with Apache FOPEvergreen 2.0 DocumentationTable of ContentsPart I. IntroductionChapter 1. About EvergreenChapter 2. 2.0 Feature ListPart II. Public Access CatalogPart III. Core Staff TasksChapter 3. Using the Booking ModuleCreating a Booking ReservationCancelling a ReservationCreating a Pull ListCapturing Items for ReservationsPicking Up ReservationsReturning ReservationsChapter 4. The Acquisitions Module (from GPLS)Brief RecordsCancel/suspend acquisitionsClaim itemsExport Single Attribute ListFundsInvoice acquisitionsLine ItemsLoad Bib Records and Items Into the CatalogPatron RequestsPurchase OrdersReceivingSearchingSelection ListsView/Place OrdersChapter 5. Acquisitions Module Processes - KCLSOrderingReceiving Print MaterialsReceiving Non-print MaterialsChapter 6. The Serials ModuleSerial Control View, Alternate Serial Control View, and MFHD Records: A SummaryCopy Templates for SerialsAlternate Serial Control ViewSerial Control ViewMFHD RecordChapter 7. Alternate Serial ControlPart IV. AdministrationChapter 8. 
Server-side Installation of Evergreen SoftwareInstalling Server-Side SoftwareInstalling OpenSRF 1.6.3 On Ubuntu or DebianInstalling Evergreen 2.0 On Ubuntu or DebianStarting EvergreenTesting Your Evergreen InstallationPost-Installation ChoresRemove temporary Apache configuration changesConfigure a permanent SSL key(OPTIONAL) IP-Redirection(OPTIONAL) Set Up Support For ReportsChapter 9. Upgrading Evergreen to 2.0Backing Up DataUpgrading OpenSRF to 1.6.3Upgrade Evergreen from 1.6.1 to 2.0Restart Evergreen and TestUpgrading PostgreSQL from 8.2 to 8.4 (if required)Chapter 10. Migrating DataMigrating Bibliographic RecordsMigrating Bibliographic Records Using the ESI Migration ToolsAdding Copies to Bibliographic RecordsMigrating Patron DataRestoring your Evergreen Database to an Empty StateExporting Bibliographic Records into MARC filesImporting Authority RecordsChapter 11. Server Operations and MaintenanceStarting, Stopping and RestartingBacking UpSecurityManaging Log FilesInstalling PostgreSQL from SourceConfiguring PostgreSQLChapter 12. SIP ServerInstalling the SIP ServerSIP CommunicationChapter 13. SRU and Z39.50 ServerTesting SRU with yaz-clientSetting up Z39.50 server supportChapter 14. Troubleshooting System ErrorsChapter 15. Local Administration MenuOverviewReceipt Template EditorGlobal Font and Sound SettingsPrinter Settings EditorClosed Dates EditorCopy Locations EditorLibrary Settings EditorNon-Catalogued Type EditorGroup Penalty ThresholdsStatistical Categories EditorField DocumentationSurveysCash ReportsChapter 16. Action TriggersEvent DefinitionsHooksReactorsValidatorsProcessing Action TriggersChapter 17. Booking Module AdministrationMake a Cataloged Item Bookable in AdvanceMake a Cataloged Item Bookable On the FlyCreate a Bookable Status for Non-Bibliographic ItemsSetting Booking PermissionsChapter 18. 
Administration Functions in the Acquisitions ModuleCurrency TypesExchange RatesFunding SourcesFund TagsFundsProvidersEDIClaimingInvoice menusInvoice payment methodDistribution FormulasLine item featuresLine Item MARC Attribute DefinitionsCancel/Suspend reasonsAcquisitions Permissions in the Admin moduleChapter 19. Languages and LocalizationEnabling and Disabling LanguagesPart V. ReportsChapter 20. Starting and Stopping the Reporter DaemonPart VI. Third Party System IntegrationPart VII. DevelopmentChapter 21. Evergreen File Structure and Configuration FilesEvergreen Directory StructureEvergreen Configuration FilesChapter 22. Customizing the Staff ClientChanging Colors and ImagesChanging Labels and MessagesChanging the Search SkinChapter 23. Customizing the OPACChange the Color Schemecustomizing Opac Text and LabelsLogo ImagesAdded ContentCustomizing the Results PageCustomizing the Details PageBibTemplateCustomizing the SlimpacIntegrating a Evergreen Search Form on a Web PageChapter 24. OpenSRFIntroducing OpenSRFWriting an OpenSRF ServiceOpenSRF Communication FlowsEvergreen-specific OpenSRF servicesChapter 25. Evergreen Data Models and AccessExploring the Database SchemaDatabase access methodsEvergreen Interface Definition Language (IDL)open-ils.cstore data access interfacesopen-ils.pcrud data access interfacesTransaction and savepoint controlAdding an IDL entry for ResolverResolverChapter 26. Introduction to SQL for Evergreen AdministratorsIntroduction to SQL DatabasesBasic SQL queriesAdvanced SQL queriesUnderstanding query performance with EXPLAINInserting, updating, and deleting dataQuery requestsChapter 27. JSON QueriesChapter 28. SuperCatUsing SuperCatAdding new SuperCat FormatsCustomizing SuperCat FormatsPart VIII. AppendicesChapter 29. 
Database SchemaSchema acqSchema actionSchema action_triggerSchema actorSchema assetSchema auditorSchema authoritySchema biblioSchema bookingSchema configSchema containerSchema extend_reporterSchema metabibSchema moneySchema offlineSchema permissionSchema publicSchema querySchema reporterSchema searchSchema serialSchema stagingSchema statsSchema vandelayAppendix A. About this DocumentationAbout the Documentation Interest Group (DIG)How to ParticipateAppendix B. Getting More InformationGlossaryIndex Report errors in this documentation using Launchpad. Report any errors in this documentation using Launchpad. - Evergreen 2.0 DocumentationDraft VersionDocumentation Interest GroupEvergreen 2.0 Documentation: Draft VersionDocumentation Interest GroupCopyright © 2011 Evergreen Community + Evergreen 2.0 DocumentationDraft VersionDocumentation Interest GroupEvergreen 2.0 Documentation: Draft VersionDocumentation Interest GroupCopyright © 2011 Evergreen Community - This document was updated 2011-03-23. + This document was updated 2011-03-26. Evergreen 2.0 DocumentationEvergreen 2.0 Documentation Report errors in this documentation using Launchpad. Evergreen 2.0 Documentation Report any errors in this documentation using Launchpad. Evergreen 2.0 DocumentationTable of ContentsI. Introduction 1. About Evergreen 2. 2.0 Feature List II. Public Access Catalog III. Core Staff Tasks 3. Using the Booking Module Creating a Booking Reservation Cancelling a Reservation Creating a Pull List Capturing Items for Reservations Picking Up Reservations Returning Reservations 4. The Acquisitions Module (from GPLS) Brief Records Cancel/suspend acquisitions Claim items Export Single Attribute List Funds Invoice acquisitions Line Items Load Bib Records and Items Into the Catalog Patron Requests Purchase Orders Receiving Searching Selection Lists View/Place Orders 5. Acquisitions Module Processes - KCLS Ordering Receiving Print Materials Receiving Non-print Materials 6. 
The Serials Module Serial Control View, Alternate Serial Control View, and MFHD Records: A Summary Copy Templates for Serials Alternate Serial Control View Serial Control View MFHD Record 7. Alternate Serial Control IV. Administration 8. Server-side Installation of Evergreen Software Installing Server-Side Software Installing OpenSRF 1.6.3 On Ubuntu or Debian Installing Evergreen 2.0 On Ubuntu or - Debian Starting Evergreen Testing Your Evergreen Installation Post-Installation Chores 9. Upgrading Evergreen to 2.0 Backing Up Data Upgrading OpenSRF to 1.6.3 Upgrade Evergreen from 1.6.1 to 2.0 Restart Evergreen and Test Upgrading PostgreSQL from 8.2 to 8.4 (if required) 10. Migrating Data Migrating Bibliographic Records Migrating Bibliographic Records Using the ESI Migration Tools Adding Copies to Bibliographic Records Migrating Patron Data Restoring your Evergreen Database to an Empty State Exporting Bibliographic Records into MARC files Importing Authority Records 11. Server Operations and Maintenance Starting, Stopping and Restarting Backing Up Security Managing Log Files Installing PostgreSQL from Source Configuring PostgreSQL 12. SIP Server Installing the SIP Server SIP Communication 13. SRU and Z39.50 Server Testing SRU with yaz-client Setting up Z39.50 server support 14. Troubleshooting System Errors 15. Action Triggers Event Definitions Hooks Reactors Validators Processing Action Triggers 16. Booking Module Administration Make a Cataloged Item Bookable in Advance Make a Cataloged Item Bookable On the Fly Create a Bookable Status for Non-Bibliographic Items Setting Booking Permissions 17. Administration Functions in the Acquisitions Module Currency Types Exchange Rates Funding Sources Fund Tags Funds Providers EDI Claiming Invoice menus Invoice payment method Distribution Formulas Line item features Line Item MARC Attribute Definitions Cancel/Suspend reasons Acquisitions Permissions in the Admin module V. Reports VI. Third Party System Integration VII. 
Development 18. Evergreen File Structure and Configuration Files Evergreen Directory Structure Evergreen Configuration Files 19. Customizing the Staff Client Changing Colors and Images Changing Labels and Messages Changing the Search Skin 20. Customizing the OPAC Change the Color Scheme customizing Opac Text and Labels Logo Images Added Content Customizing the Results Page Customizing the Details Page BibTemplate Customizing the Slimpac Integrating a Evergreen Search Form on a Web Page 21. OpenSRF Introducing OpenSRF Writing an OpenSRF Service OpenSRF Communication Flows Evergreen-specific OpenSRF services 22. Evergreen Data Models and Access Exploring the Database Schema Database access methods Evergreen Interface Definition Language (IDL) open-ils.cstore data access interfaces open-ils.pcrud data access interfaces Transaction and savepoint control Adding an IDL entry for ResolverResolver 23. Introduction to SQL for Evergreen Administrators Introduction to SQL Databases Basic SQL queries Advanced SQL queries Understanding query performance with EXPLAIN Inserting, updating, and deleting data Query requests 24. JSON Queries 25. SuperCat Using SuperCat Adding new SuperCat Formats Customizing SuperCat Formats VIII. Appendices 26. Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema query Schema reporter Schema search Schema serial Schema staging Schema stats Schema vandelay A. About this Documentation About the Documentation Interest Group (DIG) How to Participate B. Getting More Information Glossary Index Evergreen 2.0 DocumentationEvergreen 2.0 Documentation + Debian Starting Evergreen Testing Your Evergreen Installation Post-Installation Chores 9. 
Upgrading Evergreen to 2.0 Backing Up Data Upgrading OpenSRF to 1.6.3 Upgrade Evergreen from 1.6.1 to 2.0 Restart Evergreen and Test Upgrading PostgreSQL from 8.2 to 8.4 (if required) 10. Migrating Data Migrating Bibliographic Records Migrating Bibliographic Records Using the ESI Migration Tools Adding Copies to Bibliographic Records Migrating Patron Data Restoring your Evergreen Database to an Empty State Exporting Bibliographic Records into MARC files Importing Authority Records 11. Server Operations and Maintenance Starting, Stopping and Restarting Backing Up Security Managing Log Files Installing PostgreSQL from Source Configuring PostgreSQL 12. SIP Server Installing the SIP Server SIP Communication 13. SRU and Z39.50 Server Testing SRU with yaz-client Setting up Z39.50 server support 14. Troubleshooting System Errors 15. Local Administration Menu Overview Receipt Template Editor Global Font and Sound Settings Printer Settings Editor Closed Dates Editor Copy Locations Editor Library Settings Editor Non-Catalogued Type Editor Group Penalty Thresholds Statistical Categories Editor Field Documentation Surveys Cash Reports 16. Action Triggers Event Definitions Hooks Reactors Validators Processing Action Triggers 17. Booking Module Administration Make a Cataloged Item Bookable in Advance Make a Cataloged Item Bookable On the Fly Create a Bookable Status for Non-Bibliographic Items Setting Booking Permissions 18. Administration Functions in the Acquisitions Module Currency Types Exchange Rates Funding Sources Fund Tags Funds Providers EDI Claiming Invoice menus Invoice payment method Distribution Formulas Line item features Line Item MARC Attribute Definitions Cancel/Suspend reasons Acquisitions Permissions in the Admin module 19. Languages and Localization Enabling and Disabling Languages V. Reports 20. Starting and Stopping the Reporter Daemon VI. Third Party System Integration VII. Development 21. 
Evergreen File Structure and Configuration Files Evergreen Directory Structure Evergreen Configuration Files 22. Customizing the Staff Client Changing Colors and Images Changing Labels and Messages Changing the Search Skin 23. Customizing the OPAC Change the Color Scheme customizing Opac Text and Labels Logo Images Added Content Customizing the Results Page Customizing the Details Page BibTemplate Customizing the Slimpac Integrating a Evergreen Search Form on a Web Page 24. OpenSRF Introducing OpenSRF Writing an OpenSRF Service OpenSRF Communication Flows Evergreen-specific OpenSRF services 25. Evergreen Data Models and Access Exploring the Database Schema Database access methods Evergreen Interface Definition Language (IDL) open-ils.cstore data access interfaces open-ils.pcrud data access interfaces Transaction and savepoint control Adding an IDL entry for ResolverResolver 26. Introduction to SQL for Evergreen Administrators Introduction to SQL Databases Basic SQL queries Advanced SQL queries Understanding query performance with EXPLAIN Inserting, updating, and deleting data Query requests 27. JSON Queries 28. SuperCat Using SuperCat Adding new SuperCat Formats Customizing SuperCat Formats VIII. Appendices 29. Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema query Schema reporter Schema search Schema serial Schema staging Schema stats Schema vandelay A. About this Documentation About the Documentation Interest Group (DIG) How to Participate B. Getting More Information Glossary Index Evergreen 2.0 DocumentationEvergreen 2.0 Documentation Report errors in this documentation using Launchpad. Evergreen 2.0 Documentation Report any errors in this documentation using Launchpad. - Evergreen 2.0 DocumentationList of Tables8.1. 
Evergreen Software Dependencies 8.2. Sample XPath syntax for editing "opensrf_core.xml" 8.3. Sample XPath syntax for editing "opensrf_core.xml" 11.1. Suggested configuration values 15.1. Action Trigger Event Definitions 15.2. Hooks 15.3. Action Trigger Reactors 15.4. Action Trigger Validators 18.1. Evergreen Directory Structure 18.2. Key Evergreen Configuration Files 18.3. Useful Evergreen Scripts 23.1. Examples: database object names 23.2. Evergreen schema names 23.3. PostgreSQL data types used by Evergreen 23.4. Example: Some potential natural primary keys for a table of people 23.5. Example: Evergreen’s copy / call number / bibliographic record relationships A.1. Evergreen DIG Participants A.2. Past DIG Participants + Evergreen 2.0 DocumentationList of Tables8.1. Evergreen Software Dependencies 8.2. Sample XPath syntax for editing "opensrf_core.xml" 8.3. Sample XPath syntax for editing "opensrf_core.xml" 11.1. Suggested configuration values 16.1. Action Trigger Event Definitions 16.2. Hooks 16.3. Action Trigger Reactors 16.4. Action Trigger Validators 21.1. Evergreen Directory Structure 21.2. Key Evergreen Configuration Files 21.3. Useful Evergreen Scripts 26.1. Examples: database object names 26.2. Evergreen schema names 26.3. PostgreSQL data types used by Evergreen 26.4. Example: Some potential natural primary keys for a table of people 26.5. Example: Evergreen’s copy / call number / bibliographic record relationships A.1. Evergreen DIG Participants A.2. Past DIG Participants @@ -34,9 +34,9 @@ Chapter 1. About EvergreenChapter 1. About Evergreen Evergreen is an open source library automation software designed to meet the needs of the very smallest to the very largest libraries and consortia. Through its staff interface, it facilitates the management, cataloging, and circulation of library materials, and through its online public access interface it helps patrons find those materials. 
- The Evergreen software is freely licensed under the GNU General Public License, meaning that it is free to download, use, view, modify, and share. It has an active development and user community, as well as several companies offering migration, support, hosting, and development services. + The Evergreen software is freely licensed under the GNU General Public License, meaning that it is free to download, use, view, modify, and share. It has an active development and user community, as well as several companies offering migration, support, hosting, and development services. The community’s development requirements state that Evergreen must be: - •Stable, even under extreme load.•Robust, and capable of handling a high volume of transactions and simultaneous users.•Flexible, to accommodate the varied needs of libraries.•Secure, to protect our patrons’ privacy and data.•User-friendly, to facilitate patron and staff use of the system. + •Stable, even under extreme load.•Robust, and capable of handling a high volume of transactions and simultaneous users.•Flexible, to accommodate the varied needs of libraries.•Secure, to protect our patrons’ privacy and data.•User-friendly, to facilitate patron and staff use of the system. Evergreen, which first launched in 2006, now powers over 544 libraries of every type – public, academic, special, school, and even tribal and home libraries – in over a dozen countries worldwide. Chapter 2. 2.0 Feature ListChapter 2. 2.0 Feature List - CirculationCirculation + CirculationCirculation - Patron Registration EnhancementsPatron Registration Enhancements + Patron Registration EnhancementsPatron Registration Enhancements - •Zip code information can be added to a local table which will pre-populate the City/State fields during patron registration. 
•Added the ability to delete patrons by anonymizing the patron's personally identifiable data and purging the related data from other tables - without destroying information important to the integrity of the database as a whole (does not delete the actor.usr row). •Supports the ability to merge patrons when it is determined that more than one account exists for a single patron. There is an interface for + •Zip code information can be added to a local table which will pre-populate the City/State fields during patron registration. •Added the ability to delete patrons by anonymizing the patron's personally identifiable data and purging the related data from other tables + without destroying information important to the integrity of the database as a whole (does not delete the actor.usr row). •Supports the ability to merge patrons when it is determined that more than one account exists for a single patron. There is an interface for side-by-side comparison of the records; ability to delete addresses on merged accounts, delete cards and deactivate cards. Patrons with a status of in collections - are not eligible for merging. •Added quick links for staff to copy and paste patron address information. Information will paste in a standard mailing format. •Patrons with an address alert (invalid/bad address) will be displayed at the top of a duplicates list. •Patrons may create library accounts through the OPAC. These are set as pending until they can be confirmed by staff. The backend support for this - is done.•The system recognizes certain categories of patrons like Card Canceled, Deceased, etc. and will not place holds for these categories. •The patron record screen obscures certain information which can be considered sensitive. •Patrons may create library accounts through the OPAC. These are set as pending until they can be confirmed by staff. 
The backend support for this - is done.•Evergreen has the ability to automatically enter date, user, and location in messages and notes. + are not eligible for merging. •Added quick links for staff to copy and paste patron address information. Information will paste in a standard mailing format. •Patrons with an address alert (invalid/bad address) will be displayed at the top of a duplicates list. •Patrons may create library accounts through the OPAC. These are set as pending until they can be confirmed by staff. The backend support for this is done.•The system recognizes certain categories of patrons like Card Canceled, Deceased, etc. and will not place holds for these categories. •The patron record screen obscures certain information which can be considered sensitive. •Evergreen has the ability to automatically enter date, user, and location in messages and notes. - Item Checkout enhancementsItem Checkout enhancements + Item Checkout enhancementsItem Checkout enhancements - •During check-out, the patron's fines list appears first if there is a balance. If there is an alert, the alert page will show first, then fines - screen. •Evergreen has the ability to track hourly checkout stats. Self-check now operates by workstation and it's possible to gather statistics for checkouts - between staff workstations and self-check workstations. (There is a workstation registration wizard built into the self-check UI.) •Audible cue support, for successful and unsuccessful check-out, at self check-out stations has been added. This is customizable at the database level.•Evergreen has fast-add capability. During check-out, if an item is found not to be cataloged,you can pre-cat the item quickly, we've added other field - such as library, ISBN and circ modifier to this pre-cat. 
•The system supports sets or kits of items and has the ability to display the number of items and a list of descriptions. •Evergreen allows patrons to renew a title as long as they have not exceeded the allowed number of renewals and there are more available items + •During check-out, the patron's fines list appears first if there is a balance. If there is an alert, the alert page will show first, then fines + screen. •Evergreen has the ability to track hourly checkout stats. Self-check now operates by workstation and it's possible to gather statistics for checkouts + between staff workstations and self-check workstations. (There is a workstation registration wizard built into the self-check UI.) •Audible cue support for successful and unsuccessful check-out at self check-out stations has been added. This is customizable at the database level.•Evergreen has fast-add capability. During check-out, if an item is found not to be cataloged, you can pre-cat the item quickly; we've added other fields + such as library, ISBN and circ modifier to this pre-cat. •The system supports sets or kits of items and has the ability to display the number of items and a list of descriptions. •Evergreen allows patrons to renew a title as long as they have not exceeded the allowed number of renewals and there are more available items than there are unfrozen holds. This is an administration setting. - Self Check module enhancementsSelf Check module enhancements + Self Check module enhancementsSelf Check module enhancements - •In self check and SC, if a staff member checks out an item to a patron that is already checked out to that patron, the item will simply renew. This does - have configurable age-based parameters to prevent a double scan at checkout resulting in a renewal. •For self check receipts, receipts include the same information for renewal as checkouts and includes notes on items that failed to renew. •In the self-check UI, patrons can view holds and patron position vs. 
the number of circulating copies. •The self check-out station displays holds ready for pickup, then removes each hold as the item is checked out. •Evergreen supports the ability to pay fines with a credit card at self check-out stations. This requires the library to have a merchant account with a credit + •In self check and SC, if a staff member checks out an item to a patron that is already checked out to that patron, the item will simply renew. This does + have configurable age-based parameters to prevent a double scan at checkout resulting in a renewal. •For self check receipts, receipts include the same information for renewal as checkouts and includes notes on items that failed to renew. •In the self-check UI, patrons can view holds and patron position vs. the number of circulating copies. •The self check-out station displays holds ready for pickup, then removes each hold as the item is checked out. •Evergreen supports the ability to pay fines with a credit card at self check-out stations. This requires the library to have a merchant account with a credit card processor like Paypal. The current supported processors include Authorize.net and Paypal. - Item Check-in enhancementsItem Check-in enhancements + Item Check-in enhancementsItem Check-in enhancements - •Evergreen supports a set number of claim returns allowed; beyond that, additional claim returns require supervisor authorization. This is based off the + •Evergreen supports a set number of claim returns allowed; beyond that, additional claim returns require supervisor authorization. This is based off the claims returned counter. This only blocks another claim returned, and circulation can still occur. Also, there is a new permission to allow changing the claims - returned count for a patron. In order to use this feature, staff needs to have the appropriate permission. •There's a new calendar widget in the backdating function in the item check-in module. 
The system has the ability to select items already checked in + returned count for a patron. In order to use this feature, staff needs to have the appropriate permission. •There's a new calendar widget in the backdating function in the item check-in module. The system has the ability to select items already checked in and retroactively backdate those items, using a button with a calendar selector. Any fines resulting from original check-in are removed. When a check-in is backdated, - the item record retains both the actual date of check-in and backdate used. This information will display in the copy details interface. •When marking an item damaged, several library settings are checked to determine whether the patron should be charged the copy price and/or a processing fee. + the item record retains both the actual date of check-in and backdate used. This information will display in the copy details interface. •When marking an item damaged, several library settings are checked to determine whether the patron should be charged the copy price and/or a processing fee. Staff is prompted with this amount, which can either be applied or modified or canceled. - Holds EnhancementsHolds Enhancements + Holds EnhancementsHolds Enhancements - •Evergreen allows for hold slips to be customized to include any field from the patron record and/or item record, in any position and orientation on the - slip. Font, font size, and font weight are customizable. In addition, the hold slip may include a branch symbol (gif or jpg format) •Evergreen supports behind the desk indicator printing on holds slip for patrons who have this flag in their patron record. (This would be for libraries - with public hold shelves.) •In Evergreen, between the time that a hold is checked in and a hold is placed on the hold shelf, there is a configurable delay before the status is changed - to On Hold Shelf. 
•Evergreen has the ability to ensure that manually edited copies (either deleting or changing to a non-holdable status) will have their holds - retargeted. •In Evergreen, between the time that a hold is checked in and a hold is placed on the hold shelf, there is a configurable delay before the status is - changed to On Hold Shelf. •The system supports a Clear Hold Shelf process. First, it removes holds from items that have expired on the hold shelf, and generates a report (aka clear + •Evergreen allows for hold slips to be customized to include any field from the patron record and/or item record, in any position and orientation on the + slip. Font, font size, and font weight are customizable. In addition, the hold slip may include a branch symbol (gif or jpg format) •Evergreen supports behind the desk indicator printing on holds slip for patrons who have this flag in their patron record. (This would be for libraries + with public hold shelves.) •In Evergreen, between the time that a hold is checked in and a hold is placed on the hold shelf, there is a configurable delay before the status is changed + to On Hold Shelf. •Evergreen has the ability to ensure that manually edited copies (either deleting or changing to a non-holdable status) will have their holds + retargeted. •In Evergreen, between the time that a hold is checked in and a hold is placed on the hold shelf, there is a configurable delay before the status is + changed to On Hold Shelf. •The system supports a Clear Hold Shelf process. First, it removes holds from items that have expired on the hold shelf, and generates a report (aka clear hold shelf report) listing items to be cleared from hold shelf. Then staff can print the list, go out and physically pull the items off of the hold shelf. 
Next, staff scan the items in EG to either reset the items to the correct shelving location, capture the next hold, or put the items in transit to the correct owning location.
•Staff can extend pickup deadlines for holds.
•In the patron view in the SC (staff client), you can select multiple holds in actions for selected holds and choose to change the pickup location. Evergreen has the ability to change the pickup location for all of a patron's holds in a single process. Additionally, Evergreen has the ability to modify all holds attached to a bibliographic record, including the ability to change the hold expiration date. This functionality is covered with the current bib holds list interface.
•Evergreen allows patrons with specific permissions to place holds on items they have already checked out. All other patrons cannot. This works by warning the user that the item is already checked out to them and, if they have the permission, the system gives them the ability to override.
•The system supports the ability to place holds on titles with status on-order. For additional information, see the Acquisitions notes later in this document.
•Evergreen has the ability to designate specific org units that will not trigger a hold upon check-in.
•Evergreen added logic to hold targeting to skip branches that are closed at the time of hold placement and for x time afterward (x being a set interval). This is to prevent the hold being targeted at branches that will be closed Saturday and Sunday (for example), making it impossible for patrons to receive their hold. This presumes there is another copy available at another branch.
•There are more options now for hold settings. One option is library weighting as well as looping. If looping is set, the holds targeter will skip any libraries that it targeted in a previous loop and will continue doing so until it has tried all libraries, at which point it will start the process over again. If max loops are being used in hold management, at the end of the last determined loop, if there are no copies that could potentially fill a hold, the hold may be canceled. If there are checked-out copies, the hold stays in queue; otherwise, the hold is canceled and a cancellation notice is sent to the patron.
•The system offers the ability to secondarily sort the Holds Pull List by physical shelving location within the library.
•The system offers the ability to distinguish between staff-placed holds and patron-placed holds through a column in the holds interface.
•Hold cancellation can be displayed, along with information regarding the cancellation (e.g., cause, cancellation type, date, item, patron, etc.).
•There is support now in the system to allow configuration to disallow holds for items that are on the shelf at the location from which the patron is searching.
•The system supports patron-specific hold notes that can display in the OPAC and print in the hold notice, but do not necessarily print on hold slips.
•The system supports the ability for staff to move someone to the top of the holds queue. This was developed due to cases where a patron picked up a hold but the item was damaged. Since the patron had picked up the hold, it was considered filled.
•The patron can change the pickup location before the hold is ready for pickup. Then, the item is put in transit and a new holds slip is printed with a special symbol to indicate that the pickup location has been changed. If the location is changed while the item is in transit, then at next checkin the item is put in transit to the new location. A new holds slip is printed.
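The looping and max-loops behavior described above can be pictured roughly as follows. This is a simplified illustrative sketch, not Evergreen's actual Perl hold targeter; the function and field names are invented for the example:

```python
# Simplified sketch of a looping holds targeter with a max-loops cap.
# All names here are hypothetical; Evergreen's real targeter is more involved.

def target_hold(libraries, max_loops):
    """libraries: list of dicts with 'name', 'available', 'checked_out' keys."""
    tried = set()
    for _ in range(max_loops):
        for lib in libraries:
            if lib["name"] in tried:
                continue          # looping: skip libraries targeted in a previous loop
            tried.add(lib["name"])
            if lib["available"]:
                return "targeted:" + lib["name"]
        if len(tried) == len(libraries):
            tried.clear()         # all libraries tried: start the process over
    # End of the last determined loop: no copy can currently fill the hold.
    if any(lib["checked_out"] for lib in libraries):
        return "queued"           # checked-out copies exist, hold stays in queue
    return "canceled"             # otherwise cancel and notify the patron
```

The key design point is the `tried` set: it is what makes each pass skip libraries already targeted, and clearing it once every library has been tried is what restarts the loop.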
•The system supports a separate hold note field for staff use that can print on the hold slip.
•Ability for patrons to view recently canceled holds and easily re-place holds.

Staff Client Interface Enhancements

•Evergreen includes color-coding in the staff view of patrons when there is a bad or invalid address. Also included is an alert to patrons in the My Account view in the OPAC to alert them to the bad address problem. The system automatically blocks/unblocks a patron when an address is marked invalid/valid.
•Ability to have the staff client automatically minimize after a settable period of inactivity to protect patron privacy. This is controlled through an org unit setting.
•Summary of bills, checkouts, and holds are visible from all of the patron screens.
•Historical summary of paid fines is sortable by column and displays sub-totals for each column; also allows the ability to limit by voided/non-voided payments. Fines history detail includes more information, including the location and time/date where the item was returned and much more.
•More streamlined display of copy information, including number of copies, copy status, and number of holds, in both the staff client interface and the patron OPAC.
•The system supports the ability to edit item records from any item record access point.
•From holding maintenance or item status by barcode, you can retrieve more item details. For example, total circulations by current and previous year, last status change, last checkout date and workstation, last checkin time and workstation, and more.
•The system includes a separate date field for the last change to the item in the item record.
•In the item record, the system displays total check-outs and renewals for year-to-date, previous year, and lifetime.
•Better audio signal handling.
•In Evergreen, there is an org setting to disable all staff client circ popups unless an unhandled exception occurs. The exception handling has been automated as much as possible, based on settings, to reduce the number of popups that require staff attention at the circ desk. Alerts are communicated visually (e.g., screen color change) or audibly.
•The system supports two views of patron information: horizontal and vertical.
•From the patrons screen, under holds, clicking place hold will bring up an embedded catalog. Placing a hold from the embedded catalog will automatically generate a hold for the current account of the patron you are viewing.
•The system supports a new messages (notes) UI in the info tab of the patron screen.
•The system supports a new interface that shows the most recent activity on the workstation (checkout/checkin/renewal/patron-reg activity, with links to the relevant patron from each item). This would be helpful to a supervisor trying to backtrack an issue to assist a staff member.
•The system now captures and displays check-in and workstation history.
•Added the ability to pre-define messages, populated in a drop-down menu, to be applied to patron accounts. Includes: the ability to configure the message to act as a penalty (if desired), record the date and staff who applied the message, and include a flag to mark the item as resolved. If an item is marked as resolved, it will not display as an alert.
•Under grocery billings in Evergreen, billing type can be pre-populated with a list of common fine events (such as types and costs).
•Evergreen has the ability to retrieve users by numeric ID (separate from the barcode) in the staff client. This functionality is optional and set to false by default.
•Backend support for other types of receipts (like holds/fines).

OPAC and My Account Enhancements

•There is backend code support for a method to allow patrons to link their records in a way that grants privileges. This could be utilized in future implementations for social networking features.
•Patron passwords are now more flexible in length and content (shorter and numeric-only passwords are now allowed). Libraries can set minimum and maximum limits on password length in Password format in the Library Settings Editor.
•Patrons can select a username, which can then be used to access the OPAC and self check-out stations.
•My Account can allow patrons to update some information, including: street address, e-mail address, and preferred pick-up library for holds.
Changes to address will be marked as pending in the patron's file until a staff member verifies the new address and completes the change.
•From the My Account interface, patrons can see their estimated wait time for a hold. Evergreen calculates the estimated wait time from the circ mods on the set of potential copies available to fill the holds on that title. The hold wait estimate is configurable at the consortial level, and each Evergreen implementation would need to take into consideration their average circulation time, hold wait time, or other factors like transit time which might influence hold wait estimates.
•Patrons can title their bookbags (aka reading lists) and place holds from them.
•Backend support has been developed to allow patrons to waive certain kinds of notices.
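The estimated-wait calculation mentioned above can be pictured with a simple model: queue position spread across the potential copies, multiplied by an average loan period per circ modifier. This formula and the averages table are assumptions for illustration only; Evergreen's real calculation is more involved:

```python
# Illustrative hold-wait estimate, not Evergreen's actual algorithm.
# AVG_CIRC_DAYS is a hypothetical per-circ-modifier average loan period.

AVG_CIRC_DAYS = {"book": 21, "dvd": 7}

def estimated_wait_days(queue_position, copies_by_circ_mod):
    """copies_by_circ_mod: e.g. {'book': 3, 'dvd': 1} potential copies."""
    total_copies = sum(copies_by_circ_mod.values())
    if total_copies == 0:
        return None  # nothing can fill the hold; no estimate possible
    # Weighted average loan period over the candidate copies.
    avg_period = sum(AVG_CIRC_DAYS[mod] * n
                     for mod, n in copies_by_circ_mod.items()) / total_copies
    # Number of full "turnovers" of the copy pool before this hold is reached.
    turnovers = (queue_position - 1) // total_copies + 1
    return avg_period * turnovers
```

This also shows why the estimate must be tuned locally: the averages (and factors like transit time, which this sketch ignores) differ per consortium.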
•The system supports combining multiple notices of the same type to the same patron into one event, so long as the system is configured to batch notices on an approximately daily basis.

Billing, Collections and Fine/Fee Enhancements

•Fines now consistently link to item record details.
•The fine record includes a comments field, editable by staff. Staff can annotate payments and add notes to a specific billing.
Staff can sort on payment type. When adding a note, the current text shows as the default in a pop-up window, so it can be appended to or over-written.
•Staff and users can now only pay using the latest user data, which prevents accidental/duplicate payments against the same transaction or against stale data.
•The system supports setting the maximum fine based on item type (e.g. generic=.50) AND not to exceed the cost of the item. This works as an inheritable OU setting, circ.max_fine.cap_at_price, that changes the max_fine amount to the price IF the price is not null and is less than the rule-based max_fine amount.
•The system has the ability to run a report of accounts with users with overall negative balances, including the balance owed and last billing activity time, optionally filtered by home org. There is an option for issuing refunds for selected accounts on the resulting list. The report also captures patrons with any refundable transaction.
•Evergreen provides 3 distinct and independent types of blocks: system, manual, and collections. Manual and collections blocks are set manually by staff.
•A new penalty type of PATRON_IN_COLLECTIONS has been added. It is set when the collections agency puts the patron into collections; staff can define the blocks and clear threshold for each group, etc. The system supports removing the collection block immediately once charges are paid down to zero (applies to both ecommerce and at the CIRC desk).
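The circ.max_fine.cap_at_price behavior described above amounts to a small cap rule. The following is a sketch of that logic only; in Evergreen the setting is evaluated inside the fine generator, not by a standalone function like this:

```python
# Sketch of the circ.max_fine.cap_at_price behavior: cap the rule-based
# max fine at the item price when a price is set and is lower than the
# rule's amount. Function name is hypothetical.

def effective_max_fine(rule_max_fine, item_price, cap_at_price):
    if cap_at_price and item_price is not None and item_price < rule_max_fine:
        return item_price
    return rule_max_fine
```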
Action/Triggered Event and Notice Enhancements

•Action Triggers (AT) support many new notices for events such as items that are about to expire off of the hold shelf; items that are on hold and are about to reach the max hold time (if one is set); and courtesy notices prior to the due date. AT also logs all notices sent to patrons, and this is accessible to staff in the SC to view all notices or cancel all pending notices.
•The system has the ability to cancel unsent notices before they are sent and the ability to search pending notices by item barcode.
•Administrators can choose to implement a collections warning prior to sending patrons to collections. When the account balance of the patron meets a certain threshold, they are sent a bill notice. This is driven by the total amount owed, not by individual bills. The patron is sent to collections after a configurable number of days since the bill notice was sent. The billing notice is handled with a new PATRON_EXCEEDS_COLLECTIONS_WARNING penalty. Files can be sent via SCP and FTP.

Acquisitions

•From within the general acquisitions search page, users are able to search on many fields in the acquisitions/serials workflow.
For example, on attributes of invoices, purchase orders, selection lists, bib records, etc.
•General catalog searching is now supported for explicit truncation/wildcard searches.
•Acquisitions line item searches support NOT searches.
•Money can be transferred from one fund to another (or to none).
•All transactions (except batch EDI delivery to vendors) post in real time, including: purchase orders, invoices, fund balances, vendor balances, vendor statistics, and history. The EDI delivery delay is configurable at the system-level admin interface.
•In the User Interface, users now have access to all active funds, spanning multiple years, in the various ordering/invoicing/etc. interfaces.
•There is support for a year-end fiscal turnover process that closes out funds and transfers encumbered amounts into a new fiscal year. This includes the ability to selectively roll certain funds over, while not rolling over others.
•Evergreen handles validation of ordering, receiving, and invoicing processes, using validated data, to satisfy auditor requirements. In the staff client, there is a menu option which allows staff to locate the PO that resulted in the purchase of that copy.
•Selection lists are collections of bibliographic records (short or full) that temporarily store titles being considered for purchase. Selection lists can be shared for collaborative input.
•Library staff have the ability to create distribution formulas for ease of receiving, processing, and distributing materials. Branch, shelving location, and fund need
to be separate from the distribution formula, so that staff can enter the distribution sets. Staff are able to use that formula for any shelving location they decide. Staff also have the ability to add multiple distribution formulas together and the ability to override distribution formulas. After applying the distribution formula, it will be an all-or-none redistribution of copies from one branch to another. Staff can add or delete individual copies because the distribution pattern may not account for the exact total of copies. If the total number of copies has not been allocated, the user will receive a flag or warning.
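The allocation check described above can be pictured like this. The structures are hypothetical illustrations, not Evergreen's schema:

```python
# Sketch of applying a distribution formula and flagging under-allocation.
# 'formula' is a hypothetical list of (branch, count) entries.

def apply_formula(total_copies, formula):
    """Returns (allocation, warning): warning is True when copies remain
    unallocated after the formula is applied."""
    allocation, allocated = [], 0
    for branch, count in formula:
        take = min(count, total_copies - allocated)
        if take > 0:
            allocation.append((branch, take))
            allocated += take
    # Flag the user when the formula leaves copies unallocated.
    warning = allocated < total_copies
    return allocation, warning
```

After this step, staff would still add or delete individual copies by hand, which is exactly why the warning (rather than a hard error) fits the workflow in the text.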
This puts the use count for each distribution formula in the DF dropdown for users to see.
•The system supports batch ISBN/UPC search. This is located in the general Acquisitions search page, where you can choose to search by a single ISBN or UPC, or you can choose to upload a batch of ISBNs. The ISBN search method looks at MARC tag 024, where UPC codes are supposed to live. For LI searching, the system uses open-ils.acq.lineitem.search.ident. Catalog records are included in the batch ISBN/UPC search, and staff can now search catalog records in the acq search.
•Backend support has been integrated to give patrons the ability to submit purchase requests through the OPAC.
The UI for this has not yet been integrated into the OPAC.
•The system supports claiming; specifically there is:
•a place to store the default claim interval for each vendor
•a way to show the selected claim date during the order
•a way to override the claim date during the order
•a way to list items/orders that have reached the claim date.
A list of items that meet claims requirements can be generated, but claims must be initiated by librarians.
•From the UI, staff can access the lineitem and PO history. Entries in the history table are ordered from most recent to oldest.
•The purchase order printout is customizable, including the ability to break up a single order into separate purchase orders. Also, staff can print groups of POs from a search as a single printout, which can be used to generate physical POs for vendors who do not support EDI. Staff can add notes, and there is an indicator in the PO interface of the existence/number of attached notes.
•Staff are able to see all of the lineitems (with prices, copy counts, etc.) for a set of POs, and summary information is listed along the top of the page.
Additionally, staff can do a PO search by vendor for all - activated-but-not-yet-sent Pos (i.e., show me what we are about to order) and view the results.•The system supports flagging prepaid orders so that invoicing is handled correctly.•The system allows building orders based on templates (distribution formulas); by shelving location or owning library.•The system supports the ability to gather orders together and send them all at once, instead of manually and individually, a rolling FTP function that runs + activated-but-not-yet-sent Pos (i.e., show me what we are about to order) and view the results.•The system supports flagging prepaid orders so that invoicing is handled correctly.•The system allows building orders based on templates (distribution formulas); by shelving location or owning library.•The system supports the ability to gather orders together and send them all at once, instead of manually and individually, a rolling FTP function that runs every 15 minutes (or other set interval) with detailed log information and control of frequency and action. Additionally, there is an automatic retrieval of status - reports records from the vendor, which are then automatically inserted into the order records. •Staff have the ability to apply and view notes and cancel causes on purchase orders as well as cancel causes on lineitems. In the UI, there is a staff client - menu entry for cancel cause.•There is an interface in the ACQ system for viewing what was sent to vendors via EDI. There are two ways to approach the viewing of sent orders: via PO - search interface (for the general case) which gives finer detail on EDI orders and the ability to reset failed outbound EDI deliveries. •Pending final UI work in the OPAC, the system has the ability to allow patrons to place volume level and issue level holds. 
•Ability to create and print routing worksheets for manual receiving processes.
•Nothing in the selection lists is holdable (either by patrons or by most staff, apart from acquisitions staff). When an on-order title has been canceled and the lineitem is canceled, the corresponding bib record and on-order copies will be deleted, so the copies will no longer be holdable. The lineitem has a cancel cause to show why the order was canceled. Selection list records are never visible in the OPAC. Catalog records with no visible copies (within the search scope) do not show up in the public OPAC. This also applies to on-order records.
•Deleted bibs, callnumbers, copies, and patrons are retained for reporting purposes. Only patrons can be purged (by staff). “Deleted” items are more accurately described as “inactive.” Patrons can now be completely purged; however, this isn't recommended, as you lose historical data.
•The system supports shared and floating items by collection.
Item records can be added or removed from the collection group and can be updated in batch via buckets in the copy edit interface.
•ACQ permissions control which workgroups have view/edit access to lineitem and catalog records, while PO/PL and copy-level ownership and permission depths affect viewing in other, more location-specific interfaces.
•The system supports the ability to transfer a patron holds queue from one bibliographic record to another, singly or in batch, while preserving the original hold order.
•The system has a reporting view which allows staff to identify bibs (shows ISBNs) for which the last item was removed, based on the date of removal. Report templates can be built from this view for external processes.
•The system supports lineitem alerts, lineitem receive alerts, and lineitem detail alerts for EDI messaging.
•The system supports the ability to exclude some types of items from patron hold limits.
•There is support for new, locally defined cancel reasons for EDI. There is also support for EG interpretation of EDI-defined cancellation standards.
•The system supports the ability to send batches of orders to vendors, including orders for multiple accounts. The process of breaking outbound EDI messages into controlled and timed batch sizes is automated but settable to a specific, preferred time interval.
•The system supports the ability to FTP orders directly to vendors and receive acknowledgements and status reports from vendors.
More specifically, the system supports push and pull of files via FTP, SFTP, and SSH.
•The system supports MARC file import with PO data.
•The OPAC accepts enhanced content from the following vendors: ChiliFresh, Content Café, and Novelist. (Note that these are subscription services.)
•You can set up vendor profiles and flag those that are active.
Those that aren't can be saved for historical purposes.
•The system supports the ability to “flag” vendor records for vendors who require pre-payment of purchase orders, with a number of visual cues in the UI. During PO creation, the pre-payment flag in the form will show and pre-populate its value with the value from the chosen provider. During PO activation, if prepayment is required, a confirmation dialog is inserted before sending the activate request. The PO summary indicates when a PO requires pre-payment.
•The system supports sequential barcode generation for ease of receiving and processing of new items and easily changing large groups of barcodes. There is a choice to use auto-generated barcodes in interfaces where they would normally be used (such as receiving). Some parameters about the barcode symbology may need to be entered in the admin interface to correctly calculate the barcodes.
•The system supports the ability to manually select libraries to receive items when partial orders are received or when items come in multiple deliveries.
- Orders with multiple copies will have an owning library per copy, so staff can pick which copies to mark as received.•The system is compatible with Zebra Z4M thermal transfer printers. •The system supports the ability to create, format and print spine labels. •In the ACQ UI, there is a batch fund updater. When there is a given set of line items, the batch fund updater updates the fund for all attached copies in - batch.•The system has a configurable drop-down of alerts for line items that staff can control. •The system supports the ability to update order records at the receiving stage; the ability to receive partial orders and unreceive orders; and the order - record is updated automatically when the balance of a partial order is received. •The system supports the ability to transfer item records from one bibliographic record to another. •The system supports a worksheet for each title received, to include title, call number, number of copies received, distribution, and processing notes. •The system supports the ability to easily scan over a “dummy” or placeholder barcode in a temporary, brief or on-order record by simply scanning the - “real” barcode.•The system supports the import/export of MARC bibliographic and authority records via Vandelay. An option has been added to use the internal bib ID as the TCN - for all records while retaining the OCLC number in the record. The authority import now matches bib import in overlay/merge functionality.•The system is fully compatible with OCLC Connexion for editing and transferring bibliographic and authority records (Z39.50). •The system supports the ability to create a “short bib” record pending creation of the full MARC record. Short bibs can be created from a lineitem - search. •The system supports a utility to facilitate searching for full bibliographic records and create temporary “short” bibliographic records if no full - records are found. 
•The system adds the ability to perform electronic receiving and invoicing as follows: ability to receive electronic packing slips and invoices by purchase order or + be entered in the admin interface to correctly calculate the barcodes.•The system supports the ability to manually select libraries to receive items when partial orders are received or when items come in multiple deliveries. + Orders with multiple copies will have an owning library per copy, so staff can pick which copies to mark as received.•The system is compatible with Zebra Z4M thermal transfer printers. •The system supports the ability to create, format and print spine labels. •In the ACQ UI, there is a batch fund updater. When there is a given set of line items, the batch fund updater updates the fund for all attached copies in - batch.•The system has a configurable drop-down of alerts for line items that staff can control. •The system supports the ability to update order records at the receiving stage; the ability to receive partial orders and unreceive orders; and the order - record is updated automatically when the balance of a partial order is received. •The system supports the ability to transfer item records from one bibliographic record to another. •The system supports a worksheet for each title received, to include title, call number, number of copies received, distribution, and processing notes. •The system supports the ability to easily scan over a “dummy” or placeholder barcode in a temporary, brief or on-order record by simply scanning the - “real” barcode.•The system supports the import/export of MARC bibliographic and authority records via Vandelay. An option has been added to use the internal bib ID as the TCN - for all records while retaining the OCLC number in the record. The authority import now matches bib import in overlay/merge functionality.•The system is fully compatible with OCLC Connexion for editing and transferring bibliographic and authority records (Z39.50).
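The sequential barcode generation with a calculated check digit mentioned above can be illustrated with a short sketch. The Luhn-style mod-10 scheme below is an assumption for illustration; the actual symbology parameters are whatever is configured in the admin interface:

```python
# Hypothetical sketch of sequential barcode generation with a
# Luhn-style mod-10 check digit. The real symbology and check-digit
# rules come from the admin interface configuration, not this code.

def mod10_check_digit(digits: str) -> str:
    """Compute a Luhn mod-10 check digit for a string of digits."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9        # same as summing the two digits of d
        total += d
    return str((10 - total % 10) % 10)

def sequential_barcodes(prefix: str, start: int, count: int, width: int = 8):
    """Generate `count` sequential barcodes beginning at `start`,
    zero-padding the sequence to `width` and appending a check digit.
    `prefix` is assumed to be numeric for check-digit purposes."""
    out = []
    for n in range(start, start + count):
        body = prefix + str(n).zfill(width)
        out.append(body + mod10_check_digit(body))
    return out
```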
•The system supports the ability to create a “short bib” record pending creation of the full MARC record. Short bibs can be created from a lineitem + search. •The system supports a utility to facilitate searching for full bibliographic records and create temporary “short” bibliographic records if no full + records are found. •The system adds the ability to perform electronic receiving and invoicing as follows: ability to receive electronic packing slips and invoices by purchase order or invoice number; ability to edit number of copies, amount due, freight and service charges, and tax; ability to delete line items; ability to recalculate total - amounts; ability to authorize payment within the ILS.•System supports the ability to do both regular and generic or blanket invoicing (refers to invoices without a purchase order number, e.g., direct charges to - a fund).•System supports simultaneous access to the invoice interface.•System supports a number of fields including: date, invoice number, invoice type, shipping vendor name, billing vendor, purchase order number, title, author, number of copies ordered, number of copies paid or received, number of copies available for payment, number of copies being paid for, amount, notes, - invoice subtotal, freight charge, service charge, tax, invoice total, and the vendor the order was placed with. •The system prevents overpayment in the invoice view page by linking invoices to PO/Lineitems. •Staff can print a list of invoices paid before/after a specified date.
When searching for invoices in the unified search interface, there's now a button that - will print a voucher for whichever invoices have checked checkboxes.•The system supports the ability to search invoices by number or vendor name, with links to vendors, and vendor records include links to invoice history.•Staff can retrieve a PO or lineitem and access all the related invoicing data. •The system supports reopening a closed invoice (example: an invoice was paid from the wrong fund; staff wants to go back and change the fund). There is - a Reopen button, which requires permissions. •The system has the ability to pay a partial invoice for partial receipt of shipment, and then generate claims for the items that were not received. Also, - the system supports invoicing extra copies when a vendor sends more copies than what staff ordered and staff decides to keep the extra copies. •Issues can be automatically moved to a configured shelving location upon receipt of the newer issue. This can be done on a per-item basis and is based on - the owning library of the copies. •When using full serials control, the default behavior for serials issue sorting and display in the holdings display will be reverse chronological order.•Staff can label serials issuances with easily identifiable text such as “YYYYMONTH” or “V.12 NO.1”.•In serials receiving, staff are able to choose which issues to receive and distribute to which locations.•Staff can add regular, supplemental, and index issues in the serials interface. •The system supports a purchase alert query (aka holds ratio report, holds alert report) that compares holds to items and flags titles that need more copies. + invoice subtotal, freight charge, service charge, tax, invoice total, and the vendor the order was placed with. •The system prevents overpayment in the invoice view page by linking invoices to PO/Lineitems. •Staff can print a list of invoices paid before/after a specified date.
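The invoice arithmetic implied by the field list above (subtotal plus freight, service charge, and tax, with an overpayment guard) can be sketched briefly. Field and function names are illustrative, not a real Evergreen schema:

```python
# Hypothetical sketch of invoice totaling and the overpayment check
# described in the text. Decimal avoids float rounding on currency.
from decimal import Decimal

def invoice_total(line_amounts, freight="0", service_charge="0", tax="0"):
    """Sum line-item amounts into a subtotal, then add freight,
    service charge, and tax to get the invoice total."""
    subtotal = sum((Decimal(a) for a in line_amounts), Decimal("0"))
    return subtotal + Decimal(freight) + Decimal(service_charge) + Decimal(tax)

def would_overpay(amount_paid, amount_billed):
    """True when a payment exceeds the billed amount; the invoice
    view prevents this by linking invoices to PO/lineitems."""
    return Decimal(amount_paid) > Decimal(amount_billed)
```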
When searching for invoices in the unified search interface, there's now a button that + will print a voucher for whichever invoices have checked checkboxes.•The system supports the ability to search invoices by number or vendor name, with links to vendors, and vendor records include links to invoice history.•Staff can retrieve a PO or lineitem and access all the related invoicing data. •The system supports reopening a closed invoice (example: an invoice was paid from the wrong fund; staff wants to go back and change the fund). There is + a Reopen button, which requires permissions. •The system has the ability to pay a partial invoice for partial receipt of shipment, and then generate claims for the items that were not received. Also, + the system supports invoicing extra copies when a vendor sends more copies than what staff ordered and staff decides to keep the extra copies. •Issues can be automatically moved to a configured shelving location upon receipt of the newer issue. This can be done on a per-item basis and is based on + the owning library of the copies. •When using full serials control, the default behavior for serials issue sorting and display in the holdings display will be reverse chronological order.•Staff can label serials issuances with easily identifiable text such as “YYYYMONTH” or “V.12 NO.1”.•In serials receiving, staff are able to choose which issues to receive and distribute to which locations.•Staff can add regular, supplemental, and index issues in the serials interface. •The system supports a purchase alert query (aka holds ratio report, holds alert report) that compares holds to items and flags titles that need more copies. The option exists to include in-print/out-of-print status from the bibliographic record. The system also handles the ability to add query results directly to selection lists, singly or in batch, and the ability to create order records directly from query results.
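The holds-to-items comparison behind the purchase alert (holds ratio) query can be sketched as follows. The threshold is a local policy choice, and all names here are illustrative assumptions, not Evergreen's actual query:

```python
# Hypothetical sketch of the holds-ratio check: flag titles whose
# holds-per-copy ratio exceeds a locally configured threshold.

def needs_more_copies(hold_count, copy_count, max_ratio=5.0):
    """Flag a title when holds per copy exceed max_ratio. A title
    with holds but no holdable copies is always flagged."""
    if hold_count == 0:
        return False
    if copy_count == 0:
        return True
    return hold_count / copy_count > max_ratio

def purchase_alert(titles, max_ratio=5.0):
    """Return bib IDs needing more copies. `titles` is a list of
    (bib_id, hold_count, copy_count) tuples."""
    return [bib for bib, holds, copies in titles
            if needs_more_copies(holds, copies, max_ratio)]
```

Results like these are what the report feeds into selection lists or order records, singly or in batch.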
This is handled by an interface for uploading a CSV file to generate a page of bib records that can have lineitems created from them to go into selection lists and/or POs. @@ -245,117 +245,117 @@ Creating a Booking ReservationCreating a Booking Reservation - + Only staff members can create reservations. To initiate a reservation, staff can - •search the catalog,•enter a patron record,•or use the booking module. + •search the catalog,•enter a patron record,•or use the booking module. - Search the catalog to create a reservationSearch the catalog to create a reservation + Search the catalog to create a reservationSearch the catalog to create a reservation - 1. - In the staff client, select Search → Search the Catalog2. - Search for the item to be booked.3. - Click Submit Search.4. - A list of results will appear. Select the title of the item to be reserved.5. + 1. + In the staff client, select Search → Search the Catalog2. + Search for the item to be booked.3. + Click Submit Search.4. + A list of results will appear. Select the title of the item to be reserved.5. After clicking the title, the record summary appears. Beneath the record summary, - the copy summary will appear. In the Actions column, select Copy Details.6. + the copy summary will appear. In the Actions column, select Copy Details.6. The Copy Details will appear in a new row. In the barcode column, click the book now - link.7. - A screen showing the title and barcodes of available copies will appear.8. + link.7. + A screen showing the title and barcodes of available copies will appear.8. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear at the bottom of the - screen.9. + screen.9. To the right, a section titled, I need this resource... will allow you to set the dates and times for which the item should be reserved. 
If the date/time boxes appear in red, then the date and time set is incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to be accomplished. If the item has already been reserved at the time for which you are trying to reserve the item, then you will receive - an error message.10. + an error message.10. Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode, and you click Reserve Selected, you will receive an error message. If you do not have a preference, you do not have to select a barcode, and you may click Reserve Any. One of the barcodes will be pulled from the list. - An item must have a status of available or reshelving in order to - be targeted for a reservation. If the item is in another status, the reservation will fail.11. - After you have made the reservation, a message will confirm that the action succeeded. Click OK.12. + An item must have a status of available or reshelving in order to + be targeted for a reservation. If the item is in another status, the reservation will fail.11. + After you have made the reservation, a message will confirm that the action succeeded. Click OK.12. The screen will refresh, and the reservation will appear below the user’s name. - Enter a patron’s record to create a reservationEnter a patron’s record to create a reservation + Enter a patron’s record to create a reservationEnter a patron’s record to create a reservation - 1. - Enter the barcode or patron information, and click Search to retrieve the patron’s record.2. + 1. + Enter the barcode or patron information, and click Search to retrieve the patron’s record.2. The match(es) should appear in the right pane. Click the desired patron’s name. 
In the left panel, a summary of the patron’s information will appear. Click the Retrieve - Patron button in the right corner to access more options in the patron’s record.3. - Eight buttons will appear in the top right corner. Select Other → Booking to create, cancel, pick up, and return reservations.4. + Patron button in the right corner to access more options in the patron’s record.3. + Eight buttons will appear in the top right corner. Select Other → Booking to create, cancel, pick up, and return reservations.4. The Copy Details will appear in a new row. In the barcode column, click the book now - link.5. - A screen showing the title and barcodes of available copies will appear.6. + link.5. + A screen showing the title and barcodes of available copies will appear.6. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. After entering the patron’s barcode, the user’s existing reservations will appear at the bottom of the - screen.7. + screen.7. To the right, a section titled, I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set is incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. The times must be set correctly for the reservation to be accomplished. If the item has already been reserved at the time for which you are trying to reserve the item, then you will receive - an error message.8. + an error message.8. Finally, select the barcode of the item that you want to reserve. If multiple copies of the item exist, choose the barcode of the copy that you want to reserve, and click Reserve Selected. If you do not select a barcode, and you click Reserve Selected, you will receive an error message. 
If you do not have a preference, you do not have to select a barcode, and you may click Reserve Any. One of the barcodes will be pulled from the list. - An item must have a status of available or reshelving in order to - be targeted for a reservation. If the item is in another status, the reservation will fail.9. - After you have made the reservation, a message will confirm that the action succeeded. Click OK.10. + An item must have a status of available or reshelving in order to + be targeted for a reservation. If the item is in another status, the reservation will fail.9. + After you have made the reservation, a message will confirm that the action succeeded. Click OK.10. The screen will refresh, and the reservation will appear below the user’s name. - Use the booking module to create a reservationUse the booking module to create a reservation + Use the booking module to create a reservationUse the booking module to create a reservation - 1. - Select Booking → Create or Edit Reservations2. - Enter the barcode of the item and click Next.3. - A screen showing the name of the available resource will appear.4. + 1. + Select Booking → Create or Edit Reservations2. + Enter the barcode of the item and click Next.3. + A screen showing the name of the available resource will appear.4. Enter the user’s barcode in the Reserve to patron barcode box. If the patron barcode does not exist, a pop up box will appear to alert you to the error. After entering the - patron’s barcode, the user’s existing reservations will appear.5. + patron’s barcode, the user’s existing reservations will appear.5. To the right, a section titled, I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set is incorrect. For example, if the time for which the reservation is set has already passed, the boxes will appear in red. 
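The date/time validation described here (a start time in the past, an end before the start, or a clash with an existing reservation) can be sketched as a small check. This is an illustrative client-side sketch only; Evergreen's actual validation logic is its own:

```python
# Hypothetical sketch of the reservation time checks: past start,
# inverted window, and overlap with existing reservations.
from datetime import datetime

def reservation_errors(start, end, now, existing):
    """Return validation problems for a requested (start, end) window.
    `existing` is a list of (start, end) tuples already reserved."""
    errors = []
    if start < now:
        errors.append("start time has already passed")
    if end <= start:
        errors.append("end time must follow start time")
    for ex_start, ex_end in existing:
        if start < ex_end and ex_start < end:   # half-open overlap test
            errors.append("resource already reserved in that window")
            break
    return errors
```

An empty list corresponds to the boxes not turning red; any entry corresponds to an error message.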
The times must be set correctly for the reservation to be accomplished. If the resource has already been reserved at the time for which you want to reserve the item, then the item will - disappear.6. + disappear.6. Finally, select the resource that you want to reserve. If multiple items or rooms exist, choose the resource that you want to reserve, and click Reserve Selected. If you do not select a resource, and you click Reserve Selected, you will receive an error message. If you do not have a preference, you may click Reserve Any, and one of the - resources will be pulled from the list.7. + resources will be pulled from the list.7. After you have made the reservation, a message will confirm that the action - succeeded. Click OK.8. + succeeded. Click OK.8. The screen will refresh, and the reservation will appear below the user’s name. Cancelling a ReservationCancelling a Reservation - + Staff members can cancel a patron’s reservation through the Create or Cancel Reservations tab available in a patron’s record. Staff members can also cancel a reservation immediately after it has been made. - Enter the patron’s record to cancel a reservationEnter the patron’s record to cancel a reservation - - 1. - Search for and retrieve a patron’s record.2. - Select Other → Booking → Create or Cancel Reservations.3. - The existing reservations will appear at the bottom of the screen.4. - To cancel a reservation, highlight the reservation that you want to cancel. Click Cancel Selected.5. - A pop-up window will confirm that you cancelled the reservation. Click OK.6. - The screen will refresh, and the cancelled reservation will disappear.7. + Enter the patron’s record to cancel a reservationEnter the patron’s record to cancel a reservation + + 1. + Search for and retrieve a patron’s record.2. + Select Other → Booking → Create or Cancel Reservations.3. + The existing reservations will appear at the bottom of the screen.4. 
+ To cancel a reservation, highlight the reservation that you want to cancel. Click Cancel Selected.5. + A pop-up window will confirm that you cancelled the reservation. Click OK.6. + The screen will refresh, and the cancelled reservation will disappear.7. To the right, a section titled, I need this resource... will allow you to set the dates and times for which the item should be reserved. If the date/time boxes appear in red, then the date and time set is incorrect. For example, if the time for which the @@ -364,55 +364,55 @@ reserved at the time for which you are trying to reserve the item, then you will receive an error message. - Cancel a reservation immediately after it has been madeCancel a reservation immediately after it has been made + Cancel a reservation immediately after it has been madeCancel a reservation immediately after it has been made - 1. - Create the reservation.2. - Follow steps four through six in the section, Enter the patron’s record to cancel a reservation, to cancel the reservation.3. + 1. + Create the reservation.2. + Follow steps four through six in the section, Enter the patron’s record to cancel a reservation, to cancel the reservation.3. The existing reservations will appear at the bottom of the screen. Creating a Pull ListCreating a Pull List - + Staff members can create a pull list to retrieve items from the stacks. - 1. - To create a pull list, select Booking → Pull List.2. - To find a pull list for your library, select a library from the dropdown box adjacent to See pull list for library.3. + 1. + To create a pull list, select Booking → Pull List.2. + To find a pull list for your library, select a library from the dropdown box adjacent to See pull list for library.3. You can decide how many days in advance you would like to select reserved items. Enter the number of days in the box adjacent to Generate - list for this many days hence. 
For example, if you would like to pull items that are needed today, you can enter 1 in the box, and you will retrieve items that need to be pulled today.4. - Click Fetch to retrieve the pull list.5. + list for this many days hence. For example, if you would like to pull items that are needed today, you can enter 1 in the box, and you will retrieve items that need to be pulled today.4. + Click Fetch to retrieve the pull list.5. The pull list will appear. Click Print to print the pull list. Capturing Items for ReservationsCapturing Items for Reservations - + Staff members can capture items for reservations. - 1. - In the staff client, select Booking → Capture Resources.2. - Enter the barcode of the items to be captured. Click Capture.3. + 1. + In the staff client, select Booking → Capture Resources.2. + Enter the barcode of the items to be captured. Click Capture.3. A Capture Succeeded message will appear to the right. Information about the item will appear below the message. You can print this information as a receipt and add it to the item if desired. Picking Up ReservationsPicking Up Reservations - + Staff members can help users pick up their reservations. - 1. - In the staff client, select Booking → Pick Up Reservations2. - Enter the user’s barcode. Click Go.3. - The title available for pickup will appear. Highlight the title of the item to pick up, and click Pick Up.4. + 1. + In the staff client, select Booking → Pick Up Reservations2. + Enter the user’s barcode. Click Go.3. + The title available for pickup will appear. Highlight the title of the item to pick up, and click Pick Up.4. The screen will refresh to show that the patron has picked up the reservation. Returning ReservationsReturning Reservations - + Staff members can help users return their reservations. - 1. - In the staff client, select Booking → Return Reservations.2. + 1. + In the staff client, select Booking → Return Reservations.2. You can return the item by patron or item barcode. 
Choose Resource or Patron, enter the - barcode, and click Go.3. - A pop up box will tell you that the item was returned. Click OK.4. + barcode, and click Go.3. + A pop up box will tell you that the item was returned. Click OK.4. The screen will refresh to show the reservations that remain out and the resources that have been returned. @@ -436,17 +436,17 @@ be added to selection lists or purchase orders and can be imported into the catalog. You can add brief records to new or existing selection lists. You can add brief records to new, pending or on-order purchase orders. - Add brief records to a selection listAdd brief records to a selection list + Add brief records to a selection listAdd brief records to a selection list - 1.Click Acquisitions → New Brief Record. You can also add brief records to + 1.Click Acquisitions → New Brief Record. You can also add brief records to an existing selection list by clicking the Actions menu on the selection list - and choosing Add Brief Record.2.Choose a selection list from the drop down menu, or enter the name of a new selection list.3.Enter bibliographic information in the desired fields.4.Click Save Record. + and choosing Add Brief Record.2.Choose a selection list from the drop down menu, or enter the name of a new selection list.3.Enter bibliographic information in the desired fields.4.Click Save Record. - Add brief records to purchase ordersAdd brief records to purchase orders + Add brief records to purchase ordersAdd brief records to purchase orders You can add brief records to new or existing purchase orders. - 1.Open or create a purchase order.2.Click Add Brief Record.3.Enter bibliographic information in the desired fields. Notice that the - record is added to the purchase order that you just created.4.Click Save Record. + 1.Open or create a purchase order.2.Click Add Brief Record.3.Enter bibliographic information in the desired fields. 
Notice that the + record is added to the purchase order that you just created.4.Click Save Record. Cancel/suspend acquisitionsCancel/suspend acquisitions @@ -457,21 +457,21 @@ indicate that the item will be purchased when the item is published. The purchase is, in effect, suspended rather than cancelled, but the state of the purchase order, line item, or copy would still become cancelled. - Cancel/suspend copiesCancel/suspend copies + Cancel/suspend copiesCancel/suspend copies You can cancel or suspend line items that are in a state of on order or pending order. - 1.Select the Copies link.2.Click the Cancel link adjacent to the copy that you wish to cancel.3.Select a cancel reason from the drop down menu that appears, and click Cancel copy. + 1.Select the Copies link.2.Click the Cancel link adjacent to the copy that you wish to cancel.3.Select a cancel reason from the drop down menu that appears, and click Cancel copy. - Cancel/suspend line itemsCancel/suspend line items + Cancel/suspend line itemsCancel/suspend line items You can cancel or suspend line items that are in a state of on order or pending order. - 1.Check the boxes of the line items that you wish to cancel.2.Click Actions →Cancel Selected Lineitems.3.Select a cancel reason from the drop down menu. Choose the cancel reason, + 1.Check the boxes of the line items that you wish to cancel.2.Click Actions →Cancel Selected Lineitems.3.Select a cancel reason from the drop down menu. Choose the cancel reason, and click Cancel Line Items. The status of the line item is now cancelled. - Cancel/suspend purchase ordersCancel/suspend purchase orders + Cancel/suspend purchase ordersCancel/suspend purchase orders - 1.Notice the Cancel column in the top half of the purchase order.2.Click the drop down arrow adjacent to Cancel order, and select a reason for - cancelling the order.3.Click Cancel order. The state of the purchase order is cancelled. 
+ 1.Notice the Cancel column in the top half of the purchase order.2.Click the drop down arrow adjacent to Cancel order, and select a reason for + cancelling the order.3.Click Cancel order. The state of the purchase order is cancelled. Claim itemsClaim items @@ -480,7 +480,7 @@ claiming is not available in the 2.0 release. You can apply claim policies to line items or individual copies. You also can use the default claim policy associated with your provider to claim items. - Apply a claim policyApply a claim policy + Apply a claim policyApply a claim policy You can apply a claim policy to an item in one of two ways: apply a claim policy to a line item when the item is created on the selection list or @@ -488,43 +488,43 @@ the purchase order. The default claim policy for a provider is established when the provider is created and will be used for claiming if no claim policy has been applied. - 1.Open a selection list or purchase order.2.Click the Actions drop down menu on the line item.3.Click Apply Claim Policy.4.A drop down menu of claim policies will appear. Choose a claim policy to + 1.Open a selection list or purchase order.2.Click the Actions drop down menu on the line item.3.Click Apply Claim Policy.4.A drop down menu of claim policies will appear. Choose a claim policy to apply to the line item. The claim policy will be applied to all items that have - not been received or cancelled.5.Click Save. + not been received or cancelled.5.Click Save. - Change a claim policyChange a claim policy + Change a claim policyChange a claim policy You can manually change a claim policy that has been applied to a line item. - 1.Open a selection list or purchase order.2.Click the Actions drop down menu on the line item.3.Click Change Claim Policy.4.A drop down menu of claim policies will appear. Choose a claim policy to apply to the line - item.5.Click Save. 
+ 1.Open a selection list or purchase order.2.Click the Actions drop down menu on the line item.3.Click Change Claim Policy.4.A drop down menu of claim policies will appear. Choose a claim policy to apply to the line + item.5.Click Save. - Claim an itemClaim an item + Claim an itemClaim an item You can manually claim items at any time after the item has been ordered. - 1.Open a purchase order.2.Click the Actions drop down menu on the line item.3.Click Claims. The number of existing claims appears in parentheses.4.A drop down menu of items to be claimed and possible claim actions appears. + 1.Open a purchase order.2.Click the Actions drop down menu on the line item.3.Click Claims. The number of existing claims appears in parentheses.4.A drop down menu of items to be claimed and possible claim actions appears. Check the boxes adjacent to the item that you want to claim and the action that - you will take. You can claim items that have not been received or cancelled.5.Click Claim Selected.6.Select a claim type from the drop down menu. Entering a note is optional.7.Click Claim.8.The number of existing claims on the line item updates, and a claim voucher + you will take. You can claim items that have not been received or cancelled.5.Click Claim Selected.6.Select a claim type from the drop down menu. Entering a note is optional.7.Click Claim.8.The number of existing claims on the line item updates, and a claim voucher appears. The voucher can be printed and mailed to the vendor to initiate the claim. - Produce a list of claim-ready itemsProduce a list of claim-ready items + Produce a list of claim-ready itemsProduce a list of claim-ready items If an item has not been received and meets the conditions for claiming according to the item’s claim policy, then the item will be eligible for claiming. Evergreen can produce a list of items, by ordering branch, which is ready to be claimed. You can use this list to manually claim items from your provider. 
- 1.Click Acquisitions →Claim-Ready Items.2.Choose a branch from the drop down menu to claim items that were ordered by this - branch.3.Any items that meet the conditions for claiming will appear.4.Check the box adjacent to the line items that you wish to claim. Click Claim selected items.5.Select a claim type from the drop down menu. Entering a note is optional.6.Click Claim. + 1.Click Acquisitions →Claim-Ready Items.2.Choose a branch from the drop down menu to claim items that were ordered by this + branch.3.Any items that meet the conditions for claiming will appear.4.Check the box adjacent to the line items that you wish to claim. Click Claim selected items.5.Select a claim type from the drop down menu. Entering a note is optional.6.Click Claim. Export Single Attribute ListExport Single Attribute List You can export ISBNs, ISSNs, or UPCs as a file from the list of line item(s). A list of ISBNs, for example, could be uploaded to vendor websites when placing orders. - 1.From a selection list or purchase order, check the boxes of the line items with attributes - that you wish to export.2.Click Actions → Export Single Attribute List.3.Choose the line item attribute that you would like to export from the drop down list of - attributes.4.Click Export List.5.Save the file to your computer.6.Open the file. Choose a program to open the file. The following is an example of an ISBN in + 1.From a selection list or purchase order, check the boxes of the line items with attributes + that you wish to export.2.Click Actions → Export Single Attribute List.3.Choose the line item attribute that you would like to export from the drop down list of + attributes.4.Click Export List.5.Save the file to your computer.6.Open the file. Choose a program to open the file. The following is an example of an ISBN in a spreadsheet. FundsFunds @@ -536,21 +536,21 @@ Funds can be applied to items from the Copies link that is located on a line item. 
Funds can also be applied to copies by batch updating line items and their attendant copies. - Apply funds to individual copiesApply funds to individual copies + Apply funds to individual copiesApply funds to individual copies - 1.Click the Copies link on the line item.2.To apply a fund to an individual item, click the drop down arrow in the Fund field. - A yellow fund name indicates that the balance in the fund has dropped to the + 1.Click the Copies link on the line item.2.To apply a fund to an individual item, click the drop down arrow in the Fund field. + A yellow fund name indicates that the balance in the fund has dropped to the warning percent that was entered in the admin module. A red fund name indicates that the balance in the fund has dropped to the stop percent that was entered in the admin module. Funds that have been closed out will no longer appear on the drop down list. - Apply funds to copies via batch updates to line itemsApply funds to copies via batch updates to line items + Apply funds to copies via batch updates to line itemsApply funds to copies via batch updates to line items You can apply funds to all copies on a line item(s) from the Actions menu on the selection list or the purchase order. - 1.Check the boxes of the line items with copies to which you would like to apply funds.2.Click Actions →Apply Funds to Selected Items.3.Select the fund that you wish to apply to the copies.4.Click Submit. + 1.Check the boxes of the line items with copies to which you would like to apply funds.2.Click Actions →Apply Funds to Selected Items.3.Select the fund that you wish to apply to the copies.4.Click Submit. Invoice acquisitionsInvoice acquisitions @@ -559,42 +559,42 @@ purchases. You can also link existing invoices to purchase orders. In 2.0, all invoicing is manual.You can invoice items before you receive the items if desired. You can also reopen closed invoices, and you can print all invoices. 
- Create a blanket invoiceCreate a blanket invoice
+ Create a blanket invoiceCreate a blanket invoice
 You can create a blanket invoice for purchases that are not attached to a
 purchase order.
- 1.Click Acquisitions → Create invoice.2.Enter the invoice information in the top half of the screen. 3.Select a charge type from the drop down menu.4.Select a fund from the drop down menu.5.Enter a Title/Description of the resource.6.Enter the amount that you were billed.7.Enter the amount that you paid.8.Save the invoice.
+ 1.Click Acquisitions → Create invoice.2.Enter the invoice information in the top half of the screen. 3.Select a charge type from the drop down menu.4.Select a fund from the drop down menu.5.Enter a Title/Description of the resource.6.Enter the amount that you were billed.7.Enter the amount that you paid.8.Save the invoice.

- Create an invoice for a purchase orderCreate an invoice for a purchase order
+ Create an invoice for a purchase orderCreate an invoice for a purchase order
 You can create an invoice for all of the line items on a purchase order. The only
 fields that are required to save the invoice are the Vendor Invoice ID and the
 number of items invoiced, billed, and paid for each line item. With the
 exception of fields with drop down menus, there are no restrictions on the data
 that you can enter.
- 1.Open a purchase order.2.Click Create Invoice.3.Enter a Vendor Invoice ID. This number may be listed on the paper invoice sent from your
- vendor.4.Choose a Receive Method from the drop down menu.
+ 1.Open a purchase order.2.Click Create Invoice.3.Enter a Vendor Invoice ID. This number may be listed on the paper invoice sent from your
+ vendor.4.Choose a Receive Method from the drop down menu.
- Only paper invoicing is available in the 2.0 release. Electronic invoicing
+ Only paper invoicing is available in the 2.0 release. Electronic invoicing
 may be available in future releases.
- 5.The Provider is generated from the purchase order and is entered by default.6.Enter a note.7.Select a payment method from the drop down menu.8.The Invoice Date is entered by default as the date that you create the invoice. You can - change the date by clicking in the field. A calendar drops down.9.Enter an Invoice Type.10.The Shipper defaults to the provider that was entered in the purchase order.11.Enter a Payment Authorization.12.The Receiver defaults to the branch at which your workstation is registered. You can + 5.The Provider is generated from the purchase order and is entered by default.6.Enter a note.7.Select a payment method from the drop down menu.8.The Invoice Date is entered by default as the date that you create the invoice. You can + change the date by clicking in the field. A calendar drops down.9.Enter an Invoice Type.10.The Shipper defaults to the provider that was entered in the purchase order.11.Enter a Payment Authorization.12.The Receiver defaults to the branch at which your workstation is registered. You can change the receiver by selecting an org unit from the drop down menu. The bibliographic line items are listed in the next section of the invoice. Along with the title and author of the line items is a summary of copies ordered, received, invoiced, claimed, and cancelled. You can also view the amounts estimated, encumbered, and paid for each line item. Finally, each line item has a line item ID and links to the selection list (if used) and the purchase - order.13.Enter the number of items that were invoiced, the amount that the organization was billed, - and the amount that the organization paid.14.You have the option to add charge types if applicable. Charge types are additional charges + order.13.Enter the number of items that were invoiced, the amount that the organization was billed, + and the amount that the organization paid.14.You have the option to add charge types if applicable. 
Charge types are additional charges that can be selected from the drop down menu. Common charge types include taxes and - handling fees.15.You have three options for saving an invoice. You can click Save, which saves the changes + handling fees.15.You have three options for saving an invoice. You can click Save, which saves the changes that you have made, but keeps the invoice open. You can click Save and Prorate, which enables you to save the invoice and prorate any additional charges, such as taxes, across funds, if multiple funds have been used to pay the invoice. You also can click Save and Close. Choose this option when you have completed the invoice. - You can re-open a closed invoice by clicking the link, Re-open invoice. This link + You can re-open a closed invoice by clicking the link, Re-open invoice. This link appears at the bottom of a closed invoice. - Link an existing invoice to a purchase orderLink an existing invoice to a purchase order + Link an existing invoice to a purchase orderLink an existing invoice to a purchase order You can use the link invoice feature to link an existing invoice to a purchase order. For example, an invoice is received for a shipment with items on @@ -602,15 +602,15 @@ order #1 is retrieved, and the invoice is created. To receive the items on purchase order #2, simply link the invoice to the purchase order. You do not need to recreate it. - 1.Open a purchase order.2.Click Link Invoice.3.Enter the Invoice # and the Provider of the invoice to which you wish to link.4.Click Link. + 1.Open a purchase order.2.Click Link Invoice.3.Enter the Invoice # and the Provider of the invoice to which you wish to link.4.Click Link. - View an invoiceView an invoice + View an invoiceView an invoice You can view an invoice in one of four ways: view open invoices; view invoices on a purchase order; view invoices by searching specific invoice fields; view invoices attached to a line item. 
- •To view open invoices, click Acquisitions → Open invoices. This opens the Acquisitions - Search screen. The default fields search for open invoices. Click Search.•To view invoices on a purchase order, open a purchase order, and click the View Invoices + •To view open invoices, click Acquisitions → Open invoices. This opens the Acquisitions + Search screen. The default fields search for open invoices. Click Search.•To view invoices on a purchase order, open a purchase order, and click the View Invoices link. The number in parentheses indicates the number of invoices that are attached to the purchase order. @@ -622,69 +622,69 @@ contain attributes, which are characteristics of the bibliographic record, such as ISBNs or Title. Line items also contain copy information, price information, and notes and alerts. - Add alerts to a line itemAdd alerts to a line item + Add alerts to a line itemAdd alerts to a line item Alerts are pop up messages that appear when an item is received. Alerts can be printed on the line item worksheet. - 1.Click the Notes link on the line item.2.Click the New Alert drop down button.3.Choose an alert code from the drop down menu.4.Add additional comments if desired.5.Click Create. The alert will display on the screen.6.Click Return to return to the line item. When you return to the line item, + 1.Click the Notes link on the line item.2.Click the New Alert drop down button.3.Choose an alert code from the drop down menu.4.Add additional comments if desired.5.Click Create. The alert will display on the screen.6.Click Return to return to the line item. When you return to the line item, a flag will appear to indicate that an alert is on the line item. - Add copies to a line itemAdd copies to a line item + Add copies to a line itemAdd copies to a line item Use the Copies link to add copy information to a line item. You can add copies to line items on a selection list or a purchase order. 
- 1.Click the Copies link on a line item.2.Enter the number of items that you want to order in Item Count, and click Go. The number - of items that you want to order will display below.3.If desired, apply a Distribution Formula from the drop down list. Distribution formulas tell - the ILS how many copies should be distributed to each location.4.The owning branch and shelving location populate with entries from the distribution - formula. Click Apply.5.Look back at the top gray row of text boxes above the distribution formula. Each text box in + 1.Click the Copies link on a line item.2.Enter the number of items that you want to order in Item Count, and click Go. The number + of items that you want to order will display below.3.If desired, apply a Distribution Formula from the drop down list. Distribution formulas tell + the ILS how many copies should be distributed to each location.4.The owning branch and shelving location populate with entries from the distribution + formula. Click Apply.5.Look back at the top gray row of text boxes above the distribution formula. Each text box in this row corresponds to the columns below. Changes made here will be applied to all - copies below. Click Batch Update.6.Click Save Changes.7.Click Return to return to the selection list or purchase order.8.Add the item’s price to the line item in the Estimated Price field. + copies below. Click Batch Update.6.Click Save Changes.7.Click Return to return to the selection list or purchase order.8.Add the item’s price to the line item in the Estimated Price field. - Add notes to a line itemAdd notes to a line item + Add notes to a line itemAdd notes to a line item Notes on line items can include any additional information that you want to add to the line item. Notes can be internal or can be made available to providers. Notes appear in a pop up box when an item is received. Notes can be printed on line item worksheets, which can be printed and placed in books for processing. 
- 1.Click the Notes link on the line item.2.Click the New Note drop down button.3.Enter a note.4.You have the option to make this note available to your provider. Click the - check box adjacent to Note is vendor-public.5.Click Create. The note will appear on the screen.6.Click Return to return to the line item. When you return to the line item, + 1.Click the Notes link on the line item.2.Click the New Note drop down button.3.Enter a note.4.You have the option to make this note available to your provider. Click the + check box adjacent to Note is vendor-public.5.Click Create. The note will appear on the screen.6.Click Return to return to the line item. When you return to the line item, a number in parentheses adjacent to notes indicates how many notes are attached to the item. - Holdings maintenanceHoldings maintenance + Holdings maintenanceHoldings maintenance After an item has been received, click Actions → Holdings Maintenance to edit holdings. The Holdings Maintenance screen opens in a new tab. - Link to invoiceLink to invoice + Link to invoiceLink to invoice Use the Link to invoice menu item to link the line item to an invoice that already exists in the ILS. - 1.Click Actions → Link to Invoice.2.A pop up box appears. Enter an invoice number.3.Enter a provider. The field will auto-complete.4.Click Link. + 1.Click Actions → Link to Invoice.2.A pop up box appears. Enter an invoice number.3.Enter a provider. The field will auto-complete.4.Click Link. - Update barcodesUpdate barcodes + Update barcodesUpdate barcodes After an item has been received, click Actions → Update Barcodes to edit holdings. The Volume and Copy Creator screen opens in a new tab. - View historyView history + View historyView history Click Actions → View history to view the changes that have occurred in the life of the line item. - View invoiceView invoice + View invoiceView invoice Click Actions → View invoice to view any invoices that are attached to the line item. 
- Line Item WorksheetLine Item Worksheet
+ Line Item WorksheetLine Item Worksheet
 The Line Item Worksheet was designed to be a printable sheet that contains
 details about the line item, including alerts and notes, and distribution of the
 copies. This worksheet could be placed in a book that is sent to cataloging or
 processing.
- 1.From a selection list or purchase order, click the worksheet link on the line item.2.The line item worksheet appears.3.To print the worksheet, click the Print Page link in the top right corner.
+ 1.From a selection list or purchase order, click the worksheet link on the line item.2.The line item worksheet appears.3.To print the worksheet, click the Print Page link in the top right corner.

- Link line items to the catalogLink line items to the catalog
+ Link line items to the catalogLink line items to the catalog
 You can link a MARC record or brief record on a selection list to the
 corresponding MARC record in the catalog. This may be useful for librarians who have a brief MARC record in their catalog
@@ -693,25 +693,25 @@
 enables you to search for a matching record and link to it from the selection
 list or purchase order. When you import the record from the purchase order, the
 record will overlay the linked record in the catalog.
- 1.From the line item, click Link to catalog.2.In the text box that pops up, search terms, such as ISBN and title, are entered by default.3.Click Search.4.Result(s) appear.
Click the link to View MARC, or Select the record to link it to the record on + the selection list or purchase order.5.The screen will reload, and the line item displays with a catalog link. The records are linked. Load Bib Records and Items Into the CatalogLoad Bib Records and Items Into the Catalog You can load bib records and items into the catalog at three different locations in the acquisitions module. - •You can import bib records and items (if holdings information is attached) when you upload + •You can import bib records and items (if holdings information is attached) when you upload MARC order records. Click Acquisitions → Load MARC Order Records and check the box - adjacent to Load Bibs and Items into the ILS.•You can import bib records and items into the catalog when you create a purchase order + adjacent to Load Bibs and Items into the ILS.•You can import bib records and items into the catalog when you create a purchase order from a selection list. From the selection list, click Actions → Create Purchase Order. Check - the box adjacent to Load Bibs and Items into the ILS to import the records into the catalog.•You can import bib records and items into the catalog from a purchase order by clicking + the box adjacent to Load Bibs and Items into the ILS to import the records into the catalog.•You can import bib records and items into the catalog from a purchase order by clicking Actions → Load Bibs and Items. - If you have not loaded bib records and items into the catalog before you activate + If you have not loaded bib records and items into the catalog before you activate a purchase order, then the ILS will automatically import the bib records and items into the catalog when you activate the purchase order. 
- Load Catalog Record IDsLoad Catalog Record IDs + Load Catalog Record IDsLoad Catalog Record IDs The Load Catalog Record IDs function enables you to create line items from a list of catalog records whose record IDs are saved in a CSV file.This would be useful if you want to batch order copies of items that your @@ -721,40 +721,40 @@ extra copy of six titles. Your copy/hold ratio report includes the record ID of each title. You can save the record IDs into a CSV file, upload the file into the ILS, and create a purchase order for the items. - 1.Create a CSV file with the record ID of each catalog record in the first + 1.Create a CSV file with the record ID of each catalog record in the first column of the spreadsheet. You can create this CSV file from a spreadsheet generated by a report, as suggested in the aforementioned example. You can also copy and paste record IDs from the catalog record into the CSV file. - Record IDs are auto-generated digits associated with each record. They are + Record IDs are auto-generated digits associated with each record. They are found in the Record Summary that appears at the top of each record. - 2.Save the CSV file to your computer.3.Click Acquisitions → Load Catalog Record IDs.4.Click Load More Terms.5.The screen will display the number of terms (record IDs) that have been loaded.6.Click Retrieve Records. The records will appear as line items to which you can add copies, + 2.Save the CSV file to your computer.3.Click Acquisitions → Load Catalog Record IDs.4.Click Load More Terms.5.The screen will display the number of terms (record IDs) that have been loaded.6.Click Retrieve Records. The records will appear as line items to which you can add copies, notes, and pricing information. Use the Actions menu to save these items to a selection list or purchase order. 
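The one-column CSV input described above can be sketched in a few lines of Python. This is only an illustration: the record IDs below are invented (real IDs come from the Record Summary of each catalog record or from a report), and the sketch assumes the loader reads IDs from the first column with no header row, as the steps above suggest.

```python
import csv
import io

# Hypothetical record IDs, e.g. copied from a copy/hold ratio report.
record_ids = [1376, 2045, 3310, 4120, 5287, 6014]

# One record ID per row, in the first column, with no header row --
# the Load Catalog Record IDs screen reads IDs from the first column.
buf = io.StringIO()
writer = csv.writer(buf)
for rid in record_ids:
    writer.writerow([rid])

csv_text = buf.getvalue()
print(csv_text)
```

Writing `csv_text` to a file such as `record_ids.csv` (a hypothetical name) yields a file ready to upload via Acquisitions → Load Catalog Record IDs.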
- Load MARC Order RecordsLoad MARC Order Records + Load MARC Order RecordsLoad MARC Order Records The Load MARC Order Records screen enables you to upload MARC records that have been saved on your computer into the ILS. You can add the records to a selection list and/or to a purchase order. You can both create and activate purchase orders in one step from this interface. Also, from this interface, you can load bibs and items into the catalog. - 1.Click Acquisitions → Load MARC Order Records2.If you want to upload the MARC records to a new purchase order, then click the check box - adjacent to Create Purchase Order.3.If you want to activate the purchase order at the time of creation, then click the check box - adjacent to Activate Purchase Order.4.If you want to load bibs and items into the catalog, then click the check box adjacent to - Load Bibs and Items into the ILS.5.Enter the name of the Provider. The text will auto-complete.6.Select an org unit from the drop down menu. The context org unit is the org unit that + 1.Click Acquisitions → Load MARC Order Records2.If you want to upload the MARC records to a new purchase order, then click the check box + adjacent to Create Purchase Order.3.If you want to activate the purchase order at the time of creation, then click the check box + adjacent to Activate Purchase Order.4.If you want to load bibs and items into the catalog, then click the check box adjacent to + Load Bibs and Items into the ILS.5.Enter the name of the Provider. The text will auto-complete.6.Select an org unit from the drop down menu. The context org unit is the org unit that "owns" the bib record. You should select a physical location rather than a political or administrative org unit as the context org unit. For example, the Smith County Library System is funding purchase of a copy of Gone with the Wind. The system owns the bib record, but it cannot receive the physical item. 
The acquisitions librarian will choose a physical branch of that system, a processing center or an individual branch, to receive the - item.7.If you want to upload the records to a selection list, you can select a list from the drop down - menu, or type in the name of the selection list that you want to create.8.Click Browse to search for the file of bibliographic records.9.Click Upload.10.A summary of the items that have been processed will appear.11.Click the links that appear to view the purchase order or the selection list. + item.7.If you want to upload the records to a selection list, you can select a list from the drop down + menu, or type in the name of the selection list that you want to create.8.Click Browse to search for the file of bibliographic records.9.Click Upload.10.A summary of the items that have been processed will appear.11.Click the links that appear to view the purchase order or the selection list. - MARC Federated SearchMARC Federated Search + MARC Federated SearchMARC Federated Search The MARC Federated Search enables you to import bibliographic records into a selection list or purchase order from a Z39.50 source. - 1.Click Acquisitions → MARC Federated Search.2.Check the boxes of Z39.50 services that you want to search. Your local Evergreen Catalog is - checked by default. Click Submit.3.A list of results will appear. Click the Copies link to add copy information to the line item.4.Click the Notes link to add notes or line item alerts to the line item.5.Enter a price in the Estimated Price field.6.You can save the line item(s) to a selection list by checking the box on the line item and + 1.Click Acquisitions → MARC Federated Search.2.Check the boxes of Z39.50 services that you want to search. Your local Evergreen Catalog is + checked by default. Click Submit.3.A list of results will appear. 
Click the Copies link to add copy information to the line item.4.Click the Notes link to add notes or line item alerts to the line item.5.Enter a price in the Estimated Price field.6.You can save the line item(s) to a selection list by checking the box on the line item and
 clicking Actions → Save Items to Selection List. You can also create a purchase
 order from the line item(s) by checking the box on the line item and clicking
 Actions → Create Purchase Order.
@@ -772,77 +772,77 @@
 order records, the View/Place Orders link in the catalog, or results from a
 MARC Federated Search. You can also create blanket purchase orders to which
 you can add brief records or generic charges and fees.
- Activate a purchase orderActivate a purchase order
+ Activate a purchase orderActivate a purchase order
 Before you can activate a purchase order, the following criteria must be met:
- 1.The field, Activate Order?, is located in the top half of the purchase order. The answer
- adjacent to this field must be Yes.2.Each line item must contain an estimated price. If the Activate Order? field in the top half
+ 1.The field, Activate Order?, is located in the top half of the purchase order. The answer
+ adjacent to this field must be Yes.2.Each line item must contain an estimated price. If the Activate Order? field in the top half
 of the purchase order reads, No: The lineitem has no price
 (ACQ_LINEITEM_NO_PRICE), then simply enter a price in the estimated price
 field, tab out of the field, and click Reload.When the above criteria have been met, proceed with the following: Look at the
 Activate Order? field in the top half of the purchase order. Click the hyperlinked
 Activate Order. When you activate the order, the bibliographic records and
 copies will be imported into the catalog, and the funds associated with the
 purchases will be encumbered.
- You can add brief records to new or existing purchase orders.
+ You can add brief records to new or existing purchase orders.
- Add charges, taxes, fees, or discounts to a purchase orderAdd charges, taxes, fees, or discounts to a purchase order + Add charges, taxes, fees, or discounts to a purchase orderAdd charges, taxes, fees, or discounts to a purchase order You can add charges, taxes, fees, or discounts to a purchase order. These additional charges will be reflected in the amounts that are estimated and encumbered on the purchase order. - 1.Open or create a purchase order.2.Click New charge.3.Select a charge type from the drop down menu.4.Select a fund from the drop down menu.5.Enter a Title/Description, Author, and Note if applicable.6.Enter an estimated cost.7.Add another new charge, or click Save New Charges. - Discounts are not consistently supported in the 2.0 release. + 1.Open or create a purchase order.2.Click New charge.3.Select a charge type from the drop down menu.4.Select a fund from the drop down menu.5.Enter a Title/Description, Author, and Note if applicable.6.Enter an estimated cost.7.Add another new charge, or click Save New Charges. + Discounts are not consistently supported in the 2.0 release. - Add notes to a purchase orderAdd notes to a purchase order + Add notes to a purchase orderAdd notes to a purchase order You can add notes to each purchase order. These can be viewed by staff and/or by the provider. By default, notes are only visible to staff. - 1.Open a purchase order.2.In the top half of the purchase order, you see a Notes field. The number of notes that are - attached to the purchase order is hyperlinked in parentheses next to the Notes field.3.Click the hyperlinked number.4.Click New Note.5.Enter the note. If you wish to make it available to the provider, click the check box adjacent - to Note is vendor-public.6.Click Create. + 1.Open a purchase order.2.In the top half of the purchase order, you see a Notes field. 
The number of notes that are + attached to the purchase order is hyperlinked in parentheses next to the Notes field.3.Click the hyperlinked number.4.Click New Note.5.Enter the note. If you wish to make it available to the provider, click the check box adjacent + to Note is vendor-public.6.Click Create. - Create a purchase orderCreate a purchase order + Create a purchase orderCreate a purchase order - 1.Click Acquisitions → Create Purchase Order.2.A pop-up box appears. Select an owning library from the drop down menu.3.Enter a provider in the box. The text will auto complete.4.Check the box adjacent to Prepayment Required.5.Click Save.6.The purchase order has been created. You can now create a new charge type or add a brief + 1.Click Acquisitions → Create Purchase Order.2.A pop-up box appears. Select an owning library from the drop down menu.3.Enter a provider in the box. The text will auto complete.4.Check the box adjacent to Prepayment Required.5.Click Save.6.The purchase order has been created. You can now create a new charge type or add a brief record. - The Total Estimated is the sum of the prices. The Total Encumbered is the total + The Total Estimated is the sum of the prices. The Total Encumbered is the total estimated that is encumbered when the purchase order is activated. The Total Spent column automatically updates when the items are invoiced. - Mark ready for orderMark ready for order + Mark ready for orderMark ready for order After an item has been added to a selection list or purchase order, you can mark it ready for order. This step is optional but may be useful to individual workflows. - 1.If you want to mark part of a selection list ready for selector, then you can check the box(es) + 1.If you want to mark part of a selection list ready for selector, then you can check the box(es) of the line item(s) that you wish to mark ready for selector. 
If you want to mark the entire - list ready for selector, then skip to step 2.2.Click Actions → Mark Ready for Order.3.A pop up box will appear. Choose to mark the selected line items or all line items.4.Click Go.5.The screen will refresh. The line item will be highlighted gray, and the status will change to + list ready for selector, then skip to step 2.2.Click Actions → Mark Ready for Order.3.A pop up box will appear. Choose to mark the selected line items or all line items.4.Click Go.5.The screen will refresh. The line item will be highlighted gray, and the status will change to order-ready. - Name a purchase orderName a purchase order + Name a purchase orderName a purchase order A new purchase order is given the purchase order ID as a default name. However, you can change that name to any grouping of letters or numbers. You can reuse purchase order names as long as a name is never used twice in the same year. - 1.Open or create a purchase order.2.The Name of the purchase order is in the top left column of the purchase order. The - hyperlinked number is an internal ID number that Evergreen has assigned.3.To change this number, click on the hyperlinked ID.4.Enter a new purchase order number in the pop up box.5.Click OK. + 1.Open or create a purchase order.2.The Name of the purchase order is in the top left column of the purchase order. The + hyperlinked number is an internal ID number that Evergreen has assigned.3.To change this number, click on the hyperlinked ID.4.Enter a new purchase order number in the pop up box.5.Click OK. - Print purchase ordersPrint purchase orders + Print purchase ordersPrint purchase orders You can print a purchase order from the purchase order screen. If you add a note to a line item, the note will only appear in the Notes column on the printed purchase order if you make the note vendor-public. Currently, no notes appear in the Notes to the Vendor section of the printed purchase order. 
- 1.Open a purchase order.2.Click Actions → Print Purchase Order.
+ 1.Open a purchase order.2.Click Actions → Print Purchase Order.

- Split order by line itemsSplit order by line items
+ Split order by line itemsSplit order by line items
 You can create a purchase order with multiple line items, and then split the
 purchase order so that each line item is on a separate purchase order.When a purchase order is in the status of pending, a link to split order by
 Lineitems appears in the bottom left corner of the top half of the screen.
- 1.Click Split Order by Lineitems.2.A pop up box will confirm that you want to split the purchase order. Click OK to continue.3.The items will display by default as a virtual combined purchase order. Future
+ 1.Click Split Order by Lineitems.2.A pop up box will confirm that you want to split the purchase order. Click OK to continue.3.The items will display by default as a virtual combined purchase order. Future
 enhancements will allow you to activate the purchase order for each item from
 this screen.

- View On-Order Purchase OrdersView On-Order Purchase Orders
+ View On-Order Purchase OrdersView On-Order Purchase Orders
 You can view a list of on-order purchase orders by clicking Acquisitions → Purchase
 Orders. The ordering agency defaults to the branch at which your workstation is registered. The state of the
@@ -850,14 +850,14 @@
 Click Search to begin your search.If you want to expand or change your search of purchase orders, you can choose
 other criteria from the drop down menus.
- View EDI messages on a purchase orderView EDI messages on a purchase order
+ View EDI messages on a purchase orderView EDI messages on a purchase order
 You can view electronic messages from your vendor about a specific purchase
 order.
- 1.Open a purchase order.2.In the top half of the purchase order, you see an EDI Messages field.
The number of messages that are attached to the purchase order is hyperlinked in parentheses next to the - EDI Messages field.3.Click the hyperlinked number to view the messages. + EDI Messages field.3.Click the hyperlinked number to view the messages. - View Purchase Order HistoryView Purchase Order History + View Purchase Order HistoryView Purchase Order History In the top half of the purchase order, you can view the history of the purchase order. Click the View link in the History field. @@ -867,20 +867,20 @@ You can receive and un-receive entire purchase orders, line items, and individual copies. You can receive items before or after you invoice items. - Receive/un-receive copiesReceive/un-receive copies + Receive/un-receive copiesReceive/un-receive copies - •To receive copies, click the Copies link on the line item, and click the Mark Received link - adjacent to each copy.•To un-receive copies, click the Copies link on the line item, and click the Un-Receive link + •To receive copies, click the Copies link on the line item, and click the Mark Received link + adjacent to each copy.•To un-receive copies, click the Copies link on the line item, and click the Un-Receive link adjacent to each copy. - Receive/un-receive line itemsReceive/un-receive line items + Receive/un-receive line itemsReceive/un-receive line items - •To receive a line item, click the Actions → Mark Received link on the line item.•To un-receive a line item, click the Actions → Un-receive link on the line item. + •To receive a line item, click the Actions → Mark Received link on the line item.•To un-receive a line item, click the Actions → Un-receive link on the line item. - Receive/un-receive purchase ordersReceive/un-receive purchase orders + Receive/un-receive purchase ordersReceive/un-receive purchase orders - •To receive a purchase order, click Actions →Mark Purchase Order as Received. 
The - purchase order will have a state of received.•To un-receive a purchase order, click Actions →Un-Receive Purchase Order. The purchase + •To receive a purchase order, click Actions →Mark Purchase Order as Received. The + purchase order will have a state of received.•To un-receive a purchase order, click Actions →Un-Receive Purchase Order. The purchase will have a state of on order. @@ -889,14 +889,14 @@ In the acquisitions module, you can search line items, line items and catalog records, selection lists, purchase orders, and invoices. To access the searching interface, click Acquisitions → General Search. - Users may wish to begin their acquisitions process by searching line items + Users may wish to begin their acquisitions process by searching line items and catalog records. This ensures that they do not purchase an item that the library already owns or is on another selection list or purchase order. - 1.Choose the object that you would like to search from the drop down menu.2.Next, refine your search by choosing the specific fields that you would like to search. Click + 1.Choose the object that you would like to search from the drop down menu.2.Next, refine your search by choosing the specific fields that you would like to search. Click Add Search Term to add more fields. Search terms are ANDed together. Click the red X at the end of each row to delete search terms. Some search terms will be disabled depending - on your choice of items to search.3.After you have added search term(s), click Search or click the Enter key. A list of results - appears.4.If you want to edit your search, click the Reveal Search button in the top right corner of the + on your choice of items to search.3.After you have added search term(s), click Search or click the Enter key. A list of results + appears.4.If you want to edit your search, click the Reveal Search button in the top right corner of the results screen to display your search. 
Selection ListsSelection Lists @@ -905,61 +905,61 @@ may want to purchase. To view your selection list, click Acquisitions → My Selection Lists. Use the general search to view selection lists created by other users. - Create a selection listCreate a selection list + Create a selection listCreate a selection list Selection lists can be created in four areas within the module. Selection lists can be created when you Add Brief Records, Upload MARC Order Records, or find records through the MARC Federated Search. In each of these interfaces, you will find the Add to Selection List field. Enter the name of the selection list that you want to create in that field. Selection lists can also be created through the My Selection Lists interface: - 1.Click Acquisitions → My Selection Lists.2.Click the New Selection List drop down arrow.3.Enter the name of the selection list in the box that appears.4.Click Create. + 1.Click Acquisitions → My Selection Lists.2.Click the New Selection List drop down arrow.3.Enter the name of the selection list in the box that appears.4.Click Create. - Add items to a selection listAdd items to a selection list + Add items to a selection listAdd items to a selection list You can add items to a selection list in one of three ways: add a brief record; upload MARC order records; add records through a federated search; or use the View/Place Orders menu item in the catalog. - Clone selection listsClone selection lists + Clone selection listsClone selection lists Cloning selection lists enables you to copy one selection list into a new selection list. You can maintain both copies of the list, or you can delete the previous list. - 1.Click Acquisitions → My Selection Lists.2.Check the box adjacent to the list that you want to clone.3.Click Clone Selected.4.Enter a name into the box that appears, and click Clone. 
+ 1.Click Acquisitions → My Selection Lists.2.Check the box adjacent to the list that you want to clone.3.Click Clone Selected.4.Enter a name into the box that appears, and click Clone. - Merge selection listsMerge selection lists + Merge selection listsMerge selection lists You can merge two or more selection lists into one selection list. - 1.Click Acquisitions → My Selection Lists.2.Check the boxes adjacent to the selection lists that you want to merge, and click Merge - Selected.3.Choose the Lead Selection List from the drop down menu. This is the list to which the items - on the other list(s) will be transferred.4.Click Merge. + 1.Click Acquisitions → My Selection Lists.2.Check the boxes adjacent to the selection lists that you want to merge, and click Merge + Selected.3.Choose the Lead Selection List from the drop down menu. This is the list to which the items + on the other list(s) will be transferred.4.Click Merge. - Delete selection listsDelete selection lists + Delete selection listsDelete selection lists You can delete selection lists that you do not want to save. You will not be able to retrieve these items through the General Search after you have deleted the list. You must delete all line items from a selection list before you can delete the list. - 1.Click Acquisitions → My Selection Lists.2.Check the box adjacent to the selection list(s) that you want to delete.3.Click Delete Selected. + 1.Click Acquisitions → My Selection Lists.2.Check the box adjacent to the selection list(s) that you want to delete.3.Click Delete Selected. - Mark Ready for SelectorMark Ready for Selector + Mark Ready for SelectorMark Ready for Selector After an item has been added to a selection list or purchase order, you can mark it ready for selector. This step is optional but may be useful to individual workflows. 
- 1.If you want to mark part of a selection list ready for selector, then you can check the box(es) + 1.If you want to mark part of a selection list ready for selector, then you can check the box(es) of the line item(s) that you wish to mark ready for selector. If you want to mark the entire - list ready for selector, then skip to step 2.2.Click Actions → Mark Ready for Selector.3.A pop up box will appear. Choose to mark the selected line items or all line items.4.Click Go.5.The screen will refresh. The marked line item(s) will be highlighted pink, and the status + list ready for selector, then skip to step 2.2.Click Actions → Mark Ready for Selector.3.A pop up box will appear. Choose to mark the selected line items or all line items.4.Click Go.5.The screen will refresh. The marked line item(s) will be highlighted pink, and the status changes to selector-ready. - Convert selection list to purchase orderConvert selection list to purchase order + Convert selection list to purchase orderConvert selection list to purchase order Use the Actions menu to convert a selection list to a purchase order. - 1.From a selection list, click Actions → Create Purchase Order.2.A pop up box will appear.3.Select the ordering agency from the drop down menu.4.Enter the provider.5.Check the box adjacent to prepayment required if prepayment is required.6.Choose if you will add All Lineitems or Selected Lineitems to your purchase order.7.Check the box if you want to Import Bibs and Create Copies in the catalog.8.Click Submit. + 1.From a selection list, click Actions → Create Purchase Order.2.A pop up box will appear.3.Select the ordering agency from the drop down menu.4.Enter the provider.5.Check the box adjacent to prepayment required if prepayment is required.6.Choose if you will add All Lineitems or Selected Lineitems to your purchase order.7.Check the box if you want to Import Bibs and Create Copies in the catalog.8.Click Submit. 
View/Place OrdersView/Place Orders - 1.Open a bib record.2.Click Actions for this Record → View/Place Orders.3.Click Add to Selection List, or click Create Purchase Order. + 1.Open a bib record.2.Click Actions for this Record → View/Place Orders.3.Click Add to Selection List, or click Create Purchase Order. Chapter 5. Acquisitions Module Processes - KCLSChapter 5. Acquisitions Module Processes - KCLS @@ -970,214 +970,214 @@ OrderingOrdering - Find or Create the RecordFind or Create the Record + Find or Create the RecordFind or Create the Record For adds: - 1.Search for title in the catalog.2.Click on the title link.3.Right-click on the Bib Call # at the top of screen and copy the call number.4.Go to Marc Edit on Actions for this Record menu. (You can set Marc Edit to be your default if you choose).5.Confirm correct ISBN/UPC is in top position. If not, move it to top. This can be done in the Flat Text Editor. Then copy/paste the fields where + 1.Search for title in the catalog.2.Click on the title link.3.Right-click on the Bib Call # at the top of screen and copy the call number.4.Go to Marc Edit on Actions for this Record menu. (You can set Marc Edit to be your default if you choose).5.Confirm correct ISBN/UPC is in top position. If not, move it to top. This can be done in the Flat Text Editor. Then copy/paste the fields where you need them to go. For new orders: - 1.For print orders, search for title in OCLC. If the record is in OCLC:2.Update holdings in OCLC.3.Confirm correct ISBN/UPC is in top position. If not, move it to top.4.Export it into Evergreen using the ACQMASTERMACRO OCLC macro (do not overlay).5.Search for title in the catalog. Click on the title link.6.For non-print orders OR if the record is not in OCLC, create a brief record: - a.Select Create New Marc Record on the Cataloging menu.b.Click the Load button. This will bring up a blank Marc record.c.Enter your short record information. Use tab or mouse to move from one field to the next. 
Click on the Help button to see shortcut keys.d.Enter the date in TWO places. Enter the date in the 260ǂc AND enter it in the Date1 box at top of record.e.Add a row at the end of the record (put cursor in last row and type Ctrl+Enter). Type in 998 and 2 spaces. Type a “d” (the “d” should be blue).
Then type the letter code
that corresponds to the material type in lowercase (for example, book=a).f.Click the Create Record button. Reload if needed.

If you need to go back and edit the short record after clicking Create Record, remember that the 901 field must be the last field in the record.
All fields following the 901 will be deleted when you save the record.
- Create the OrderCreate the Order + Create the OrderCreate the Order - 1.From the catalog record, click View/Place Orders on the Actions for this Record menu.2.Click on the Create Purchase Order button.3.Enter the following as shown below: - a.Ordering Agency = PRb.Enter Provider code (type slowly)c.Uncheck “Prepayment Required” check box (unless valid).d.Confirm “All Line Items” button is selected. - 4.Click Submit.5.If you get a dialog box about prepayment being required even though you unchecked the box, click on OK to proceed anyway.6.If you get this error, click OK and Reload.7.Click on Copies.8.On the Copies screen: - a.Enter item count and click Go.b.Enter shelving location in 2nd drop down in Batch Update row.c.Enter fund in 4th drop down in Batch Update row.d.Enter Circ Modifier in 5th drop down in Batch Update row.e.For NEW orders, enter ON ORDER call number (format specific) in last box in Batch Update row.f.For ADDS, paste in the call number from the bib record.g.Click Batch Update.h.Enter Distribution Formula and click Apply.i.Click Save Changes.j.Click Return. - 9.Click Notes. Add a note for format (for example, paperback, library binding, etc.). Check the box in the note to make it vendor public. It will print on + 1.From the catalog record, click View/Place Orders on the Actions for this Record menu.2.Click on the Create Purchase Order button.3.Enter the following as shown below: + a.Ordering Agency = PRb.Enter Provider code (type slowly)c.Uncheck “Prepayment Required” check box (unless valid).d.Confirm “All Line Items” button is selected. 
+ 4.Click Submit.5.If you get a dialog box about prepayment being required even though you unchecked the box, click on OK to proceed anyway.6.If you get this error, click OK and Reload.7.Click on Copies.8.On the Copies screen: + a.Enter item count and click Go.b.Enter shelving location in 2nd drop down in Batch Update row.c.Enter fund in 4th drop down in Batch Update row.d.Enter Circ Modifier in 5th drop down in Batch Update row.e.For NEW orders, enter ON ORDER call number (format specific) in last box in Batch Update row.f.For ADDS, paste in the call number from the bib record.g.Click Batch Update.h.Enter Distribution Formula and click Apply.i.Click Save Changes.j.Click Return. + 9.Click Notes. Add a note for format (for example, paperback, library binding, etc.). Check the box in the note to make it vendor public. It will print on PO or be transmitted to vendor electronically. Enter another note for cataloging instructions (for example, CAT A) but do not check the vendor public box. Enter other notes as needed. - 10.Click Return.11.Enter the item price in the Estimated Price box.12.Click Reload.13.Click on Activate Order link.14.Select Print Purchase Order (if not an EDI account) from the P.O. Actions drop down menu.15.If order has copies for suppressed libraries or Reference items, click the Catalog link next to the line item number (or go back to Bib Record tab). + 10.Click Return.11.Enter the item price in the Estimated Price box.12.Click Reload.13.Click on Activate Order link.14.Select Print Purchase Order (if not an EDI account) from the P.O. Actions drop down menu.15.If order has copies for suppressed libraries or Reference items, click the Catalog link next to the line item number (or go back to Bib Record tab). Select Holdings Maintenance from the Actions for this Record menu. Edit the items/apply templates. 
•Example: Suppressed library•Example: Adult Reference
- 16.If order has a hold(s), click the Catalog link next to the line item number (or go back to Bib Record tab) and place hold(s).

Receiving Print MaterialsReceiving Print Materials

1.
From the Cataloging menu Select Search the Catalog.
2.
Select your title. (If you cannot find the record linked to the order, use the Acquisitions General Search to search by line number – see #1 under Alternative Workflow section.)
3.
Click on Actions for this Record Select View/Place Orders.
4.
Verify the Line number and Purchase Order number match the numbers on the packing slip/invoice. Click on Purchase Order Number link.
5.
Purchase Order will display. (Purchase Order Status & Line Item Status = on-order)
6.
Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see Partial Receipts section.
7.
Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.”
8.
Click on worksheet link. Print worksheet. Click Go Back.
9.
Switch tabs back to Bib Record tab. Catalog record and if it is a new title, update call number when possible.
10.
Switch tabs back to Related Lineitems tab. Click on the Line Item Actions drop down menu Select Update Barcodes.
11.
Or go to Holdings Maintenance to replace barcodes.
12.
Apply call number to all copies and replace barcodes.
13.
Click Edit then Re-barcode – the Copy Editor box will appear.
Apply templates as needed, confirm suppressed libraries and Reference items are correctly flagged, and click Modify Copies. - 14. + 14. For new orders, go to Holdings Maintenance and delete ON ORDER call numbers. (You can go to Holdings Maintenance by switching tabs back to Bib Record tab or by selecting Holdings Maintenance on the Line Item Actions drop down menu.) - Receiving Print Materials - Alternative WorkflowReceiving Print Materials - Alternative Workflow + Receiving Print Materials - Alternative WorkflowReceiving Print Materials - Alternative Workflow You can also receive in Acquisitions on one tab and then search the catalog by title on another tab. - 1.From the Acquisitions menu Select General Search - •To search by ISBN:Search for “line items” matching “all” of the following terms: “LIA – ISBN” is “[enter/scan ISBN].”•To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is “[enter your line number from packing + 1.From the Acquisitions menu Select General Search + •To search by ISBN:Search for “line items” matching “all” of the following terms: “LIA – ISBN” is “[enter/scan ISBN].”•To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is “[enter your line number from packing slip/invoice].” - 2.Click Search.3.Click Purchase Order number link.4.Purchase Order will display. (Purchase Order Status & Line Item Status = on-order)5.Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format + 2.Click Search.3.Click Purchase Order number link.4.Purchase Order will display. (Purchase Order Status & Line Item Status = on-order)5.Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. 
If everything checks out with the order, you are ready to receive. If partial receipt, see - Partial Receipts section.6.Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.”7.Click on worksheet link. Print worksheet. Click Go Back.8.Open a new tab and search the catalog by title. Select title.9.Catalog record and if it is a new title, update call number when possible.10.Switch tabs back to Related Lineitems tab. Click on the Line Item Actions drop down menu Select Update Barcodes.11.Or go to Holdings Maintenance to replace barcodes.12.Apply call number to all copies and replace barcodes.13.Click Edit then Re-barcode – the Copy Editor box will appear. Apply templates as needed, confirm suppressed libraries and Reference items are correctly - flagged, and click Modify Copies.14.For new orders, go to Holdings Maintenance and delete ON ORDER call numbers. (You can go to Holdings Maintenance by switching tabs back to Bib Record tab or + Partial Receipts section.6.Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.”7.Click on worksheet link. Print worksheet. Click Go Back.8.Open a new tab and search the catalog by title. Select title.9.Catalog record and if it is a new title, update call number when possible.10.Switch tabs back to Related Lineitems tab. Click on the Line Item Actions drop down menu Select Update Barcodes.11.Or go to Holdings Maintenance to replace barcodes.12.Apply call number to all copies and replace barcodes.13.Click Edit then Re-barcode – the Copy Editor box will appear. Apply templates as needed, confirm suppressed libraries and Reference items are correctly + flagged, and click Modify Copies.14.For new orders, go to Holdings Maintenance and delete ON ORDER call numbers. 
(You can go to Holdings Maintenance by switching tabs back to Bib Record tab or by selecting Holdings Maintenance on the Line Item Actions drop down menu.) - Receiving Print Materials - Partial ReceiptsReceiving Print Materials - Partial Receipts + Receiving Print Materials - Partial ReceiptsReceiving Print Materials - Partial Receipts First Shipment: - 1.From the Purchase Order screen, click Notes link. Check for any earlier partial receipts.2.If first shipment, then click Copies link.3.Click “Mark Received” for number of copies in hand (start with top copy). Click Return.4.If you have most of the copies in hand, you can also select Mark Received (on Actions drop down menu) for entire order and then “un-receive” the - missing copy/copies (starting with top copy). Click Return.5.Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL).6.Catalog as usual.7.To replace barcodes and apply down call number, you must use Holdings Maintenance. Currently it is not possible to replace barcodes using Update Barcodes - in Line Item Actions drop down menu.8.Remember to flag title on invoice and change no. of copies on worksheet. + 1.From the Purchase Order screen, click Notes link. Check for any earlier partial receipts.2.If first shipment, then click Copies link.3.Click “Mark Received” for number of copies in hand (start with top copy). Click Return.4.If you have most of the copies in hand, you can also select Mark Received (on Actions drop down menu) for entire order and then “un-receive” the + missing copy/copies (starting with top copy). Click Return.5.Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL).6.Catalog as usual.7.To replace barcodes and apply down call number, you must use Holdings Maintenance. Currently it is not possible to replace barcodes using Update Barcodes + in Line Item Actions drop down menu.8.Remember to flag title on invoice and change no. of copies on worksheet. 
Next Shipment: - 1.Check Notes to see how many items were previously received.2.If shipment completes the order, click on the Actions drop down menu and select Mark Received.3.If shipment does not complete order, click Copies and mark individual copies as received (as shown above).4.Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL).5.If the shipment completes the order, click on the Line Item Actions drop down menu Select Update Barcodes. Apply down call number and replace barcodes. - Or replace barcodes in Holdings Maintenance.6.Remember to flag title on invoice and change no. of copies on worksheet. Also highlight locations to receive second shipment on worksheet or cross off + 1.Check Notes to see how many items were previously received.2.If shipment completes the order, click on the Actions drop down menu and select Mark Received.3.If shipment does not complete order, click Copies and mark individual copies as received (as shown above).4.Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL).5.If the shipment completes the order, click on the Line Item Actions drop down menu Select Update Barcodes. Apply down call number and replace barcodes. + Or replace barcodes in Holdings Maintenance.6.Remember to flag title on invoice and change no. of copies on worksheet. Also highlight locations to receive second shipment on worksheet or cross off locations already received. - Unreceiving Print MaterialsUnreceiving Print Materials + Unreceiving Print MaterialsUnreceiving Print Materials - 1. To un-receive an order, go to the Purchase Order screen. + 1. To un-receive an order, go to the Purchase Order screen. - 2. To un-receive the complete order, click on Actions drop down menu Select Un-Receive. + 2. To un-receive the complete order, click on Actions drop down menu Select Un-Receive. - 3. To un-receive a partial order, click on Copies link and click Un-Receive for individual copies. + 3. 
To un-receive a partial order, click on Copies link and click Un-Receive for individual copies.

4. If the barcodes have already been replaced, go to Holdings Maintenance and replace the real barcode numbers with temporary barcode numbers. To create a temporary barcode, use your initials and a number (example: cme1). Start with 1 and then auto-generate as needed. Keep track of the last number used so that you don’t create duplicate barcodes the next time.

Receiving Non-print MaterialsReceiving Non-print Materials

1.
From the Cataloging menu Search the Catalog.
2.
Select your title.
3.
Click on Actions for this Record Select View/Place Orders.

Verify that the line number and purchase order number match the numbers on the packing slip/invoice. If the purchase order number is not printed on the packing slip/invoice, write
the purchase order number on the packing slip/invoice.
4.
Click on purchase order number link.
5.
Purchase Order will display. (Purchase Order status & Line Item status = on-order)
6.
Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see Partial Receipts section.
7.
Click on Line Item Actions drop down menu Select Mark Received.
Purchase Order & Line Item status will change to “received.” - 8. + 8. Click on worksheet link. Print worksheet. Click Go Back. - 9. + 9. If new title, overlay short record with OCLC record if available. - a. + a. Switch tabs back to Bib Record tab. Copy TCN. - b. + b. Search OCLC for record. If found, export using overlay macro. - c. + c. Reload record to confirm overlay. - Receiving Non-print Materials - Alternative WorkflowReceiving Non-print Materials - Alternative Workflow + Receiving Non-print Materials - Alternative WorkflowReceiving Non-print Materials - Alternative Workflow - 1.From the Acquisitions menu Select General Search. - •To search by UPC or ISBN: Search for “line items” matching “all” of the following terms: “LIA – UPC” is “[enter/scan UPC] or “LIA - ISBN” is - “[enter/scan ISBN].”•To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is “[enter your line number from packing + 1.From the Acquisitions menu Select General Search. + •To search by UPC or ISBN: Search for “line items” matching “all” of the following terms: “LIA – UPC” is “[enter/scan UPC] or “LIA - ISBN” is + “[enter/scan ISBN].”•To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is “[enter your line number from packing slip/invoice].” - 2.Click Search.3.Click Purchase Order number link.4.Purchase Order will display. (Purchase Order status & Line Item status = on-order)5.Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format + 2.Click Search.3.Click Purchase Order number link.4.Purchase Order will display. (Purchase Order status & Line Item status = on-order)5.Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, - Status, Provider, and Copies. 
If everything checks out with the order, you are ready to receive. If partial receipt, see Partial Receipts section.6.Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.”7.Click on worksheet link. Print worksheet. Click Go Back.8.If new title, overlay short record with OCLC record if available. - a.Switch to second tab and search for title in catalog. Copy TCN.b.Search OCLC for record. If found, export using overlay macro.c.Reload record to confirm overlay. + Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see Partial Receipts section.6.Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.”7.Click on worksheet link. Print worksheet. Click Go Back.8.If new title, overlay short record with OCLC record if available. + a.Switch to second tab and search for title in catalog. Copy TCN.b.Search OCLC for record. If found, export using overlay macro.c.Reload record to confirm overlay. - Receiving Non-print Materials - Partial ReceiptsReceiving Non-print Materials - Partial Receipts + Receiving Non-print Materials - Partial ReceiptsReceiving Non-print Materials - Partial Receipts First Shipment: - 1.From the Purchase Order screen, check Notes for any earlier partial receipts.2.If first shipment, then click Copies link.3.Click “Mark Received” for number of copies in hand (start with top copy). Click Return.4.Or if you have the majority of the copies in hand, you can select Mark Received (on Actions drop down menu) for entire order and then “un-receive” + 1.From the Purchase Order screen, check Notes for any earlier partial receipts.2.If first shipment, then click Copies link.3.Click “Mark Received” for number of copies in hand (start with top copy). 
Click Return.4.Or if you have the majority of the copies in hand, you can select Mark Received (on Actions drop down menu) for entire order and then “un-receive” the missing copy/copies (starting - with top copy). Click Return.5.Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL).6.Remember to flag title on invoice and change no. of copies on worksheet. + with top copy). Click Return.5.Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL).6.Remember to flag title on invoice and change no. of copies on worksheet. Next Shipment: - 1.Check Notes to see how many items were previously received.2.If shipment completes the order, click on the Actions drop down menu and select Mark Received.3.If shipment does not complete order, click Copies and mark individual copies as received (as shown above).4.Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL).5.Remember to flag title on invoice and change no. of copies on worksheet. Also highlight locations to receive second shipment on worksheet or cross off + 1.Check Notes to see how many items were previously received.2.If shipment completes the order, click on the Actions drop down menu and select Mark Received.3.If shipment does not complete order, click Copies and mark individual copies as received (as shown above).4.Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL).5.Remember to flag title on invoice and change no. of copies on worksheet. Also highlight locations to receive second shipment on worksheet or cross off locations already received. Unreceiving Non-print MaterialsUnreceiving Non-print Materials - 1.To un-receive an order, go to the Purchase Order screen.2.To un-receive the complete order, click on Actions drop down menu Select Un-Receive.3.To un-receive a partial order, click on Copies link and click Un-Receive for individual copies. 
+ 1.To un-receive an order, go to the Purchase Order screen.2.To un-receive the complete order, click on Actions drop down menu Select Un-Receive.3.To un-receive a partial order, click on Copies link and click Un-Receive for individual copies. @@ -1212,115 +1212,115 @@ A copy template enables you to specify item attributes that should be applied by default to copies of serials. You can create one copy template and apply it to multiple serials. You can also create multiple copy templates. Templates will be used in the Alternate Serial Control View or the Serial Control View. - Create a copy templateCreate a copy template + Create a copy templateCreate a copy template - 1. + 1. To create a copy template, click Admin → Local Administration → Copy Template Editor. - 2. + 2. Enter a Name for the template. - 3. + 3. Select an owning library from the Owning lib drop down menu. This organization owns the copy template. A staff member with permissions at that organization can modify the copy template. The menu is populated from the organizations that you created in Admin → Server Administration → Organizational Units. - 4. + 4. Click the box adjacent to Circulate? If you want the item to circulate. - 5. + 5. Check the box adjacent to Holdable? if patrons can place holds on the item. - 6. + 6. Check the box adjacent to OPAC Visible? if you want patrons to be able to see the item in the OPAC after you receive it. - 7. + 7. Select a loan duration rule from the drop down menu. - 8. + 8. Select a fine level for the item from the drop down menu. - 9. + 9. Select a copy Location from the drop down menu. The menu is populated from the copy locations that you created in Admin → Local Administration → Copy Locations. - 10. + 10. Select a circ modifier from the drop down box. The menu is populated from the modifiers that you created in Admin → Server Administration → Circulation Modifiers. - 11. + 11. Check the box adjacent to Floating? if the item is part of a floating collection. 
- 12. + 12. Check the box adjacent to Deposit? if patrons must place a deposit on the copy before they can use it. - 13. + 13. Check the box adjacent to Reference? if the item is a reference item. - 14. + 14. If the item is in mint condition, then check the box adjacent to Mint Condition? - 15. + 15. Enter age protection rules in the Age Protect field. Age protection allows you to control the extent to which an item can circulate after it has been received. For example, you may want to protect new copies of a serial so that only patrons who check out the item at your branch can use it. - 16. + 16. Enter a message in the Alert Message field. This message will appear every time the item is checked out to a patron. - 17. + 17. Enter a code from the MARC fixed fields if you want to control the circulation based on the item type in the Circ as Type field. - 18. + 18. Enter a deposit amount if patrons must place a deposit on the copy before they can use it. - 19. + 19. Enter the price of the item. - 20. + 20. Enter the ID of the copy status in the Status field. A list of copy statuses and their IDs can be found in Admin → Server Administration → Copy Status. - 21. + 21. Click Save. Fine level and loan duration are required fields in the Copy Template Editor. - Edit a copy templateEdit a copy template + Edit a copy templateEdit a copy template You can make changes to an existing copy template. Changes that you make to a copy template will apply to any items that you receive after you edited the template. - 1. + 1. To edit a copy template, click your cursor in the row that you want to edit. The row will turn blue. - 2. + 2. Double-click. The copy template will appear, and you can edit the fields. - 3. + 3. After making changes, click Save. @@ -1335,16 +1335,16 @@ and you can generate predictions and receive issues. To access Alternate Serial Control View, open a serials record, and click Actions for this Record → Alternate Serial Control. 
This opens the Subscriptions interface - SubscriptionsSubscriptions + SubscriptionsSubscriptions Add new subscriptions to a serials record that exists in the catalog. - Create a subscriptionCreate a subscription + Create a subscriptionCreate a subscription - 1. + 1. Click New Subscription. - 2. + 2. Select an owning library. The owning library indicates the organizational unit(s) whose staff can use this subscription. This menu is populated with the shortnames that you created for your libraries in the organizational units tree in Admin → @@ -1352,85 +1352,85 @@ The rule of parental inheritance applies to this list. For example, if a system is made the owner of a subscription, then users, with appropriate permissions, at the branches within the system could also use this subscription. - 3. + 3. Enter the date that the subscription begins in the start date. Recommended practice is that you select the date from the drop down calendar although you can manually enter a date. Owning library and start date are required fields in the new subscription pop up box. - 4. + 4. Enter the date that the subscription ends in the end date. Recommended practice is to select a date from the drop down calendar, but you can manually enter a date, also. - 5. + 5. Enter the difference between the nominal publishing date of an issue and the date that you expect to receive your copy in the Expected Date Offset. For example, if an issue is published the first day of each month, but you receive the copy two days prior to the publication date, then enter -2 days into this field. - 6. + 6. Click Save. - 7. + 7. After you save the subscription, it will appear in a list with a hyperlinked ID number. Use the drop down menu at the top of the screen to view subscriptions at other organizations. - Manage a subscriptionManage a subscription + Manage a subscriptionManage a subscription Click the hyperlinked ID number to manage the subscription. 
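The Expected Date Offset mentioned in the subscription steps above is plain date arithmetic: expected arrival = nominal publication date + offset. A minimal sketch using the -2 day example from the text (the dates here are hypothetical illustrations, not Evergreen code):

```python
from datetime import date, timedelta

# Illustration of the Expected Date Offset from the subscription steps:
# an issue nominally published on the first of the month, with an offset
# of -2 days because copies arrive two days before publication.
# The publication date below is a hypothetical example.
published = date(2011, 3, 1)   # nominal publication date
offset_days = -2               # the subscription's Expected Date Offset

expected = published + timedelta(days=offset_days)
print(expected.isoformat())    # 2011-02-27
```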
The tabbed interface enables you to create distributions, captions and patterns, and issuances. - Edit a subscriptionEdit a subscription + Edit a subscriptionEdit a subscription Edit a subscription as you would edit a copy template. - DistributionsDistributions + DistributionsDistributions Distributions indicate the branches that should receive copies of a serial. Distributions work together with streams to indicate the number of copies that should be sent to each branch. - Create a distributionCreate a distribution + Create a distributionCreate a distribution - 1. + 1. Click the Distributions tab. - 2. + 2. Click New Distribution. - 3. + 3. Enter a name for the distribution in the Label field. It may be useful to identify the branch to which you are distributing these issues in this field. This field is not publicly visible and only appears when an item is received. There are no limits on the number of characters that can be entered in this field. - 4. + 4. Select a holding library from the drop down menu. The holding library is the branch that will receive the copies. - 5. + 5. Select a copy template from the Receive Unit Template drop down menu. This menu is populated with the copy templates that you created in Copy Template Editor. - Label, Holding Library, and Receive Unit Template are required fields in the new distribution pop up box. - 6. + Label, Holding Library, and Receive Unit Template are required fields in the new distribution pop up box. + 6. Ignore the fields, Unit Label Prefix and Unit Label Suffix. These fields are not functional in Alternate Serial Control View. - 7. + 7. Click Save. The distribution will appear in a list in the Distributions tab in the Subscription Details. - Edit a distributionEdit a distribution + Edit a distributionEdit a distribution Edit a distribution just as you would edit a copy template. From the distribution interface, you can also delete distributions. 
Deleting the distribution would delete related data,
@@ -1438,205 +1438,205 @@
 Recommended practice is that you do not delete distributions.


- StreamsStreams
+ StreamsStreams

Distributions work together with streams to indicate the number of copies that should be sent to each branch. Distributions identify the branches that should receive copies of a serial. Streams identify how many copies should be sent to each branch. Streams are intended for copies that are received on a recurring, even if irregular, basis.
In our example, the Apex Branch should receive copies, so we created a distribution to that branch. The Apex Branch should receive two copies, so we will create two streams to that branch.

- Create a streamCreate a stream
+ Create a streamCreate a stream

Click the hyperlinked title of the distribution. The number of streams that have already been created for this distribution displays adjacent to the title.
You can choose one of two ways to create a stream: New Stream or Create Many Streams. The New Stream button allows you to create one new stream and assign it a routing label.

- 1.Click New Stream2.Enter a routing label so that the copy could be read by specific users or departments before the copy is shelved. The routing label appears during receiving and could be added to routing lists; it is not viewable by the public. Routing lists do not print from in 2.0. This field is optional.3.Click Save.
+ 1.Click New Stream2.Enter a routing label so that the copy could be read by specific users or departments before the copy is shelved. The routing label appears during receiving and could be added to routing lists; it is not viewable by the public. Routing lists do not print in 2.0. This field is optional.3.Click Save.
The Create Many Streams button allows you to create multiple streams at once, but it does not allow you to add a routing label when you create the stream.

- 1.
+ 1.
Click Create Many Streams.

- 2.
+ 2.
Enter the number of streams that you want to create in the How many? field.

- 3.
+ 3.
Click Create.

- Edit a streamEdit a stream
+ Edit a streamEdit a stream

Edit a stream just as you would edit a copy template. From the streams interface, you can also delete streams. Deleting the stream would delete related data, but it would not delete units, or the copy-equivalent objects that hold barcodes. Recommended practice is that you do not delete streams.

- Captions and PatternsCaptions and Patterns
+ Captions and PatternsCaptions and Patterns

The Captions and Patterns wizard allows you to enter caption and pattern data as it is described by the 853, 854, and 855 MARC tags. These tags allow you to define how issues will be captioned, and how often the library receives issues of the serial.
In 2.0, it is not possible to create a caption and pattern and apply it to multiple subscriptions. However, you can re-use patterns if you copy and paste to and from the pattern code field in the Captions and Patterns tab.

- Create a Caption and PatternCreate a Caption and Pattern
+ Create a Caption and PatternCreate a Caption and Pattern

- 1.
+ 1.
Open the Subscription Details.

- 2.
+ 2.
Click the Captions and Patterns tab.

-3.
+3.
Click Add Caption and Pattern.

-4.
+4.
In the Type drop down box, select the MARC tag to which you would like to add data.

-5.
+5.
In the Pattern Code drop down box, you can enter a JSON representation of the 85X tag by hand, or you can click the Wizard to enter the information in a user-friendly format.

-6.
+6.
The Caption and Pattern that you create is Active by default, but you can deactivate a caption and pattern at a later time by unchecking the box.

-A subscription may have multiple captions and patterns listed in the subscripiton details, but only one Caption and Pattern can be active at any time.
+A subscription may have multiple captions and patterns listed in the subscription details, but only one Caption and Pattern can be active at any time.
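Step 5 above notes that the Pattern Code can be entered by hand as a JSON representation of the 85X tag. As a rough, hypothetical illustration only — the exact array layout Evergreen expects should come from the Pattern Code Wizard — such a code pairs 853 subfield codes with their values:

```python
import json

# Hypothetical sketch of 853 caption/pattern data expressed as a JSON
# array of subfield-code/value pairs. The subfields follow the MARC 853
# definitions ($a/$b enumeration captions, $u units per next higher
# level, $v numbering continuity, $w frequency); the precise layout
# Evergreen stores may differ, so prefer the Pattern Code Wizard for
# real data.
pattern = [
    "a", "v.",   # first-level enumeration caption (volume)
    "b", "no.",  # second-level enumeration caption (number)
    "u", "12",   # 12 second-level units per volume
    "v", "r",    # second-level numbering restarts with each volume
    "w", "m",    # monthly frequency
]
pattern_code = json.dumps(pattern)
print(pattern_code)
```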
If you want to add multiple patterns, e.g. for Basic and Supplement, click Add Caption and Pattern.

-Use the Pattern Code WizardUse the Pattern Code Wizard
+Use the Pattern Code WizardUse the Pattern Code Wizard

The Pattern Code Wizard enables you to create the caption of the item and add its publication information. The Wizard is composed of five pages of questions. You can use the Next and Previous navigation buttons in the top corners to flip between pages.

-1.
+1.
To add a pattern code, click Wizard.

-2.
+2.
Page 1: Enumerations

-a.
+a.
To add an enumeration, check the box adjacent to Use enumerations?. The enumerations conform to $a-$h of the 853, 854, and 855 MARC tags.

-b.
+b.
A field for the First level will appear. Enter the enumeration for the first level. A common first level enumeration is volume, or “v.”

-c.
+c.
Click Add Enumeration.

-d.
+d.
A field for the Second level will appear. Enter the enumeration for the second level. A common second level enumeration is number, or “no.”

-e.
+e.
Enter the number of bibliographic units per next higher level. This conforms to $u in the 853, 854, and 855 MARC tags.

-f.
+f.
Choose the enumeration scheme from the drop down menu. This conforms to $v in the 853, 854, and 855 MARC tags.

-You can add up to six levels of enumeration.
-g.
+You can add up to six levels of enumeration.
+g.
Add Alternate Enumeration if desired.

-h.
+h.
When you have completed the enumerations, click Next.

-3.
+3.
Page 2: Calendar

-a.
+a.
To use months, seasons, or dates in your caption, check the box adjacent to Use calendar changes?

-b.
+b.
Identify the point in the year at which the highest level enumeration caption changes.

-c.
+c.
In the Type drop down menu, select the points during the year at which you want the calendar to restart.

-d.
+d.
In the Point drop down menu, select the specific time at which you would like to change the calendar.

-e.
+e.
To add another calendar change, click Add Calendar Change.
There are no limits on the number of calendar changes that you can add.

-f.
+f.
When you have finished the calendar changes, click Next.

-4.
+4.
Page 3: Chronology

-a.
+a.
To add chronological units to the captions, check the box adjacent to Use chronology captions?

-b.
+b.
Choose a chronology for the first level. If you want to display the terms, “year” and “month” next to the chronology caption in the catalog, then check the box beneath Display in holding field?

-c.
+c.
To include additional levels of chronology, click Add Chronology Caption. Each level that you add must be smaller than the previous level.

-d.
+d.
After you have completed the chronology caption, click Next.

-5.
+5.
Page 4: Compress and Expand Captions

-a.
+a.
Select the appropriate option for compressing or expanding your captions in the catalog from the compressibility and expandability drop down menu. The entries in the drop down menu correspond to the indicator codes and the subfield $w in the 853 tag. Compressibility and expandability correspond to the first indicator in the 853 tag.

-b.
+b.
Choose the appropriate caption evaluation from the drop down menu.

-c.
+c.
Choose the frequency of your publication from the drop down menu. For irregular frequencies, you may wish to select use number of issues per year, and enter the total number of issues that you receive each year. However, in the 2.0 release, recommended practice is that you use only regular frequencies. Planned development will create an additional step to aid in the creation of irregular frequencies.

-d.
+d.
Click Next.

-6.
+6.
Page 5: Finish Captions and Patterns

-a.
+a.
To complete the wizard, click Create Pattern Code.

-b.
+b.
Return to Subscription Details.

-c.
+c.
Confirm that the box adjacent to Active is checked. Click Save Changes.
 The row is now highlighted gray instead of orange.

@@ -1645,195 +1645,195 @@
 Confirm that the box adjacent to Active is checked. Click Save Changes.
The ro -IssuancesIssuances +IssuancesIssuances The Issuances tab enables you to manually create an issue in the ILS. The ILS will use the initial issue that you manually create to predict future issues. -Create an issuanceCreate an issuance +Create an issuanceCreate an issuance -1. +1. Click the Issuances tab in the Subscription Details. -2. +2. Click New Issuance. -3. +3. The Subscription, Creator, and Editor fields contain subscription and user IDs, respectively. These fields are disabled because Evergreen automatically fills in these fields. -4. +4. Enter a name for this issuance in the Label field. There are no limits on the number of characters that can be entered in this field. You may want to enter the month and year of the publication in hand. -5. +5. Enter the Date Published of the issuance that you are editing. Recommended practice is that you select the date from the drop down calendar although you can manually enter a date. If you are creating one manual issue before automatically predicting more issues, then this date should be the date of the most current issue before the prediction starts. -6. +6. Select a Caption/Pattern from the drop down menu. The numbers in the drop down menu correspond to the IDs of the caption/patterns that you created. -7. +7. The Holding Type appears by default and corresponds to the Type that you selected when you created the Caption/Pattern. -8. +8. In the holding code area of the New Issuance dialog, click Wizard. The Wizard enables you to add holdings information. -9. +9. Enter the volume of the item in hand in the v. field. -10. +10. Enter the number of the item in hand in the no. field. -11. +11. Enter the year of publication in the Year field. -12. +12. Enter the month of publication in the Month field if applicable. You must enter the calendar number of the month rather than the name of the month. For example, enter 12 if the item in hand was published in December. -13. +13. 
Enter the day of publication in the day field if applicable.

-14.
+14.
Click Compile to generate the holdings code.

-15.
+15.
Click Save. The newly generated issuance will appear in a list in the Issuances tab of the Subscription Details.

-Generate item predicitionsGenerate item predicitions
+Generate item predictionsGenerate item predictions

After you manually create the first issue, Evergreen will predict future issuances. Use the Generate Predictions functionality to predict future issues.

-1.
+1.
Click Subscription Details → Issuances → Generate Predictions.

-2.
+2.
Choose the length of time for which you want to predict issues. If you select the radio button to predict until end of subscription, then Evergreen will predict issues until the end date that you created when you created the subscription. See simplesect . 1 for more information. If you do not have an end date, select the radio button to predict a certain number of issuances, and enter a number in the field.

-3.
+3.
Click Generate.

-4.
+4.
Evergreen will predict a run of issuances and copies. The prediction will appear in a list.

-5.
+5.
You can delete the first, manual issuance by clicking the check box adjacent to the issuance and clicking Delete Selected.

-ReceivingReceiving
+ReceivingReceiving

You can batch receive items through a simple or an advanced interface. The simple interface does not allow you to add barcodes or use the copy template. These items are also not visible in the OPAC. The advanced interface enables you to use the copy templates that you created, add barcodes, and make items OPAC visible and holdable.
You can access both Batch Receive interfaces from two locations in the ILS. From the Subscription Details screen, you can click Batch Item Receive. You can also access these interfaces by opening the catalog record for the serial, and clicking Actions for this Record → Serials Batch Receive.
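The Generate Predictions step described above extrapolates future issues from the one issuance you created manually. The idea can be sketched for a simple monthly frequency (an illustration only, not Evergreen's actual prediction code):

```python
from datetime import date

def predict_monthly_issues(first_published, count):
    """Sketch of issue prediction for a simple monthly frequency:
    given the publication date of the first (manually created)
    issuance, return the dates of the next `count` predicted issues.
    Assumes the publication day exists in every month (e.g. the 1st
    or the 15th). This only illustrates the idea; it is not
    Evergreen's algorithm.
    """
    issues = []
    year, month = first_published.year, first_published.month
    for _ in range(count):
        month += 1
        if month > 12:          # roll over into the next year
            month, year = 1, year + 1
        issues.append(date(year, month, first_published.day))
    return issues

print(predict_monthly_issues(date(2011, 1, 1), 3))
```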
-Simple Batch ReceivingSimple Batch Receiving
+Simple Batch ReceivingSimple Batch Receiving

Follow these steps to receive items in batch in a simple interface.

-1.
+1.

The Batch Receive interface displays issues that have not yet been received. The earliest expected issue appears at the top of the list.

-2.
+2.

In the right lower corner, you see a check box to Create Units for Received Items. If you do not check this box, then you will receive items in simple mode.

-3.
+3.

Click Next.

-4.
+4.

In simple mode, the distributions that you created are displayed. They are marked received by default. If you hover over the branch name, you can view the name of the distribution and its stream.

-5.
+5.

You can receive and add a note to each item individually, or you can perform these actions on all of the distributions and streams at once. To do so, look above the line, and enter the note that you want to apply to all copies and confirm that the box to Receive? is checked.

-6.
+6.

Click Apply. The note should appear in the note field in each distribution.

-In 2.0, the note field is only displayed in the current screen.
-7.
+In 2.0, the note field is only displayed in the current screen.
+7.

Then click Receive Selected Items.

-8.
+8.

The received items are cleared from the screen.

-Advanced Batch ReceivingAdvanced Batch Receiving
+Advanced Batch ReceivingAdvanced Batch Receiving

Follow these steps to receive items in batch in the advanced interface.

-1.
+1.

The Batch Receive interface displays issues that have not yet been received. The earliest expected issue appears at the top of the list.

-2.
+2.

If you want to barcode each copy, display it in the catalog, and make it holdable, then check the box adjacent to Create Units for Received Items in the lower right side of the screen.

-3.
+3.

This will allow you to utilize the copy templates and input additional information about the copy:

-a.
+a.
Barcode – You can scan printed barcodes into the barcode field for each copy, or you can allow the system to auto-generate barcodes.
To auto-generate barcodes, check the box adjacent to Auto-generate?, and enter the first barcode into the barcode field in the first row of the table. Then press the Tab key. The remaining barcode fields will automatically populate with the next barcodes in sequence, including check digits.

-b.
+b.

Circ Modifiers - The circ modifiers drop down menu is populated with the circulation modifiers that you created in Admin → Server Administration → Circulation Modifiers. If you entered a circ modifier in the copy template that you created for this subscription, then it will appear by default in the distributions.

-c.
+c.

Call Number – Enter a call number. Any item with a barcode must also have a call number.

-d.
+d.

Note – Add a note. There are no limits on the number of characters that can be entered in this field. The note only displays in this screen.

-e.
+e.

Copy Location – The copy location drop down menu is populated with the copy locations that you created in Admin → Local Administration → Copy Location Editor. If you entered a copy location in the copy template that you created for this subscription, then it will appear by default in the distributions.

-f.
+f.

Price - If you entered a price in the copy template that you created for this subscription, then it will appear by default in the distributions. You can also manually enter a price if you did not include one in the copy template.

-g.
+g.

Receive? – The boxes in the Receive? column are checked by default. Uncheck the box if you do not want to receive the item. Evergreen will retain the unreceived copies and will allow you to receive them at a later time.

-4.
+4.

When you are ready to receive the items, click Receive Selected Items.

-5.
+5.

The items that have been received are cleared from the Batch Receive interface. The remaining disabled item is an unreceived item.

-6.
+6.
If the items that you received have a barcode, a copy template that was set to OPAC Visible, and are assigned a shelving location that is OPAC Visible, then you can view the received items in the catalog. Notice that the Holdings Summary has been updated to reflect the most recent addition to the holdings. @@ -1846,40 +1846,40 @@ If the items that you received have a barcode, a copy template that was set to O Serial Control View is separate from the Alternate Serial Control interface. Serial Control View enables you to manage serials in a single tabbed interface. This view also enables you to bind units. Serial Control View consists of five tabs: Items, Units, Distributions, Subscriptions, and Claims. Units and Claims are not functional in 2.0. To access Serial Control View, open a bib record and click Actions for this Record → Serial Control View. -SubscriptionsSubscriptions +SubscriptionsSubscriptions The Subscriptions tab enables you to view and manage subscriptions. -Create a subscriptionCreate a subscription +Create a subscriptionCreate a subscription -1. +1. Click the Subscriptions tab. -2. +2. Select the branch that will own the subscription. -3. +3. Right-click or click Actions for Selected Row, and click Add Subscription. -4. +4. Enter the date that the subscription begins in the start date, and click Apply. You must enter the date in YYYY-MM-DD format. -5. +5. Enter the date that the subscription ends in the end date. This field is optional. -6. +6. Enter the difference between the nominal publishing date of an issue and the date that you expect to receive your copy in the Expected Date Offset. For example, if an issue is published the first day of each month, but you receive the copy two days prior to the publication date, then enter -2 days into this field. -7. +7. When finished, click Create Subscription(s) in the bottom right corner of the screen. -8. +8. A confirmation message appears. Click OK. 
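The advanced receiving steps above note that auto-generated barcodes are filled in "including check digits." Many libraries use 14-digit codabar item barcodes whose last digit is a Luhn-style mod-10 check digit; the sketch below assumes that common scheme, and Evergreen's actual algorithm may differ:

```python
def mod10_check_digit(digits):
    """Luhn-style mod-10 check digit, as commonly used on 14-digit
    codabar library barcodes (13 data digits + 1 check digit).
    Assumption: this is the widespread library-barcode convention,
    not necessarily the exact scheme Evergreen applies."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:      # double every other digit, rightmost first
            d *= 2
            if d > 9:
                d -= 9      # same as summing the product's digits
        total += d
    return str((10 - total % 10) % 10)

stem = "3000100001234"                  # hypothetical 13-digit barcode stem
print(stem + mod10_check_digit(stem))   # full barcode with check digit
```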
@@ -1887,130 +1887,130 @@
 A confirmation message appears. Click OK.


You can add notes to the subscription by clicking Subscription Notes. These notes are currently viewable only in the staff client by clicking on the Subscription Notes button.

-Edit a subscriptionEdit a subscription
+Edit a subscriptionEdit a subscription

To edit a subscription, select the subscription in the tree on the left side of the screen. You can edit the following categories: Owning Lib, Start Date, End Date, and Date Offset. After you edit the subscription, click Modify Subscription(s) to save the changes.

-DistributionsDistributions
+DistributionsDistributions

Distributions indicate the branches that should receive copies of a serial. Distributions work together with streams to indicate the number of copies that should be sent to each branch.

-Create a distributionCreate a distribution
+Create a distributionCreate a distribution

-1.
+1.

Click the distributions link beneath the subscription. Right click or click Actions for Selected Rows, and click Add distribution.

-2.
+2.

Apply a new label to the distribution. It may be useful to identify the branch to which you are distributing these issues in this field. This field is not publicly visible and only appears when an item is received. There are no limits on the number of characters that can be entered in this field.

-3.
+3.

Apply a prefix to the spine label if desired. This information will display in Serial Control View when the items are received, but it does not print on the spine label in 2.0.

-4.
+4.

Apply a suffix to the spine label if desired. This information will display in Serial Control View when the items are received, but it does not print on the spine label in 2.0.

-5.
+5.

The holding library is filled in by default and is the library to which you attached the subscription.

-6.
+6.
The Legacy Record Entry contains the MFHD records that are attached to the bib record if the owning library is identical to the distribution’s holding library. A distribution can thus be an extension of an MFHD record. Select the MFHD record from the drop down menu. -7. +7. The Receive Call Number field is empty until you receive the first item. When you receive the first item, you are prompted to enter a call number. That call number will populate this drop down menu. -8. +8. The Bind Call Number field is empty until you bind the first item. When you receive the first item, you are prompted to enter a call number. That call number will populate this drop down menu. -9. +9. Receive Unit Template – The template that should be applied to copies when they are received. Select a template from the drop down menu. -10. +10. Bind Unit Template - The template that should be applied to copies when they are bound. Select a template from the drop down menu. -11. +11. When finished, click Create Distribution(s) in the bottom right corner of the screen. -12. +12. A confirmation message appears. Click OK. -You can add notes to the distribution by clicking Distribution Notes. These +You can add notes to the distribution by clicking Distribution Notes. These notes are currently viewable only in the staff client by clicking on the Distribution Notes button. -Edit a distributionEdit a distribution +Edit a distributionEdit a distribution To edit a distribution, select the distribution in the tree on the left side of the screen. You can edit the following categories: Label, Holding Lib, Legacy Record Entry, Receive Unit Template, Bind Unit Template, Receive Call Number and Bind Call Number. After you edit the distribution, click Modify Distribution(s) to save the changes. -StreamsStreams +StreamsStreams Distributions work together with streams to indicate the number of copies that should be sent to each branch. 
Distributions identify the branches that should receive copies of a serial. Streams identify how many copies should be sent to each branch. Streams are intended for copies that are received on a recurring, even if irregular, basis.
In our example, the Apex Branch should receive copies, so we created a distribution to that branch. The Apex Branch should receive two copies, so we will create two streams to that branch.

-Create a streamCreate a stream
+Create a streamCreate a stream

-1.
+1.

Click the Distributions tab.

-2.
+2.

Check the boxes to Show Dist. and Show Groups to view distributions and streams.

-3.
+3.

Select the Streams link beneath the distribution that you created for that branch. Right click or click Actions for Selected Row → Add Stream.

-4.
+4.

Click the stream that is created.

-5.
+5.

Enter a routing label so that the copy could be read by specific users or departments before the copy is shelved. The routing label appears during receiving and could be added to routing lists; it is not viewable by the public. Routing lists do not print in 2.0. This field is optional.

-6.
+6.

Click Modify Stream(s) in the bottom right corner of the screen.

The data in the Basic Summary, Supplement Summary, and Index Summary are automatically generated by the ILS when you create a caption and pattern and a holdings statement. You can create additional textual holdings manually by editing the Textual Holdings field.

-Edit a streamEdit a stream
+Edit a streamEdit a stream

-1.
+1.

To edit a stream, select the stream in the tree on the left side of the screen. You can edit the following category:

-•
+•

Routing Label – The label given to an issue to direct it to the people or departments that should view the issue before it is available to the public.

-2.
+2.

The Basic Summary displays the distribution ID, the Textual Holdings, and the Generated Holdings.
The OPAC uses data in legacy records, the generated coverage field, and the textual holdings fields to display holdings information. -a. +a. The distribution ID and the Generated Coverage are created by Evergreen. -b. +b. Textual Holdings – Enter any additional holdings information in this field, and it will display in the OPAC as Additional Volume Information. -c. +c. Then click Modify Basic Summary to save your changes. Your changes will appear in the OPAC view. @@ -2018,51 +2018,51 @@ Then click Modify Basic Summary to save your changes. Your changes will appear -Captions and PatternsCaptions and Patterns +Captions and PatternsCaptions and Patterns The Captions and Patterns wizard allows you to enter caption and pattern data as it is described by the 853, 854, and 855 MARC tags. These tags allow you to define how issues will be captioned, and how often the library receives issues of the serial. In 2.0, it is not possible to create a caption and pattern and apply it to multiple subscriptions. However, you can re-use patterns if you copy and paste to and from the pattern code field in the Captions and Patterns tab. -Create a caption and patternCreate a caption and pattern +Create a caption and patternCreate a caption and pattern -1. +1. Click the Subscriptions tab. -2. +2. Beneath the subscription, click Captions and Patterns, and right-click or click Actions for Selected Row → Add Caption/Pattern. -3. +3. The ID and Creation Date will fill in automatically. -4. +4. Click the Unset entry beneath Type. A drop down menu will appear. Choose the type of caption and pattern that you want to create, and click Apply. -5. +5. Click the Unset entry beneath Active. A drop down menu will appear. Choose Yes if you want to activate the caption and pattern. Click Apply. -6. +6. Click the Unset entry beneath the Pattern Code (temporary) field if you want to create the pattern code by hand. 
If you want to create it automatically, click Pattern Code Wizard in the lower right corner. -7. +7. Follow the steps for using the pattern code wizard. -8. +8. Click Apply. -9. +9. Click Create Caption and Pattern(s). -Edit a caption and patternEdit a caption and pattern +Edit a caption and patternEdit a caption and pattern To edit a caption/pattern, select the caption/pattern in the tree on the left side of the screen. You can edit the following categories: - Type – Change the type of the caption/pattern. @@ -2071,169 +2071,169 @@ Click Create Caption and Pattern(s). After you edit the subscription, click Modify Caption and Pattern(s) to save the changes. -IssuancesIssuances +IssuancesIssuances The Issuances tab enables you to manually create an issue in the ILS. The ILS will use the initial issue that you manually create to predict future issues. -Create an issuanceCreate an issuance +Create an issuanceCreate an issuance -1. +1. Click the Subscriptions tab. -2. +2. Beneath the subscription, click Issuances, and right-click or click Actions for Selected Row → Add Issuance. -3. +3. The fields in the first column will fill in automatically after you have created the issuance. -4. +4. Click the Unset link in the Holding Code field, and manually enter a holding code. Click Apply. -5. +5. Click the Unset link in the Caption/Pattern field. Select a caption/pattern from the drop down menu. Click Apply. -6. +6. Enter the Date Published of the issuance that you are editing. Enter the date in YYYY-MM-DD format. If you are creating one manual issue before automatically predicting more issues, then this date should be the date that you want to enter before the prediction starts. Click Apply. -7. +7. Click in the Issuance Label field to name the issuance. There are no limits on the number of characters that can be entered in this field. You may want to enter the month and year of the publication in hand. Click Apply. -8. +8. 
Click Create Issuance in the lower right corner to save your changes.

-9.
+9.

A confirmation message appears. Click OK.


-Edit an issuanceEdit an issuance
+Edit an issuanceEdit an issuance
To edit an issuance, select the issuance in the tree on the left side of the screen. You can edit the following categories: Holding Code, Caption/Pattern, Date Published, and Issuance Label.
After you edit the issuance, click Modify Issuance(s) to save the changes.
-Generate item predictionsGenerate item predictions
+Generate item predictionsGenerate item predictions

-1.
+1.

Open the Subscriptions tab.

-2.
+2.

Right-click or click Actions for Selected Row → Make predictions.

-3.
+3.

A pop up box will ask you how many items you want to predict. Enter the number, and click OK.

-4.
+4.

A confirmation message will appear. Click OK.

-5.
+5.

Click the Issuances link to view the predicted issues.


-ReceivingReceiving
+ReceivingReceiving
Receive items in the Items tab. From this interface, you can receive items, edit item attributes, and delete items.
-Receive ItemsReceive Items
+Receive ItemsReceive Items

-1.
+1.

To receive items, click the Receive radio button. In the top half of the screen, the items that have yet to be received are displayed. In the bottom half of the screen, recently received items are displayed.

-2.
+2.

Select the branch that will receive the items from the drop down box.

-3.
+3.

Select the issue that you want to receive.

-4.
+4.

Select the current working unit. Click Set Current Unit, located in the lower right corner of the screen. A drop down menu will appear.

-•
+•

If you want to barcode each item individually, select Auto per item. This setting is recommended for most receiving processes.

-•
+•

If you want each item within a unit to share the same barcode, then select New Unit. This setting is advised for most binding processes.

-•
+•

If you want the item to be received or bound into an existing item, select Recent and select the desired issue.
To make a change in bound items, receive or bind the items into an already existing unit.

-5.
+5.

Click Receive/Move Selected.

-6.
+6.

Enter a barcode and call number if prompted to do so.

-7.
+7.

A message confirming receipt of the item appears. Click OK.

-8.
+8.

The screen refreshes. In the top half of the screen, the item displays a received date. In the bottom half of the screen, the item that you have just received is now at the top of the list of the received items.
After receiving items, you can view the updated holdings in the OPAC. In this example, the legacy MFHD record and the items recently received in the serial control view display together in the MFHD statement.
-Edit Item AttributesEdit Item Attributes
+Edit Item AttributesEdit Item Attributes
In this pop up box, you can view the Item ID, Status, Distribution, and Shelving ID. These are generated by Evergreen. However, you may need to edit an item’s Date Expected or Received.

-1.
+1.

To edit item attributes, select the item(s) that you want to edit, and click Actions for Selected Rows → Edit Item Attributes.

-2.
+2.

Edit the attributes that appear. When you are finished, click Modify Item(s).


-Delete ItemsDelete Items
+Delete ItemsDelete Items
You can use this menu item to delete items from your holdings. To delete items from your holdings, click Actions for Selected Rows → Delete Item.
-Bind ItemsBind Items
+Bind ItemsBind Items
The binding mode applies the binding template, which is defined in the distribution (see simplesect 2 for more information), to units that should be bound.
- 1.
+ 1.

Select the branch that will receive the items from the drop down box.
- 2.
+ 2.

To bind items, click the Bind radio button. Items that have been received will appear in the top half of the screen.
- 3.
+ 3.

Select the current working unit.
- 4.
+ 4.

Select the issues that you want to bind, and click Receive/Move Selected.
- 5.
+ 5.
In the bottom half of the screen, you can view the items that you have bound together.

@@ -2246,39 +2246,39 @@ Edit the attributes that appear. When you are finished, click Modify Item(s).
 MFHD RecordMFHD Record
 You can manually create MFHD statements.
- 1.
+ 1.

Create an MFHD record
- 2.
+ 2.

Open a serial record, and in the bottom right corner above the copy information, click Add MFHD Record. You can also add the MFHD statement by clicking Actions for this Record → MFHD Holdings → Add MFHD Record.
- 3.
+ 3.

A message will confirm that you have created the MFHD Record. Click OK.
- 4.
+ 4.

Click Reload in the top left corner of the record.
- 5.
+ 5.

The Holdings Summary will appear. Click Edit Holdings in the right corner.
- 6.
+ 6.

Click Edit Record.
- 7.
+ 7.

The MFHD window will pop up. Enter holdings information. Click Save MFHD.
- 8.
+ 8.

Close the MFHD window.
- 9.
+ 9.

Click Reload in the top left corner of the record. The Holdings Summary will reflect the changes to the MFHD statement.

@@ -2287,29 +2287,29 @@ Edit the attributes that appear. When you are finished, click Modify Item(s).
 The following permissions enable you to control serials’ functions. Although you can assign each permission to users in the Admin module, it is recommended that either all serials permissions be assigned to an individual, or that they should be assigned to individuals in the following groups.
The following permissions allow you to create, manage, view, edit, and perform all other functions associated with these serials tasks:
- •
+ •

ADMIN_SERIAL_CAPTION_PATTERN
- •
+ •

ADMIN_SERIAL_DISTRIBUTION
- •
+ •

ADMIN_SERIAL_STREAM
- •
+ •

ADMIN_SERIAL_SUBSCRIPTION
To receive copies of serials:
- •
+ •

RECEIVE_SERIAL
- •
+ •

CREATE_VOLUME

@@ -2336,7 +2336,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s).
 continuing improvement of serials in Evergreen.
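The permission codes listed above can be checked against a live system before you start assigning them. The sketch below is a hypothetical helper, not part of this patch: it only prints lookup SQL for review, and it assumes the stock permission.perm_list table found in Evergreen schemas — verify the table name against your own database before piping anything into psql.

```shell
#!/bin/sh
# Hypothetical sketch: emit SQL to look up the serials permissions listed
# above. "permission.perm_list" is an assumption about the stock schema;
# confirm it against your database. Nothing is executed here -- the
# statements are only printed for review.
SERIAL_PERMS="ADMIN_SERIAL_CAPTION_PATTERN ADMIN_SERIAL_DISTRIBUTION \
ADMIN_SERIAL_STREAM ADMIN_SERIAL_SUBSCRIPTION RECEIVE_SERIAL CREATE_VOLUME"

for perm in $SERIAL_PERMS; do
    # Pipe the output to psql yourself once the table name is confirmed.
    echo "SELECT id, code FROM permission.perm_list WHERE code = '$perm';"
done
```

Once you have confirmed the schema, the printed statements can be redirected into psql against your Evergreen database.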
- Creating a Copy TemplateCreating a Copy Template + Creating a Copy TemplateCreating a Copy Template To create a serial subscription in the Alternate Serial Control @@ -2349,11 +2349,11 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). client. - + Once that page has loaded, click New Template. - + You don't actually need to fill out all of these fields. If you don't @@ -2370,7 +2370,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). fill in the fields shown in the above image. - To the CatalogTo the Catalog + To the CatalogTo the Catalog Initiate a catalog search in the staff client to find the bibliographic @@ -2385,10 +2385,10 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). called Flying for our example. - + - The SubscriptionThe Subscription + The SubscriptionThe Subscription Here you'll be presented with an interface that would show you any @@ -2397,7 +2397,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). "New Subscription" button. Click that. - + The only required fields here are owning library and start date. @@ -2414,7 +2414,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). that link to manage the subscription in greater detail. - + Now you're looking at the Subscription Details page, which has four @@ -2423,10 +2423,10 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Evergreen where you want copies of your serial to go. - DistributionsDistributions + DistributionsDistributions - + Using the New Distribution button, create a distribution for each @@ -2444,7 +2444,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). should leave these fields blank. - + After saving your distribution (and creating any others for other @@ -2452,7 +2452,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). 
Label column to set up the streams for each distribution. - StreamsStreams + StreamsStreams “Streams” are perhaps the most confusing concept in the Alternate @@ -2463,7 +2463,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). distribution for BR1 and four streams for that distribution. - + You can create streams one at a time by clicking New Stream. In this @@ -2473,7 +2473,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). routing label is entirely optional. - + If you don't care about routing labels, or need to create more than @@ -2481,7 +2481,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). several at once. - + If you wish to set up routing lists, use the checkboxes on the left @@ -2491,7 +2491,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). document. - Setting Up Routing ListsSetting Up Routing Lists + Setting Up Routing ListsSetting Up Routing Lists @@ -2501,7 +2501,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). - + A routing list is made up of users, who will presumably get their @@ -2512,7 +2512,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). a note field available in either case. - + Enter any combination of readers and departments you need using @@ -2520,7 +2520,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). their barcodes in the appropriately labeled field. - + You can re-arrange users in the list by dragging and dropping each @@ -2529,7 +2529,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). user's list order number. - Captions and PatternsCaptions and Patterns + Captions and PatternsCaptions and Patterns After you've set up all the streams you need on all of your @@ -2538,7 +2538,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). 
- + Caption and Pattern objects define the same material that would be @@ -2556,7 +2556,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). tags by hand, click the Wizard button. - + This Caption and Pattern Wizard is where you'll enter information @@ -2578,7 +2578,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). back if you've forgotten something by using the Prev button. - + The wizard's second page is concerned with calendar changes (i.e., at @@ -2589,7 +2589,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). standard. - + The wizard's third page is for defining chronology captions. Make sure @@ -2599,7 +2599,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). and “Nov.” - + The fourth page of the wizard deals with indicator codes and the @@ -2617,10 +2617,10 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). object you have just created, and click Save Changes. - + - On to IssuancesOn to Issuances + On to IssuancesOn to Issuances We're finally close to the point where we define an initial issuance @@ -2632,7 +2632,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). click on New Issuance. - + What we're doing here is hand-entering one example issuance, and you @@ -2649,7 +2649,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). fields filled in. - + Once everything is filled in on your example issue, click Save. You @@ -2665,7 +2665,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). a year's worth of issues on a monthly publication). - + After you click Generate, the system should take a moment to predict @@ -2678,14 +2678,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). of the grid and clickDelete Selected. - + Your subscription is now completely set up. 
Let's receive some copies. - + Batch Receiving Batch Receiving @@ -2704,7 +2704,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Control from earlier in this tutorial). - + The Batch Receiving interface will present you with a selection of @@ -2713,7 +2713,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). so generally you will click next here. - + “Simple” mode for Batch Receiving gives you few options - this is @@ -2738,7 +2738,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). receiving table. - + If you have a printed stack of barcodes available, you can scan @@ -2753,7 +2753,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). next barcodes in sequence, including check digits. - + As for the other fields in the table besides barcode, you can set @@ -2773,7 +2773,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). to its entire column throughout the table. - + Now, as for the question of what to do when you didn't receive all @@ -2793,7 +2793,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). implemented. - + When you've filled in all the item rows, look in the lower left of @@ -2801,14 +2801,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). that. - + You see that the items that were marked for receipt are now cleared from this interface, as they have been received. - + Since we left all Routing List checkboxes marked, if any of the items @@ -2816,7 +2816,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). open with a routing list ready to print. - + If you set up a routing list as described earlier in this tutorial, @@ -2829,7 +2829,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). items you received in the catalog. - + @@ -2846,12 +2846,12 @@ Edit the attributes that appear. 
When you are finished, click Modify Item(s).
 staff client. It deals with maintaining servers, installation, upgrading, and configuring both system wide and local library settings. Some sections require understanding of Linux system administration while others require an understanding of your system hierarchy of locations and users. Many procedures explained in the following
- chapters are accomplished with Linux commands run from the
+ chapters are accomplished with Linux commands run from the
 terminal without a Graphical User Interface (GUI). In order to accomplish some of the tasks, prerequisite knowledge or experience will be required and you may need to consult system administration documentation for your specific Linux distribution if you have limited Linux system experience. A vast amount of free resources can be found on the web for various experience levels. You might also consider consulting
- PostgreSQL and
- Apache documentation for a greater understanding
+ PostgreSQL and
+ Apache documentation for a greater understanding
 of the software stack on which Evergreen is built.
 Chapter 8. Server-side Installation of Evergreen SoftwareChapter 8. Server-side Installation of Evergreen Software
 Report errors in this documentation using Launchpad.
@@ -2875,13 +2875,11 @@ Edit the attributes that appear. When you are finished, click Modify Item(s).
 to emulate a Linux environment). It can also be installed to run on other Linux systems via virtualized environments (using, for example,
 "VirtualBox" or
- "VMware"). More information on virtualized environments can be found in
- ???.
- Installation of the Evergreen Staff Client software is reviewed in ???.
+ "VMware").
 The Evergreen server-side software has dependencies on particular versions of certain major software sub-components. Successful installation of Evergreen software requires that software versions agree with those listed here:
- Table 8.1.
Evergreen Software Dependencies: Evergreen 2.0 requires OpenSRF 1.6.3 and PostgreSQL 8.4
+ Table 8.1. Evergreen Software Dependencies: Evergreen 2.0 requires OpenSRF 1.6.3 and PostgreSQL 8.4
 Installing Server-Side SoftwareInstalling Server-Side Software
 This section describes the installation of the major components of Evergreen server-side software.
@@ -2896,10 +2894,10 @@ Edit the attributes that appear. When you are finished, click Modify Item(s).
 Installing OpenSRF 1.6.3 On Ubuntu or DebianInstalling OpenSRF 1.6.3 On Ubuntu or Debian
-
+
-
-
+
+
 This section describes the installation of the latest version of the Open Service Request Framework (OpenSRF), a major component of the Evergreen server-side software, on Ubuntu or Debian

 properly installed and configured. Do not continue with any further Evergreen installation steps until you have verified that OpenSRF has been successfully installed and tested.
-
+
 The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) platforms. OpenSRF 1.6.3 has been tested on Debian Lenny (5.0), Debian Squeeze (6.0) and Ubuntu Lucid Lynx (10.04), Debian Lenny (5.0),

 either the root user, the opensrf user, or the postgres user.
- •
+ •
 Debian -- To become the root user, issue the command su - and enter the password of the root user.
- •
+ •
 Ubuntu -- To become the root user, issue the command sudo su - and enter the password of the

 the root user again, simply issue the command exit.
- 1.
+ 1.
 Add New opensrf User
 As the root user, add the
In the following example, the default shell for the opensrf user is automatically set to /bin/bash to inherit a reasonable environment: - + # as the root user: useradd -m -s /bin/bash opensrf passwd opensrf - 2. + 2. Download and Unpack Latest OpenSRF Version - + The latest version of OpenSRF can be found here: http://evergreen-ils.org/downloads/OpenSRF-1.6.3.tar.gz . As the opensrf user, change to the directory /home/opensrf then download and extract OpenSRF. The new subdirectory /home/opensrf/OpenSRF-1.6.3 will be created: - + # as the opensrf user: cd /home/opensrf wget http://evergreen-ils.org/downloads/OpenSRF-1.6.3.tar.gz tar zxf OpenSRF-1.6.3.tar.gz - 3. + 3. Install Prerequisites to Build OpenSRF In this section you will install and configure a set of prerequisites that will be @@ -2981,24 +2979,24 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Linux listed distributions. For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would enter this command: make -f src/extras/Makefile.install ubuntu-lucid . - + # as the root user: cd /home/opensrf/OpenSRF-1.6.3 make -f src/extras/Makefile.install [DISTRIBUTION] - • + • debian-squeeze for Debian Squeeze (6.0) - - • + + • fedora13 for Fedora 13 - - • + + • ubuntu-lucid for Ubuntu Lucid Lynx (10.04) - • + • centos for CentOS 5 - • + • rhel for Red Hat Enterprise Linux 5 This will install a number of packages on the system that are required by OpenSRF, @@ -3006,16 +3004,16 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). CPAN configuration prompt to allow it to automatically configure itself to download and install Perl modules from CPAN. The CPAN installer will ask you a number of times whether it should install prerequisite modules - say Yes. - 4. + 4. Build OpenSRF In this section you will configure, build and install the OpenSRF components that support other Evergreen services. - a. + a. 
Configure OpenSRF - + As the opensrf user, return to the new OpenSRF build directory and use the configure utility to prepare for the next @@ -3023,7 +3021,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). include support for Python and Java, add the configuration options --enable-python and --enable-java, respectively: - + # as the opensrf user: cd /home/opensrf/OpenSRF-1.6.3 @@ -3032,14 +3030,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). This step will take several minutes to complete. - b. + b. Compile, Link and Install OpenSRF As the root user, return to the new OpenSRF build directory and use the make utility to compile, link and install OpenSRF: - + # as the root user: cd /home/opensrf/OpenSRF-1.6.3 @@ -3047,7 +3045,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). This step will take several minutes to complete. - c. + c. Update the System Dynamic Library Path You must update the system dynamic library path to force @@ -3058,7 +3056,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). new library path, then run the command ldconfig to automatically read the file and modify the system dynamic library path: - + # as the root user: echo "/openils/lib" > /etc/ld.so.conf.d/osrf.conf @@ -3080,22 +3078,22 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). to handle communications for these two domains. As the root user, edit the file /etc/hosts and add the following example domains: - - + + # as the root user: 127.0.1.2 public.localhost public 127.0.1.3 private.localhost private - e. + e. Change File Ownerships Finally, as the root user, change the ownership of all files installed in the directory /openils to the user opensrf: - + # as the root user: chown -R opensrf:opensrf /openils @@ -3105,11 +3103,11 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). 5. 
Stop the ejabberd Service - + Before continuing with configuration of ejabberd you must stop that service. As the root user, execute the following command to stop the service: - + # as the root user: /etc/init.d/ejabberd stop @@ -3120,7 +3118,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). beam or epmd you may need to perform the following commands to kill them: - + # as the root user: epmd -kill @@ -3128,7 +3126,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). rm /var/lib/ejabberd/* echo 'ERLANG_NODE=ejabberd@localhost' >> /etc/default/ejabberd - 6. + 6. Edit the ejabberd configuration You must make several configuration changes for the @@ -3137,7 +3135,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). As the root user, edit the file /etc/ejabberd/ejabberd.cfg and make the following changes: - a. + a. Change the line: {hosts, ["localhost"]}. @@ -3145,7 +3143,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). {hosts, ["localhost", "private.localhost", "public.localhost"]}. - b. + b. Change the line: {max_user_sessions, 10} @@ -3157,21 +3155,21 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). then change it to instead read: {access, max_user_sessions, [{10000, all}]} - c. + c. Change all three occurrences of: max_stanza_size to instead read: 2000000 - d. + d. Change both occurrences of: maxrate to instead read: 500000 - e. + e. Comment out the line: {mod_offline, []} @@ -3186,12 +3184,12 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). As the root user, restart the ejabberd service to test the configuration changes and to register your users: - + # as the root user: /etc/init.d/ejabberd start - 8. + 8. Register router and opensrf as @@ -3205,11 +3203,11 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). 
/etc/hosts in a previous step (see Step 4.d). The users include: - • + • the router user, to whom all requests to connect to an OpenSRF service will be routed; - • + • the opensrf user, which clients use to connect to OpenSRF services (you may name the user anything you like, but we use @@ -3220,7 +3218,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). for the users router and opensrf on each domain (remember to replace NEWPASSWORD with the appropriate password): - + # as the root user: # Note: the syntax for registering a user with ejabberdctl is: @@ -3241,7 +3239,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). execute the following commands to create the new configuration files /openils/conf/opensrf_core.xml and /openils/conf/opensrf.xml from the example templates: - + # as the opensrf user: cd /openils/conf @@ -3291,7 +3289,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). public.localhost domains, these will already be set to the correct values. Otherwise, search and replace to match values for your own systems. - 11. + 11. Set location of the persistent database As the opensrf user, edit the @@ -3302,7 +3300,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). to instead read: /tmp/persist.db Following is a sample modification of that portion of the file: - + <!-- Example of an app-specific setting override --> <opensrf.persist> <app_settings> @@ -3316,7 +3314,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). In this section you will set up a special configuration file for each user who will need to run the srfsh (pronounced surf shell) utility. - + The software installation will automatically create the utility srfsh (surf shell), a command line diagnostic tool for testing and interacting with OpenSRF. It will be used @@ -3327,14 +3325,14 @@ Edit the attributes that appear. 
When you are finished, click Modify Item(s). to the home directory of each user who will use srfsh. For instance, do the following for the opensrf user: - + # as the root user: cp /openils/conf/srfsh.xml.example /home/opensrf/.srfsh.xml Edit each user's file ~/.srfsh.xml and make the following changes: - • + • Modify domain to be the router hostname (following our domain examples, private.localhost will give @@ -3342,21 +3340,21 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). public.localhost will only allow access to those OpenSRF services that are publicly exposed). - • + • Modify username and password to match the opensrf Jabber user for the chosen domain - • + • Modify logfile to be the full path for a log file to which the user has write access - • + • Modify loglevel as needed for testing - • + • Change the owner of the file to match the owner of the home directory Following is a sample of the file: - + <?xml version="1.0"?> <!-- This file follows the standard bootstrap config file layout --> <!-- found in opensrf_core.xml --> @@ -3371,7 +3369,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). <loglevel>4</loglevel> </srfsh> - 13. + 13. Modify the environmental variable PATH for the opensrf user @@ -3379,18 +3377,18 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). environmental variable PATH by adding a new file path to the opensrf user's shell configuration file ~/.bashrc: - + # as the opensrf user: echo "export PATH=/openils/bin:\$PATH" >> ~/.bashrc - 14. + 14. Start OpenSRF As the root user, start the ejabberd and memcached services: - + # as the root user: /etc/init.d/ejabberd start @@ -3398,7 +3396,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). As the opensrf user, start OpenSRF as follows: - + # as the opensrf user: osrf_ctl.sh -l -a start_all @@ -3409,14 +3407,14 @@ Edit the attributes that appear. 
When you are finished, click Modify Item(s). OpenSRF router , Perl , and C services. - • + • You can also start Evergreen without the -l flag, but the osrf_ctl.sh utility must know the fully qualified domain name for the system on which it will execute. That hostname was probably specified in the configuration file opensrf.xml which you configured in a previous step. - • + • If you receive an error message similar to osrf_ctl.sh: command not found, then your environment variable PATH does not include the @@ -3426,7 +3424,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). add the following line: export PATH=$PATH:/openils/bin - 15. + 15. Test connections to OpenSRF Once you have installed and started OpenSRF, as the @@ -3434,7 +3432,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). OpenSRF using the srfsh utility and trying to call the add method on the OpenSRF math service: - + # as the root user: /openils/bin/srfsh @@ -3449,14 +3447,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). For other srfsh commands, type in help at the prompt. - 16. + 16. Stop OpenSRF After OpenSRF has started, you can stop it at any time by using the osrf_ctl.sh again. As the opensrf user, stop OpenSRF as follows: - + # as the opensrf user: osrf_ctl.sh -l -a stop_all @@ -3467,15 +3465,15 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). DebianInstalling Evergreen 2.0 On Ubuntu or Debian - - + + This section outlines the installation process for the latest stable version of Evergreen. In this section you will download, unpack, install, configure and test the Evergreen system, including the Evergreen server and the PostgreSQL database system. You will make several configuration changes and adjustments to the software, including updates to configure the system for your own locale, and some updates needed to work around a few known issues. 
- + The following steps have been tested on the x86 (32-bit) and x86-64 (64-bit) architectures. There may be differences between the Desktop and Server editions of Ubuntu. These instructions assume the Server @@ -3484,12 +3482,12 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). either the root user, the opensrf user, or the postgres user. - • + • Debian -- To become the root user, issue the command su - and enter the password of the root user. - • + • Ubuntu -- To become the root user, issue the command sudo su - and enter the password of the @@ -3503,7 +3501,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). root user again, simply issue the command exit. - 1. + 1. Install OpenSRF Evergreen software is integrated with and depends on the Open Service @@ -3516,7 +3514,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). not continue with any further Evergreen installation steps until you have verified that OpenSRF has been successfully installed and tested. - 2. + 2. Download and Unpack Latest Evergreen Version The latest version of Evergreen can be found here: @@ -3525,7 +3523,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). the directory /home/opensrf then download and extract Evergreen. The new subdirectory /home/opensrf/Evergreen-ILS-2.0.4 will be created: - + # as the opensrf user: cd /home/opensrf @@ -3544,26 +3542,26 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). and unpacked. Remember to replace [DISTRIBUTION] in the following example with the keyword corresponding to the name of one of the Linux distributions listed in the following - distribution keywords table ??? . - For example, to install the prerequisites for Ubuntu version 10.05 (Karmic Koala) you would + distribution list. 
For example, to install the prerequisites for Ubuntu version 10.04 (Lucid Lynx) you would
 enter this command:
 make -f Open-ILS/src/extras/Makefile.install ubuntu-lucid.
 -
 +
 # as the root user:
 cd /home/opensrf/Evergreen-ILS-2.0.4
 make -f Open-ILS/src/extras/Makefile.install [DISTRIBUTION]
 -
 •
 +
 •
 debian-squeeze for Debian Squeeze (6.0)
 -
 •
 +
 •
 ubuntu-lucid for Ubuntu Lucid Lynx
 (10.04)
 4.
 (OPTIONAL) Install the PostgreSQL Server
 -
 +
 Since the PostgreSQL server is usually a standalone server in multi-server
 production systems, the prerequisite installer Makefile in the previous section
 (see Step 3)
 @@ -3575,7 +3573,7 @@ Edit the attributes that appear.
 When you are finished, click Modify Item(s).
 in the section called “Installing PostgreSQL from Source”, or you can visit the official web site
 http://www.postgresql.org for more information.
 -
 +
 PostgreSQL version 8.4 is the minimum supported version to work with Evergreen 2.0. If you have an
 older version of PostgreSQL, you should upgrade before installing Evergreen. To find your current version
 @@ -3584,7 +3582,7 @@ Edit the attributes that appear.
 When you are finished, click Modify Item(s).
 SELECT version();
 to get detailed information about your version of PostgreSQL.
 -
 5.
 +
 5.
 Install Perl Modules on PostgreSQL Server
 If PostgreSQL is running on the same system as your Evergreen software,
 @@ -3594,11 +3592,11 @@ Edit the attributes that appear.
 When you are finished, click Modify Item(s).
 root user install the following Perl modules:
 as the root user, ensure the gcc compiler is installed:
 -
 +
 aptitude install gcc libxml-libxml-perl libxml-libxslt-perl
 then install the Perl modules:
 -
 +
 perl -MCPAN -e shell
 cpan> install Business::ISBN
 cpan> install JSON::XS
 @@ -3609,8 +3607,8 @@ Edit the attributes that appear.
 When you are finished, click Modify Item(s).
 For more information on installing Perl Modules visit the official CPAN site.
 -
 -
 6.
 +
 +
 6.
Update the System Dynamic Library Path You must update the system dynamic library path to force your system to recognize @@ -3618,14 +3616,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). do this by creating the new file /etc/ld.so.conf.d/osrf.conf containing a new library path, then run the command ldconfig to automatically read the file and modify the system dynamic library path: - + # as the root user: echo "/usr/local/lib" >> /etc/ld.so.conf.d/osrf.conf echo "/usr/local/lib/dbd" >> /etc/ld.so.conf.d/osrf.conf ldconfig - 7. + 7. Restart the PostgreSQL Server If PostgreSQL is running on the same system as the rest of Evergreen, as @@ -3636,7 +3634,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). execute the following command (remember to replace PGSQL_VERSION with your installed PostgreSQL version, for example 8.4): - + # as the opensrf user: /etc/init.d/postgresql-PGSQL_VERSION restart @@ -3649,7 +3647,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). and linked later in Step 9. As the opensrf user, return to the Evergreen build directory and execute these commands: - + # as the opensrf user: cd /home/opensrf/Evergreen-ILS-2.0.4 @@ -3663,7 +3661,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). default Evergreen Staff Client. As the root user, return to the Evergreen build directory and use the make utility as shown below: - + # as the root user: cd /home/opensrf/Evergreen-ILS-2.0.4 @@ -3681,13 +3679,13 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). directory /openils/var/web/xul that points to the subdirectory /server of the new Staff Client build: - + # as the root user: cd /openils/var/web/xul ln -sf rel_2_0_4/server server - 10. + 10. 
Copy the OpenSRF Configuration Files In this step you will replace some OpenSRF configuration files that you set up in @@ -3698,7 +3696,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). to opensrf. As the root user, execute the following commands: - + # as the root user: cd /openils/conf @@ -3709,10 +3707,10 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). cp oils_web.xml.example oils_web.xml chown -R opensrf:opensrf /openils/ - 11. + 11. Create and Configure PostgreSQL Database - + In this step you will create the Evergreen database. In the commands below, remember to adjust the path of the contrib repository to match your PostgreSQL server @@ -3722,7 +3720,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). the path would be /usr/share/postgresql/8.4/contrib/ . - a. + a. Create and configure the database @@ -3730,7 +3728,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). As the postgres user on the PostgreSQL system create the PostgreSQL database, then set some internal paths: - + # as the postgres user: createdb evergreen -E UTF8 -T template0 @@ -3742,7 +3740,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). and execute the SQL scripts as shown below (remember to adjust the paths as needed, where PGSQL_VERSION is your installed PostgreSQL version, for example 8.4). - + # as the postgres user: psql -f /usr/share/postgresql/PGSQL_VERSION/contrib/tablefunc.sql evergreen @@ -3758,7 +3756,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). named evergreen and assign a password (remember to replace NEWPASSWORD with an appropriate new password): - + # as the postgres user: createuser -P -s evergreen @@ -3767,7 +3765,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Enter it again: NEWPASSWORD - c. + c. 
Create database schema In this step you will create the database schema and configure your @@ -3778,15 +3776,15 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). the following commands and replace HOSTNAME, PORT, PASSWORD and DATABASENAME with appropriate values: - - - # as the root user: - cd /home/opensrf/Evergreen-ILS-2.0.4 - perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config \ - --service all --create-schema --create-offline \ - --hostname HOSTNAME --port PORT \ - --user evergreen --password PASSWORD --database DATABASENAME --admin-user ADMIN-USER --admin-pass ADMIN-PASSWORD - + +cd /home/opensrf/Evergreen-ILS-2.0.4 +perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config \ +--service all --create-schema --create-offline \ +--hostname HOSTNAME --port PORT \ +--user evergreen --password PASSWORD \ +--database DATABASENAME --admin-user ADMIN-USER \ +--admin-pass ADMIN-PASSWORD + On most systems, HOSTNAME will be localhost and PORT will be 5432. @@ -3798,17 +3796,17 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). As the command executes, you may see warnings similar to: ERROR: schema SOMENAME does not exist (in fact, you may see one warning per schema) but they can be safely ignored. - If you are entering the above command on a single line, do not + If you are entering the above command on a single line, do not include the \ (backslash) characters. If you are using the bash shell, these should only be used at the end of a line at a bash prompt to indicate that the command is continued on the next line. - 12. + 12. Configure the Apache web server - + In this step you will configure the Apache web server to support Evergreen software. First, you must enable some built-in Apache modules and install some @@ -3816,14 +3814,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Certificate. 
Finally, you must make several changes to the Apache configuration file. - a. + a. Enable the required Apache Modules As the root user, enable some modules in the Apache server, then copy the new configuration files to the Apache server directories: - - + + # as the root user: a2enmod ssl # enable mod_ssl @@ -3834,14 +3832,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Module SOMEMODULE already enabled but you can safely ignore them. - b. + b. Copy Apache configuration files You must copy the Apache configuration files from the Evergreen installation directory to the Apache directory. As the root user, perform the following commands: - + # as the root user: cd /home/opensrf/Evergreen-ILS-2.0.4 @@ -3860,7 +3858,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). accept the warnings in the Staff Client and browser during testing and development. As the root user, perform the following commands: - + # as the root user: mkdir /etc/apache2/ssl @@ -3872,7 +3870,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). server.crt and server.key will be created in the directory /etc/apache2/ssl . - This step generates a self-signed SSL certificate. You must install + This step generates a self-signed SSL certificate. You must install a proper SSL certificate for a public production system to avoid warning messages when users login to their account through the OPAC or when staff login through the Staff Client. For further information on @@ -3887,14 +3885,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). /etc/apache2/sites-available/eg.conf . 
As the root user, edit the file and make the following changes: - • + • In the section <Directory "/openils/var/cgi-bin"> replace the line: Allow from 10.0.0.0/8 with the line: Allow from all - This change allows access to your configuration + This change allows access to your configuration CGI scripts from any workstation on any network. This is only a temporary change to expedite testing and should be removed after you have finished and successfully tested @@ -3903,29 +3901,29 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). for further details on removing this change after the Evergreen installation is complete. - • + • Comment out the line: Listen 443 since it conflicts with the same declaration in the configuration file: /etc/apache2/ports.conf. - • + • The following updates are needed to allow the logs to function properly, but it may break other Apache applications on your server: Edit the Apache configuration file and change the lines: - + export APACHE_RUN_USER=www-data export APACHE_RUN_GROUP=www-data to instead read: - + export APACHE_RUN_USER=opensrf export APACHE_RUN_GROUP=opensrf - • + • As the root user, edit the Apache configuration file @@ -3933,12 +3931,12 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). modify the value for KeepAliveTimeout and MaxKeepAliveRequests to match the following: - + KeepAliveTimeout 1 MaxKeepAliveRequests 100 - • + • Further configuration changes to Apache may be necessary for busy systems. These changes increase the number of Apache server processes that are started to @@ -3950,7 +3948,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). and modify the section related to prefork configuration to suit the load on your system: - + <IfModule mpm_prefork_module> StartServers 20 MinSpareServers 5 @@ -3961,7 +3959,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). - e. + e. 
Enable the Evergreen web site Finally, you must enable the Evergreen web site. As the @@ -3969,7 +3967,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). following Apache configuration commands to disable the default It Works web page and enable the Evergreen web site, and then restart the Apache server: - + # as the root user: # disable/enable web sites @@ -4021,7 +4019,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). password for router user - 14. + 14. (OPTIONAL) Create Configuration Files for Users Needing srfsh When OpenSRF was installed in the section called “Installing OpenSRF 1.6.3 On Ubuntu or @@ -4036,31 +4034,31 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Modify the OpenSRF Environment In this step you will make some minor modifications to the OpenSRF environment: - • + • As the opensrf user, modify the shell configuration file ~/.bashrc for user opensrf by adding a Perl environmental variable, then execute the shell configuration file to load the new variables into your current environment. - In a multi-server environment, you must add any + In a multi-server environment, you must add any modifications to ~/.bashrc to the top of the file before the line [ -z "$PS1" ] && return . This will allow headless (scripted) logins to load the correct environment. - + # as the opensrf user: echo "export PERL5LIB=/openils/lib/perl5:\$PERL5LIB" >> ~/.bashrc . ~/.bashrc - 16. + 16. (OPTIONAL) Enable and Disable Language Localizations You can load translations such as Armenian (hy-AM), Canadian French (fr-CA), and others into the database to complete the translations available in the OPAC and Staff Client. For further information, see - ???. + Chapter 19, Languages and Localization. Starting EvergreenStarting Evergreen @@ -4068,22 +4066,22 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). 
In this section you will learn how to start the Evergreen services. For completeness, instructions for stopping Evergreen can be found later in the section called “Stopping Evergreen”. - 1. + 1. As the root user, start the ejabberd and memcached services as follows: - + # as the root user: /etc/init.d/ejabberd start /etc/init.d/memcached start - 2. + 2. As the opensrf user, start Evergreen as follows: - + # as the opensrf user: osrf_ctl.sh -l -a start_all @@ -4094,14 +4092,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). OpenSRF router , Perl , and C services. - • + • You can also start Evergreen without the -l flag, but the osrf_ctl.sh utility must know the fully qualified domain name for the system on which it will execute. That hostname was probably specified in the configuration file opensrf.xml which you configured in a previous step. - • + • If you receive an error message similar to osrf_ctl.sh: command not found, then your environment variable PATH does not include the @@ -4110,7 +4108,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). edit the configuration file ~/.bashrc and add the following line: export PATH=$PATH:/openils/bin - • + • If you receive an error message similar to Can't locate OpenSRF/System.pm in @INC ... BEGIN failed--compilation aborted, then your environment variable @@ -4121,7 +4119,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). add the following line: export PERL5LIB=$PERL5LIB:/openils/lib/perl5 - 3. + 3. In this step you will generate the Web files needed by the Staff Client and catalog, and update the proximity of locations in the Organizational Unit @@ -4130,7 +4128,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). changes to the library hierarchy. 
As the opensrf user, execute the following command and review the results: - + # as the opensrf user: cd /openils/bin @@ -4153,16 +4151,16 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Successfully updated the organization proximity Done - 4. + 4. As the root user, restart the Apache Web server: - + # as the root user: /etc/init.d/apache2 restart - If the Apache Web server was running when you started the OpenSRF + If the Apache Web server was running when you started the OpenSRF services, you might not be able to successfully log into the OPAC or Staff Client until the Apache Web server has been restarted. @@ -4181,7 +4179,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). srfsh commands, type help at the prompt. As the opensrf user, execute the following commands to test your Evergreen connection: - + # as the opensrf user: /openils/bin/srfsh @@ -4209,11 +4207,11 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). ------------------------------------ If this does not work, try the following: - • + • As the opensrf user, run the settings-tester.pl utility to review your Evergreen installation for any system configuration problems: - + # as the opensrf user: cd /home/opensrf @@ -4222,10 +4220,10 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). If the output of settings-tester.pl does not help you find the problem, please do not make any significant changes to your configuration. - • + • Follow the steps in the troubleshooting guide in Chapter 14, Troubleshooting System Errors. - • + • If you have followed the entire set of installation steps listed here closely, you are probably extremely close to a working system. Gather your configuration files and log files and contact the @@ -4243,13 +4241,13 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). version 3.0 and later on Ubuntu and Debian distributions). 
As the root user, start the Staff Client as shown: - + # as the root user: xulrunner /home/opensrf/Evergreen-ILS-v/Open-ILS/xul/staff_client/build/application.ini A login screen for the Staff Client similar to this should appear: - + First, add the name of your Evergreen server to the field Hostname in the Server section. You will probably want to use 127.0.0.1. After adding the server name, click Re-Test @@ -4257,15 +4255,13 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Status and Version. Because this is the initial run of the Staff Client, you will see a warning in the upper-right saying: Not yet configured for the specified - server. To continue, you must assign a workstation name. Refer to - ??? for further details. - Try to log into the Staff Client with the username admin and - the password open-ils. If the login is successful, you will see the - following screen: - + server. To continue, you must assign a workstation name. + Try to log into the Staff Client with the admin username and password you created during installation. If the login is successful, + you will see the following screen: + Otherwise, you may need to click 'Add SSL Exception' in the main window. You should see a popup window titled Add Security Exception: - + Click 'Get Certificate', then click 'Confirm Security Exception', then click 'Re-Test Server' in the main window and try to log in again. @@ -4278,7 +4274,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). commands. Note the use of restart to force the new Evergreen modules to be reloaded even if the Apache server is already running. Any problems found with your configuration files should be displayed: - + # as the root user: apache2ctl configtest && /etc/init.d/apache2 restart @@ -4291,13 +4287,13 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Evergreen services. 
As the opensrf user, stop all Evergreen services by using the following command: - + # as the opensrf user # stop the server; use "-l" to force hostname to be "localhost" osrf_ctl.sh -l -a stop_all - You can also stop Evergreen services without the + You can also stop Evergreen services without the -l flag, but the osrf_ctl.sh utility must know the fully qualified domain name for the system on which it will execute. That hostname may have been specified in the configuration file opensrf.xml, which @@ -4309,7 +4305,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). There are several additional steps you may need to complete after Evergreen has been successfully installed and tested. Some steps may not be needed (e.g., setting up support for Reports). - Remove temporary Apache configuration changes + Remove temporary Apache configuration changes You modified the Apache configuration file /etc/apache2/sites-available/eg.conf in an earlier step as a @@ -4317,7 +4313,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Step 12.d for further information). Those changes must now be reversed in order to deny unwanted access to your CGI scripts from users on other public networks. - + This temporary network update was done to expedite testing. You must correct @@ -4343,14 +4339,14 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). system. Some popular projects are listed below; please review them for background information on why you need such a system and how you can provide it: - • + • http://www.openca.org/projects/openca/ - • + • http://sourceforge.net/projects/ejbca/ - • + • http://pki.fedoraproject.org - + The temporary SSL key was only created to expedite testing. You should install a proper SSL certificate for a public @@ -4358,7 +4354,7 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). 
(OPTIONAL) IP-Redirection
 By default, Evergreen is configured so searching the OPAC always starts in the
 top-level (regional) library rather than in a second-level (branch) library. Instead,
 @@ -4378,7 +4374,7 @@ Edit the attributes that appear.
 When you are finished, click Modify Item(s).
 Finally, modify the Apache startup script
 /etc/apache2/startup.pl by uncommenting two lines as shown, then
 restarting the Apache server:
 -
 +
 # - Uncomment the following 2 lines to make use of the IP redirection code
 # - The IP file should contain a map with the following format:
 # - actor.org_unit.shortname <start_ip> <end_ip>
 @@ -4387,10 +4383,10 @@
use OpenILS::WWW::Redirect qw(/openils/conf/opensrf_core.xml);
OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt');
 -
 (OPTIONAL) Set Up Support For Reports
 +
 (OPTIONAL) Set Up Support For Reports
 Evergreen reports are extremely powerful but require some simple configuration.
 - See ??? for information on starting and
 + See Chapter 20, Starting and Stopping the Reporter Daemon for information on starting and
 stopping the Reporter daemon processes.
 @@ -4406,121 +4402,121 @@ OpenILS::WWW::Redirect->parse_ips_file('/openils/conf/lib_ips.txt');
 service interruptions.
 All of the steps in this chapter are to be completed from the command line.
 Evergreen 2.0 has several software requirements:
 - •PostgreSQL: Version 8.4 is the minimum supported version of PostgreSQL. •Linux: Evergreen 2.0 has been tested on Debian Squeeze (6.0) and Ubuntu Lucid Lynx (10.04). If you are running an older version of these distributions,
 + •PostgreSQL: Version 8.4 is the minimum supported version of PostgreSQL. •Linux: Evergreen 2.0 has been tested on Debian Squeeze (6.0) and Ubuntu Lucid Lynx (10.04). If you are running an older version of these distributions,
 you may want to upgrade before installing Evergreen 2.0. For instructions on upgrading these distributions, visit
 the Debian or
 Ubuntu websites.
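Because PostgreSQL 8.4 is the stated minimum, it is worth confirming the server version before continuing. The following is a sketch only; the pg_version_ok helper is hypothetical, and it simply parses whatever version string you pass it, for example the output of psql --version or of SELECT version();:

```shell
#!/bin/sh
# Sketch: check that a PostgreSQL version string meets the
# Evergreen 2.0 minimum of 8.4.
pg_version_ok() {
    # pull the first X.Y number out of the string
    ver=$(printf '%s\n' "$1" | grep -o '[0-9][0-9]*\.[0-9][0-9]*' | head -n 1)
    major=${ver%%.*}
    minor=${ver#*.}
    [ "$major" -gt 8 ] || { [ "$major" -eq 8 ] && [ "$minor" -ge 4 ]; }
}
```

For example, `pg_version_ok "$(psql --version)" || echo "upgrade PostgreSQL first"` makes the prerequisite check explicit in an upgrade script.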
In the following instructions, you are asked to perform certain steps as either the root or opensrf user.
 - •Debian: To become the root user, issue the su command and enter the password of the
 - root user.•Ubuntu: To become the root user, issue the sudo su command and enter the password of your current user.
 + •Debian: To become the root user, issue the su command and enter the password of the
 + root user.•Ubuntu: To become the root user, issue the sudo su command and enter the password of your current user.
 To switch from the root user to a different user, issue the su - [user] command; for example, su - opensrf. Once you have become a non-root user, to become the
 root user again simply issue the exit command.
 In the following instructions, /path/to/OpenSRF/ represents the path to the OpenSRF source directory.
 Backing Up DataBacking Up Data
 - 1.
 + 1.
 As root, stop the Apache
 - web server.
 - 2.
 + web server.
 + 2.
 As the opensrf user, stop all Evergreen and OpenSRF services:
 - osrf_ctl.sh -l -a stop_all
 - 3.
 + osrf_ctl.sh -l -a stop_all
 + 3.
 Back up the /openils directory.
 - 4.
 + 4.
 Back up the evergreen database.
 Upgrading OpenSRF to 1.6.3Upgrading OpenSRF to 1.6.3
 -
 - 1.
 +
 + 1.
 As the opensrf user, download and extract the source files for OpenSRF 1.6.3:
 -
 +
 wget http://open-ils.org/downloads/OpenSRF-1.6.3.tar.gz
 tar xzf OpenSRF-1.6.3.tar.gz
 A new directory OpenSRF-1.6.3 is created.
 - For the latest edition of OpenSRF, check the Evergreen download page at
 + For the latest edition of OpenSRF, check the Evergreen download page at
 http://www.open-ils.org/downloads.php.
 - 2.
 + 2.
 As the root user, install the software prerequisites using the automatic prerequisite installer.
- + aptitude install make cd /home/opensrf/OpenSRF-1.6.3 Replace [distribution] below with the following value for your distribution: - • + • debian-squeeze for Debian Squeeze (6.0) - - • + + • fedora13 for Fedora 13 - - • + + • ubuntu-lucid for Ubuntu Lucid Lynx - (10.04) - • + (10.04) + • centos for CentOS 5 - - • + + • rhel for Red Hat Enterprise Linux 5 - + - + cd /path/to/OpenSRF make -f src/extras/Makefile.install [distribution] This will install a number of packages required by OpenSRF on your system, - including some Perl modules from CPAN. You can type no to the initial CPAN + including some Perl modules from CPAN. You can type no to the initial CPAN configuration prompt to allow it to automatically configure itself to download and install Perl modules from CPAN. The CPAN installer will ask you a number of times whether it should install prerequisite modules - type yes. - 3. + 3. As the opensrf user, configure and compile OpenSRF: - You can include the –enable-python and –enable-java configure options if - you want to include support for Python and Java - , respectively. - + You can include the –enable-python and –enable-java configure options if + you want to include support for Python and Java + , respectively. + cd /home/opensrf/OpenSRF-1.6.3 ./configure --prefix=/openils --sysconfdir=/openils/conf make - 4. + 4. As the root user, return to your OpenSRF build directory and install OpenSRF: - + cd /home/opensrf/OpenSRF-1.6.3 make install - 5. + 5. As the root user, change the ownership of the installed files to the opensrf user: - chown -R opensrf:opensrf /openils - 6. + chown -R opensrf:opensrf /openils + 6. 
Restart and Test OpenSRF
 -
 +
 osrf_ctl.sh -l -a start_all
 /openils/bin/srfsh
 srfsh# request opensrf.math add 2 2
 You should see output such as:
 -
 +
 Received Data: 4

 ------------------------------------
 Request Completed Successfully
 Request Time in seconds: 0.007519
 ------------------------------------

srfsh#
 @@ -4537,77 +4533,77 @@
 Upgrade Evergreen from 1.6.1 to 2.0Upgrade Evergreen from 1.6.1 to 2.0
 -
 +
 PostgreSQL 8.4 is the minimum supported version of PostgreSQL. Evergreen 2.0 has been tested on Debian Squeeze (6.0) and Ubuntu
 Lucid (10.04). If you are running an older version of these distributions, you may want to upgrade before installing Evergreen 2.0. For instructions on
 upgrading these distributions, visit the Debian or Ubuntu websites.
 -
 +
 Copying these Apache configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying
 them. For example, if you purchased an SSL certificate, you
 will need to edit eg.conf to point to the appropriate SSL certificate files.
 -
 - 1.
 +
 + 1.
 As the opensrf user, download and extract Evergreen 2.0
 -
 +
 wget http://www.open-ils.org/downloads/Evergreen-ILS-2.0.4.tar.gz
 tar xzf Evergreen-ILS-2.0.4.tar.gz
 - For the latest edition of Evergreen 2.0, check the Evergreen download page at
 + For the latest edition of Evergreen 2.0, check the Evergreen download page at
 http://www.open-ils.org/downloads.php and adjust upgrading instructions accordingly.
 - 2.
 + 2.
 As the root user, install the prerequisites:
 - cd /home/opensrf/Evergreen-ILS-2.0.4
 + cd /home/opensrf/Evergreen-ILS-2.0.4
 On the next command, replace [distribution] with one of these values for your distribution of Debian or Ubuntu:
 -
 •
 +
 •
 debian-squeeze for Debian Squeeze (6.0)
 -
 •
 +
 •
 ubuntu-lucid for Ubuntu Lucid Lynx
 (10.04)
 - make -f Open-ILS/src/extras/Makefile.install [distribution]
 + make -f Open-ILS/src/extras/Makefile.install [distribution]
 - 3.
 + 3.
As the opensrf user, configure and compile Evergreen: - cd /home/opensrf/Evergreen-ILS-2.0.4 - ./configure --prefix=/openils --sysconfdir=/openils/conf - make - 4. + cd /home/opensrf/Evergreen-ILS-2.0.4 + ./configure --prefix=/openils --sysconfdir=/openils/conf + make + 4. As the root user, install Evergreen: - make STAFF_CLIENT_BUILD_ID=rel_2_0_4 install - 5. + make STAFF_CLIENT_BUILD_ID=rel_2_0_4 install + 5. Change to the Evergreen installation directory: - cd /home/opensrf/Evergreen-ILS-2.0.4 - 6. + cd /home/opensrf/Evergreen-ILS-2.0.4 + 6. As the root user, change all files to be owned by the opensrf user and group: - chown -R opensrf:opensrf /openils - 7. + chown -R opensrf:opensrf /openils + 7. As the opensrf user, update server symlink in /openils/var/web/xul/: - + cd /openils/var/web/xul/ rm server ln -s rel_2_0_4/server - 8. + 8. Update the evergreen database: - It is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong. - The 1.6.1-2.0-upgrade-db.sql upgrade script may take a long time (hours) to process + It is recommended that you back up your Evergreen database in order to restore your data if anything goes wrong. + The 1.6.1-2.0-upgrade-db.sql upgrade script may take a long time (hours) to process on larger systems. - + psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/1.6.1-2.0-upgrade-db.sql evergreen psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/2.0.0-2.0.1-upgrade-db.sql evergreen @@ -4618,83 +4614,84 @@ psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/2.0.3-2.0.4-upgrade-db.sql - 9. + 9. Run the reingest-1.6-2.0.pl script to generate an sql script. Then use the sql file to reingest bib records into your evergreen database. This is required to make the new facet sidebar in OPAC search results work and to upgrade the keyword indexes to use the revised NACO normalization routine. 
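The database upgrade scripts in step 8 must be applied in order, and a failure partway through should stop the sequence rather than silently continue. A minimal driver sketch; apply_upgrades and the PSQL override are illustrative, not part of Evergreen, and you would pass it the upgrade script names shipped with your release:

```shell
#!/bin/sh
# Sketch: apply Evergreen upgrade SQL scripts in order, stopping at
# the first failure. ON_ERROR_STOP=1 makes psql exit nonzero when a
# SQL statement fails.
PSQL="${PSQL:-psql}"

apply_upgrades() {
    for script in "$@"; do
        "$PSQL" -U evergreen -h localhost -v ON_ERROR_STOP=1 \
            -f "Open-ILS/src/sql/Pg/$script" evergreen || return 1
    done
}
# e.g.: apply_upgrades 1.6.1-2.0-upgrade-db.sql 2.0.3-2.0.4-upgrade-db.sql
```

Stopping on the first error keeps the database from ending up partway between schema versions, which is much easier to recover from with the backup taken earlier.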
If you are running a large Evergreen installation, it is recommended that you examine the script first. Reingesting a large number of bibliographic records
 may take several hours.

-perl Open-ILS/src/sql/Pg/reingest-1.6-2.0.pl
-psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/reingest-1.6-2.0.sql evergreen
+perl Open-ILS/src/sql/Pg/reingest-1.6-2.0.pl
+psql -U evergreen -h localhost -f Open-ILS/src/sql/Pg/reingest-1.6-2.0.sql evergreen
 - 10.
 + 10.
 As the opensrf user, copy /openils/conf/oils_web.xml.example to /openils/conf/oils_web.xml.
 (If upgrading from 1.6.1.x, oils_web.xml
 should already exist.)
 -
 cp /openils/conf/oils_web.xml.example /openils/conf/oils_web.xml
 - 11.
 +
 cp /openils/conf/oils_web.xml.example /openils/conf/oils_web.xml
 + 11.
 Update opensrf_core.xml and opensrf.xml by copying the new example files
 (/openils/conf/opensrf_core.xml.example and /openils/conf/opensrf.xml).
 -
 cp /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml
 -
 cp /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml
 - Copying these configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying
 them.
 - 12.
 +
 cp /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml
 +
 cp /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml
 + Copying these configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying
 them.
 + 12.
Update opensrf.xml with the database connection info:

perl Open-ILS/src/support-scripts/eg_db_config.pl --update-config \
--service all --create-offline --user evergreen --password evergreen \
--hostname localhost --port 5432 --database evergreen

13. Update /etc/apache2/startup.pl by copying the example from Open-ILS/examples/apache/startup.pl.

14. Update /etc/apache2/eg_vhost.conf by copying the example from Open-ILS/examples/apache/eg_vhost.conf.

15. Update /etc/apache2/sites-available/eg.conf by copying the example from Open-ILS/examples/apache/eg.conf.

Restart Evergreen and Test

1. As the opensrf user, start all Evergreen and OpenSRF services:

osrf_ctl.sh -l -a start_all

2. As the opensrf user, run autogen to refresh the static organizational data files:

cd /openils/bin
./autogen.sh -c /openils/conf/opensrf_core.xml -u

3. Start srfsh and try logging in using your Evergreen username and password:

/openils/bin/srfsh
srfsh% login username password

4. Start the Apache web server.

If you encounter errors, refer to the troubleshooting section of this documentation for tips on finding solutions and seeking further assistance from the Evergreen community.

Upgrading PostgreSQL from 8.2 to 8.4 (if required)

Evergreen 2.0 requires PostgreSQL version 8.4 or later.

The order of the following steps is very important.

1.
As the opensrf user, stop the Evergreen and OpenSRF services:

osrf_ctl.sh -l -a stop_all

2. Back up the Evergreen database data.

3. Upgrade to PostgreSQL 8.4 by removing the old version and installing PostgreSQL 8.4.

4. Create an empty Evergreen database in PostgreSQL 8.4 by issuing the following commands as the postgres user:

createdb -E UNICODE evergreen
createlang plperl evergreen
psql -f /usr/share/postgresql/8.4/contrib/tsearch2.sql evergreen
psql -f /usr/share/postgresql/8.4/contrib/pgxml.sql evergreen

5. As the postgres user on the PostgreSQL server, create a PostgreSQL user named evergreen for the database cluster:

createuser -P -s evergreen

Enter the password for the new PostgreSQL superuser (evergreen).

6. Restore data from the backup created in step 2.

7. To point tsearch2 to the proper function names in 8.4, run the SQL script /home/opensrf/Evergreen-ILS*/Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql using the psql command:

cd /home/opensrf/Evergreen-ILS*
psql -f Open-ILS/src/sql/Pg/000.english.pg84.fts-config.sql evergreen

8. Restart the Evergreen and OpenSRF services.

9. For additional information regarding upgrading PostgreSQL, see the following documentation in PostgreSQL: http://www.postgresql.org/docs/8.4/static/install-upgrading.html

Migrating Bibliographic Records

One of the most important and challenging tasks is migrating your bibliographic records to a new system. The procedure may be different depending on the system from which you are migrating and the content of the marc records exported from the existing system.
The procedures in this section deal with the process once the data from the existing system has been exported into files. The migration scripts are located in the Evergreen installation folder (/home/opensrf/Evergreen-ILS-1.6.1.6/Open-ILS/src/extras/import/) and are also available from the Evergreen repository (http://svn.open-ils.org/trac/ILS/browser/branches/rel_1_6_1/Open-ILS/src/extras/import).

Converting MARC records to Evergreen BRE JSON format

If you are starting with MARC records from your existing system or another source, use the marc2bre.pl script to create the JSON representation of a bibliographic record entry (hence bre) in Evergreen. marc2bre.pl can perform the following functions:

•Converts MARC-8 encoded records to UTF-8 encoding
•Converts MARC21 to MARCXML21
•Selects the unique record number field (common choices are '035' or '001'; check your records, as you might be surprised how a supposedly unique field actually has duplicates, though marc2bre.pl will select a unique identifier for subsequent duplicates)
•Extracts certain pertinent fields for indexing and display purposes (along with the complete MARCXML21 record)
•Sets the ID number of the first record from this batch to be imported into the biblio.record_entry table (hint - run the
following SQL to determine what this number should be to avoid conflicts: - + psql -U postgres evergreen # SELECT MAX(id)+1 FROM biblio.record_entry; - • + • If you are processing multiple sets of MARC records with marc2bre.plbefore loading the records into the database, you will need to keep track of the starting ID number for each subsequent batch of records that you are importing. For example, if you are processing three files of MARC records with 10000 records each into a clean database, you would use –startid 1, –startid 10001, and –startid 20001 parameters for each respective file. - • + • Ignore “trash” fields that you do not want to retain in Evergreen - • + • If you use marc2bre.pl to convert your MARC records from the MARC-8 encoding to the UTF-8 encoding, it relies on the MARC::Charset Perl module to complete the conversion. When importing a large set of items, you can speed up the process by using a utility like marc4j or marcdumper to convert the records @@ -4803,7 +4800,7 @@ psql -f /usr/share/postgresql/8.4/contrib/pgxml.sql evergreen horrible thing and ensure that there are no namespace prefixes in front of the element names. marc2bre.pl cannot parse the following example: - + <?xml version="1.0" encoding="UTF-8" ?> @@ -4825,7 +4822,7 @@ http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"> But marc2bre.pl can parse the same example with the namespace prefixes removed: - + <?xml version="1.0" encoding="UTF-8" ?> <collection xmlns:marc="http://www.loc.gov/MARC21/slim" @@ -4846,19 +4843,19 @@ http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"> - Converting Records for Import into PostgreSQLConverting Records for Import into PostgreSQL + Converting Records for Import into PostgreSQLConverting Records for Import into PostgreSQL - + Once you have your records in Open-ILS JSON ingest format, you then need to use pg_loader.pl to convert these records into a set of SQL statements that you can use to load the records into PostgreSQL. 
The –order and –autoprimary command line options (bre, mrd, mfr, etc.) map to class IDs defined in /openils/conf/fm_IDL.xml.

Adding Metarecords to the Database

Once you have loaded the records into PostgreSQL, you can create metarecord entries in the metabib.metarecord table by running the following SQL:

psql evergreen
# \i /home/opensrf/Evergreen-ILS-1.6*/src/extras/import/quick_metarecord_map.sql

Migrating Bibliographic Records Using the ESI Migration Tools

The following procedure explains how to migrate bibliographic records from marc records into Evergreen. This is a general guide and will need to be adjusted for your specific environment. It does not cover exporting records from specific proprietary ILS systems. For assistance with exporting records from your current system, please refer to the manuals for your system or ask for help from the Evergreen community.

1. Download the Evergreen migration utilities from the git repository. Use the command git clone git://git.esilibrary.com/git/migration-tools.git to clone the migration tools. Install the migration tools:

cd migration-tools/Equinox-Migration
make install

2. Add environmental variables for the migration and import tools. These paths must point to:

•the import perl scripts bundled with Evergreen
•the folder where you extracted the migration tools
•the location of the Equinox-Migration perl modules
•the location of the Evergreen perl modules (e.g.
perl5)

export PATH=[path to Evergreen]/Open-ILS/src/extras/import: \
/[path to migration-tools]/migration-tools:$PATH:.
export PERL5LIB=/openils/lib/perl5: \
/[path to migration-tools]/Equinox-Migration/lib

3. Dump the marc records into MARCXML using yaz-marcdump:

echo '<?xml version="1.0" encoding="UTF-8" ?>' > imported_marc_records.xml
yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml imported_marc_records.mrc >> imported_marc_records.xml

4. Test the validity of the XML file using xmllint.

5. Clean up the marc xml file using the marc_cleanup utility:

marc_cleanup --marcfile=imported_marc_records.xml --fullauto [--renumber-from #] -ot 001

The --renumber-from option is required if you have bibliographic records already in your system. Use it to set the starting id number higher than the last id in the biblio.record_entry table. The marc_cleanup command will generate a file called clean.marc.xml.

6. Create a fingerprint file using the fingerprinter utility:

fingerprinter -o incumbent.fp -x incumbent.ex clean.marc.xml

fingerprinter is used for deduplication of the incumbent records. The -o option specifies the output file and the -x option is used to specify the error output file.

7. If you have existing bibliographic records in your system from a previous import, create a fingerprint file for those as well using the fingerprinter utility:

fingerprinter -o production.fp -x production.fp.ex --marctype=MARC21 existing_marc_records.mrc \
--tag=901 --subfield=c

Here fingerprinter is used for deduplication against the existing records.

8. Create a merged fingerprint file, removing duplicate records:

cat production.fp incumbent.fp | sort -r > dedupe.fp
match_fingerprints [-t start id] -o records.merge dedupe.fp

9.
Create a new import XML file using the extract_loadset utility:

extract_loadset -l 1 -i clean.marc.xml -o merged.xml records.merge

10. Extract all of the currently used TCNs and generate the .bre and .ingest files to prepare for the bibliographic record load:

psql -U evergreen -c "select tcn_value from biblio.record_entry where not deleted" \
| perl -npe 's/^\s+//;' > used_tcns
marc2bre.pl --idfield 903 [--startid=#] --marctype=XML -f final.xml \
--used_tcn_file=used_tcns > evergreen_bre_import_file.bre

The option --startid needs to match the start id used in earlier steps and must be higher than the largest id value in the biblio.record_entry table. The option --idfield should match the marc datafield used to store your record ids.

11. Ingest the bibliographic records into the Evergreen database:

parallel_pg_loader.pl \

12. Load the records using psql and the sql scripts generated from the previous step:

psql -U evergreen -h localhost -d evergreen -f bibrecords.sql
psql -U evergreen < ~/Ever*/Open-ILS/src/extras/import/quick_metarecord_map.sql

13. Extract holdings from the marc records for importing copies into Evergreen using the extract_holdings utility:

extract_holdings --marcfile=clean.marc.xml --holding 999 --copyid 999i --map holdings.map

This command would extract holdings based on the 999 datafield in the marc records. The copy id is generated from the subfield i in the 999 datafield. You may need to adjust these options based on the field used for holdings information in your marc records. The map option holdings.map refers to a file to be used for mapping subfields to the holdings data you would like extracted.
Here is an example based on mapping holdings data to the 999 data field: - + callnum 999 a barcode 999 i @@ -5045,7 +5042,7 @@ circ_modifier 999 t Running the extract holdings script should produce an sql script HOLDINGS.pg similar to: - + BEGIN; egid, hseq, l_callnum, l_barcode, l_location, l_owning_lib, l_circ_modifier, @@ -5056,7 +5053,7 @@ egid, hseq, l_callnum, l_barcode, l_location, l_owning_lib, l_circ_modifier, Edit the holdings.pg sql script like so: - + BEGIN; TRUNCATE TABLE staging_items; @@ -5076,15 +5073,15 @@ COMMIT; Adding Copies to Bibliographic RecordsAdding Copies to Bibliographic Records - + Before bibliographic records can be found in an OPAC search copies will need to be created. It is very important to understand how various tables related to each other in regards to holdings maintenance. The following procedure will guide you through the process of populating Evergreen with volumes and copies. This is a very simple example. The SQL queries may need to be adjusted for the specific data in your holdings. - 1. + 1. Create a staging_items staging table to hold the holdings data: - + CREATE TABLE staging_items ( l_callnum text, -- call number label hseq int, @@ -5096,26 +5093,26 @@ CREATE TABLE staging_items ( l_owning_lib text -- actor.org_unit.shortname ); - 2. + 2. Import the items using the HOLDINGS.pg SQL script created using the extract_holdings utility. - + psql -U evergreen -f HOLDINGS.pg evergreen the file HOLDINGS.pg and/or the COPY query may need to be adjusted for your particular circumstances. - 3. + 3. Generate shelving locations from your staging table. - + INSERT INTO asset.copy_location (name, owning_lib) SELECT DISTINCT l.location, ou.id FROM staging_items l JOIN actor.org_unit ou ON (l.owning_lib = ou.shortname); - 4. + 4. Generate circulation modifiers from your staging table. 
- + INSERT INTO config.circ_modifier (code, name, description, sip2_media_type, magnetic_media) SELECT DISTINCT l_circ_modifier AS code, l_circ_modifier AS name, @@ -5125,19 +5122,19 @@ INSERT INTO config.circ_modifier (code, name, description, sip2_media_type, magn FROM staging_items WHERE l_circ_modifier NOT IN (SELECT code FROM config.circ_modifier); - 5. + 5. Generate call numbers from your staging table: - + INSERT INTO asset.call_number (creator,editor,record,label,owning_lib) SELECT DISTINCT 1, 1, egid, l.callnum, ou.id FROM staging.staging_items l JOIN actor.org_unit ou ON (l.owning_lib = ou.shortname); - 6. + 6. Generate copies from your staging table: - + INSERT INTO asset.copy ( circ_lib, creator, editor, create_date, barcode, STATUS, location, loan_duration, fine_level, circ_modifier, deposit, ref, call_number) @@ -5175,28 +5172,28 @@ SELECT DISTINCT ou.id AS circ_lib, Migrating Patron DataMigrating Patron Data - + - This section will explain the task of migrating your patron data from comma delimited files into Evergreen. + This section will explain the task of migrating your patron data from comma delimited files into Evergreen. It does not deal with the process of exporting from the non-Evergreen system since this process may vary depending on where you are extracting your patron records. Patron could come from an ILS or it could come from a student database in the case of academic records. When importing records into Evergreen you will need to populate 3 tables in your Evergreen database: - •actor.usr - The main table for user data•actor.card - Stores the barcode for users; Users can have more than 1 card but only 1 can be active at a given time;•actor.usr_address - Used for storing address information; A user can have more than one address. 
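Before loading patron data into these three tables, it can help to sanity-check the delimited export against Evergreen's NOT NULL constraints outside the database. A minimal sketch (the file layout and column names are hypothetical, loosely following the students staging-table example later in this section):

```shell
# Hypothetical patron export; columns follow the students staging-table
# example used later in this section.
cat > patrons.csv <<'EOF'
student_id,barcode,last_name,first_name,email
1,B0001,Smith,Alice,alice@example.org
2,B0002,Jones,Bob,
EOF

# List the barcodes of rows whose email column is empty -- these would
# need a DEFAULT value (or manual fixing) before loading into actor.usr.
awk -F, 'NR > 1 && $5 == "" { print $2 }' patrons.csv
```

Running the same kind of check for each required column gives you a quick picture of how much data massaging the export will need.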
Before following the procedures below to import patron data into Evergreen, it is a good idea to examine the fields in these tables in order to decide on a strategy for the data to include in your import. It is important to understand the data types and constraints on each field.

1. Export the patron data from your existing ILS or from another source into a comma delimited file. The comma delimited file used for importing the records should use Unicode (UTF8) character encoding.

2. Create a staging table. A staging table will allow you to tweak the data before importing. Here is an example sql statement:

CREATE TABLE students (
student_id int, barcode text, last_name text, first_name text, program_number text,
program_name text, email text, address_type text, street1 text, street2 text,

Note the DEFAULT variables. These allow you to set defaults for your library or to populate required fields if your data allows NULL values where fields are required in Evergreen.

3. Formatting of some fields may be required to fit Evergreen's field formatting. Here is an example of sql to adjust phone numbers in the staging table to fit the Evergreen field:

UPDATE students
SET phone = replace(replace(replace(rpad(substring(phone from 1 for 9), 10, '-') ||
substring(phone from 10), '(', ''), ')', ''), ' ', '-');

Data “massaging” may be required to fit formats used in Evergreen.

4.
Insert records from the staging table into the actor.usr Evergreen table: - + INSERT INTO actor.usr ( profile, usrname, email, passwd, ident_type, ident_value, first_given_name, family_name, day_phone, home_ou, claims_returned_count, net_access_level) @@ -5227,10 +5224,10 @@ substring(phone from 10), '(', ''), ')', ''), ' ', '-'); first_name, last_name, phone, home_ou, claims_returned_count, net_access_level FROM students; - 5. + 5. insert records into actor.card from actor.usr. - + INSERT INTO actor.card (usr, barcode) SELECT actor.usr.id, students.barcode FROM students @@ -5238,29 +5235,29 @@ INSERT INTO actor.card (usr, barcode) ON students.usrname = actor.usr.usrname; This assumes a one to one card patron relationship. If your patron data import has multiple cards assigned to one patron more complex import scripts may be required which look for inactive or active flags. - 6. + 6. Update actor.usr.card field with actor.card.id to associate active card with the user: - + UPDATE actor.usr SET card = actor.card.id FROM actor.card WHERE actor.card.usr = actor.usr.id; - 7. + 7. Insert records into actor.usr_address to add address information for users: - + INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code) SELECT actor.usr.id, students.street1, students.street2, students.city, students.province, students.country, students.postal_code FROM students INNER JOIN actor.usr ON students.usrname = actor.usr.usrname; - 8. + 8. update actor.usr.address with address id from address table. - + UPDATE actor.usr SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id FROM actor.usr_address @@ -5268,18 +5265,18 @@ UPDATE actor.usr This assumes 1 address per patron. More complex scenarios may require more sophisticated SQL. 
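The note in step 5 about patrons with multiple cards can be handled before the SQL stage by filtering the card export down to one active barcode per patron. A small sketch, assuming a hypothetical usr,barcode,active export format:

```shell
# Hypothetical card export: usr,barcode,active. A patron may have several
# barcodes, but only one active card (mirroring actor.card's rule).
cat > cards.csv <<'EOF'
1,B1-old,f
1,B1,t
2,B2,t
EOF

# Keep only the active barcode for each patron before generating the
# INSERT statements for actor.card.
awk -F, '$3 == "t" { print $1, $2 }' cards.csv
```

The filtered list can then feed a staging table so the one-to-one card assumption in step 5 holds.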
Creating an sql Script for Importing Patrons

The procedure for importing patrons can be automated with the help of an sql script. Follow these steps to create an import script:

1. Create a new file and name it import.sql.

2. Edit the file to look similar to this:

BEGIN;
-- Create staging table.

COMMIT;

The lines beginning with -- are comments which describe what each sql statement is doing and are not processed.

Batch Updating Patron Data

For academic libraries, doing batch updates to add new patrons to the Evergreen database is a critical task. The above procedures and import script can be easily adapted to create an update script for importing new patrons from external databases. If the data import file contains only new patrons, then the above procedures will work well to insert those patrons. However, if the data load contains all patrons, a second staging table and a procedure to remove existing patrons from that second staging table may be required before importing the new patrons. Moreover, additional steps to update address information and perhaps delete

Restoring the Evergreen Database to an Empty State

If you've done a test import of records and you want to quickly get Evergreen back to a pristine state, you can create a clean Evergreen database schema by performing the following:

1. Change to the SQL directory:

cd ILS/Open-ILS/src/sql/Pg/

2. Rebuild the database schema:

./build-db.sh [db-hostname] [db-port] [db-name] [db-user] [db-password] [db-version]

This will remove all of your data from the database and restore the default values.
Exporting Bibliographic Records into MARC files

The following procedure explains how to export Evergreen bibliographic records into MARC files using the marc_export support script. All steps should be performed by the opensrf user from your Evergreen server.

Processing time for exporting records will depend on several factors, such as the number of records you are exporting. If you are exporting a large number of records, it is recommended that you divide the export id file (records.txt) into a manageable number of records.

1. Create a text file list of the bibliographic record ids you would like to export from Evergreen. One way to do this is using SQL:

SELECT DISTINCT bre.id FROM biblio.record_entry AS bre
JOIN asset.call_number AS acn ON acn.record = bre.id
WHERE bre.deleted='false' AND acn.owning_lib=101 \g /home/opensrf/records.txt;

This query will create a file called records.txt containing a column of distinct ids of items owned by the organizational unit with the id 101.

2. Navigate to the support-scripts folder:

cd /home/opensrf/Evergreen-ILS*/Open-ILS/src/support-scripts/

3. Run marc_export, using the id file you created in step 1 to define which records to export:

cat /home/opensrf/records.txt | ./marc_export -i -c /openils/conf/opensrf_core.xml \
-x /openils/conf/fm_IDL.xml -f XML --timeout 5 > exported_files.xml

The example above exports the records into MARCXML format.

For help or for more options when running marc_export, run marc_export with the -h option:

./marc_export -h

Importing Authority Records

The following procedures explain how to import authority records into Evergreen, either from the command line or from the staff client.
- Importing Authority Records from Command LineImporting Authority Records from Command Line + Importing Authority Records from Command LineImporting Authority Records from Command Line The major advantages of the command line approach are its speed and its convenience for system administrators who can perform bulk loads of authority records in a controlled environment. - 1. + 1. Run marc2are.pl against the authority records, specifying the user name, password, MARC type (USMARC or XML). Use STDOUT redirection to either pipe the output directly into the next command or into an output file for inspection. For example, to process a set of authority records named auth_small.xml using the default user name and password and directing the output into a file named auth.are: - + cd Open-ILS/src/extras/import/ perl marc2are.pl --user admin --pass open-ils auth_small.xml > auth.are - 2. + 2. Run pg_loader.pl to generate the SQL necessary for importing the authority records into your system. To save time for very large batches of records, you could simply pipe the output of marc2are.pl directly into pg_loader.pl. - + cd Open-ILS/src/extras/import/ perl pg_loader.pl --auto are --order are auth.are > auth_load.sql - 3. + 3. Load the authority records from the SQL file that you generated in the last step into your Evergreen database using the psql tool. 
Assuming the default user name, host name, and database name for an Evergreen instance, that command looks like: - + psql -U evergreen -h localhost -d evergreen -f auth_load.sql - Importing authority records using the MARC Batch Import/Export interface from the Staff ClientImporting authority records using the MARC Batch Import/Export interface from the Staff Client + Importing authority records using the MARC Batch Import/Export interface from the Staff ClientImporting authority records using the MARC Batch Import/Export interface from the Staff Client Good for loading batches of up to 5,000 records (roughly) at a time, the major advantages to importing authority records using the MARC Batch Import/Export interface are that it does not require command-line or direct database access – good for both security in that it minimizes the number of people who need this access and for spreading the effort around to others in the library – and it does most of the work (for example, figuring out whether the batch of records is in XML or USMARC format) for you. To import a set of MARC authority records from the MARC Batch Import/Export interface: - 1. + 1. From the Evergreen staff client, select Cataloging → MARC Batch Import/Export. The Evergreen MARC File Upload screen opens, with Import Records as the highlighted tab. - 2. + 2. From the Bibliographic records drop-down menu, select Authority records. - 3. + 3. Enter a name for the queue (batch import job) in the Create a new upload queue field. - 4. + 4. Select the Auto-Import Non-Colliding Records checkbox. - 5. + 5. Click the Browse… button to select the file of MARC authorities to import. - 6. + 6. Click the Upload button to begin importing the records. The screen displays Uploading… Processing… to show that the records are being transferred to the server, then displays a progress bar to show the actual import progress. When the staff client displays the progress bar, you can disconnect your staff client safely. 
Very large batches of records might time out at this stage.

7. Once the import is finished, the staff client displays the results of the import process. You can manually display the import progress by selecting the Inspect Queue tab of the MARC Batch Import/Export interface and selecting the queue name. By default, the staff client does not

commands to stop and start the Evergreen server. You can start and stop Evergreen from the command line of the server using the osrf_ctl.sh script located in the openils/bin directory.

The osrf_ctl.sh command must be run as the opensrf user.

To view help on osrf_ctl.sh and get all of its options, run:

osrf_ctl.sh -h

To start Evergreen, run:

osrf_ctl.sh -l -a start_all

The -l flag is used to indicate that Evergreen is configured to use localhost as the host. If you have configured opensrf.xml to use your real hostname, do not use the -l flag. The -a option is required and indicates the action of the command, in this case start_all.

If you receive the error message osrf_ctl.sh: command not found, then your environment variable PATH does not include the /openils/bin directory. You can set it using the following command:

export PATH=$PATH:/openils/bin

If you receive the error message Can't locate OpenSRF/System.pm in @INC … BEGIN failed–compilation aborted, then your environment variable PERL5LIB does not include the /openils/lib/perl5 directory. You can set it using the following command:

export PERL5LIB=$PERL5LIB:/openils/lib/perl5

It is also possible to start a specific service. For example:

osrf_ctl.sh -l -a start_router

will only start the router service.
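If you hit the PATH or PERL5LIB errors described above, the exports can be verified in the same shell before retrying osrf_ctl.sh. A quick sketch:

```shell
# Apply the PATH and PERL5LIB fixes from the text, then confirm the new
# entries are actually present in the environment.
export PATH=$PATH:/openils/bin
export PERL5LIB=${PERL5LIB:-}:/openils/lib/perl5

case ":$PATH:" in
  *:/openils/bin:*) echo "PATH includes /openils/bin" ;;
  *)                echo "PATH is missing /openils/bin" ;;
esac
```

Remember that exports only last for the current shell session; add them to the opensrf user's shell profile to make them permanent.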
If you decide to start each service individually, you need to start them in a specific order for Evergreen to start correctly. Run the commands in this exact order:

osrf_ctl.sh -l -a start_router
osrf_ctl.sh -l -a start_perl
osrf_ctl.sh -l -a start_c

After starting or restarting Evergreen, it is also necessary to restart the Apache web server for the OPAC to work correctly.

To stop Evergreen, run:

osrf_ctl.sh -l -a stop_all

As with starting, you can choose to stop services individually.

To restart Evergreen, run:

osrf_ctl.sh -l -a restart_all

Backing Up

Backing up your system files and data is a critical task for server and database administrators. Having a strategy for backup and recovery could be the difference between a minor annoyance for users and a complete catastrophe.

Backing up the Evergreen Database

Most of the critical data for an Evergreen system – patrons, bibliographic records, holdings, transactions, bills – is stored in the PostgreSQL database. You can therefore use normal PostgreSQL backup procedures to back up this data. For example, the simplest method of backing up the Evergreen database is to use the pg_dump command to create a live backup of the database without having to interrupt any Evergreen services.
Here is an example pg_dump command which will dump a local Evergreen database into the file evergreen_db.backup:

pg_dump -U evergreen -h localhost -f evergreen_db.backup evergreen

To restore the backed up database into a new database, create a new database using the template0 database template and the UTF8 encoding, and run the psql command, specifying the new database as your target:

createdb -T template0 -E UTF8 -U evergreen -h localhost new_evergreen
psql -U evergreen -h localhost -f evergreen_db.backup new_evergreen

This method of backup is only suitable for small Evergreen instances. Larger sites should consider implementing continuous archiving (also known as “log shipping”) to provide more granular backups with lower system overhead. More information on backing up PostgreSQL databases can be found in the official PostgreSQL documentation.

Backing up Evergreen Files

When you deploy Evergreen, you will probably customize many aspects of your system, including the system configuration files, Apache configuration files, OPAC and Staff Client. In order to protect your investment of time, you should carefully consider the best approach to backing up these files.

There are a number of ways of tackling this problem. You could create a script that regularly creates a time-stamped tarball of all of these files and copies it to a remote server - but that would build up over time to hundreds of files. You could use rsync
You could use rsync - to ensure that the files of + to ensure that the files of interest are regularly updated on a remote server - but then you would lose track of the changes to the files, should you make a change that introduces a problem down the road. Perhaps one of the best options is to use a version control system like - Bazaar, - git - or Subversion to regularly push updates of the files you care about to a repository on a + Bazaar, + git + or Subversion to regularly push updates of the files you care about to a repository on a remote server. This gives you the advantage of quickly being able to run through the history of the changes you made, with a commenting system that reminds you why each change was made, combined with remote storage of the pertinent files in case of disaster on site. In addition, your team can create @@ -5594,50 +5591,50 @@ SELECT DISTINCT bre.id FROM biblio.record_entry AS bre system. Using a version control system also helps to recover system customizations after an upgrade. - Full System BackupFull System Backup + Full System BackupFull System Backup A full system backup archives every file on the file system. Some basic methods require you - to shut down most system processes; other methods can use mirrored RAID setups or - SAN storage to + to shut down most system processes; other methods can use mirrored RAID setups or + SAN storage to take “snapshot” backups of your full system while the system continues to run. The subject of how to implement full system backups is beyond the scope of this documentation. SecuritySecurity - + As with an ILS and resource accessible from the world wide web careful consideration needs to be given to the security of your Evergreen servers and database. While it is impossible to cover all aspects of security, it is important to take several precautions when setting up production Evergreen site. - 1. + 1. Change the Evergreen admin password and keep it secure. 
The default admin password is known by anyone who has installed Evergreen. It is not a secret and needs to be changed by the Administrator. It should also only be shared by those who need the highest level of access to your system. - 2. + 2. Create strong passwords using a combination of numerical and alphabetical characters for all of the Administrative passwords including the postgres and opensrf users - 3. - Open ports in the firewall with caution - It is only necessary to open ports + 3. + Open ports in the firewall with caution - It is only necessary to open ports 80 and 443 for TCP connections to the Evergreen server from the OPAC and the staff client. It is critical for administrators to understand the concepts of network security and take precautions to minimize vulnerabilities. - 4. - Use permissions and permission groups wisely - it is important to understand the + 4. + Use permissions and permission groups wisely - it is important to understand the purpose of the permissions and to only give users the level of access that they require. Managing Log FilesManaging Log Files - + Evergreen comes with a sophisticated logging system, but it is important to manage the OpenSRF and Evergreen logs. This section will provide a couple of log management techniques and tools. - Using the logrotate Utility to Manage Log SizeUsing the logrotate Utility to Manage Log Size + Using the logrotate Utility to Manage Log SizeUsing the logrotate Utility to Manage Log Size - + Fortunately, this is not a new problem for Unix administrators, and there are a number of ways of keeping your logs under control. On Debian and Ubuntu, for example, the logrotate utility controls when old log files are compressed and a new log file is started. @@ -5645,7 +5642,7 @@ SELECT DISTINCT bre.id FROM biblio.record_entry AS bre threshold of time or size has been reached and rotates the log files if a threshold condition has been met. 
To teach logrotate to rotate Evergreen logs on a weekly basis, or if they are > 50MB in size, create a new file /etc/logrotate.d/evergreen with the following contents: - + compress /openils/var/log/*.log { # keep the last 4 archived log files along with the current log file @@ -5659,26 +5656,26 @@ size 50M } - Changing Logging Level for EvergreenChanging Logging Level for Evergreen + Changing Logging Level for EvergreenChanging Logging Level for Evergreen - + Change the Log Levels in your config files. Changing the level of logging will help narrow down errors. - + A high logging level is not wise to do in a production environment since it will produce vastly larger log files and thus reduce server performance. Change logging levels by editing the configuration file - /openils/conf/opensrf_core.xml + /openils/conf/opensrf_core.xml you will want to search for lines containing <loglevel>. the default setting for loglevel is 3 which will log errors, warnings and information. The next level is 4 which is for debugging and provides additional information helpful for the debugging process. Thus, lines with: - <loglevel>3</loglevel> + <loglevel>3</loglevel> Should be changed to: - <loglevel>4</loglevel> + <loglevel>4</loglevel> to allow debugging level logging Other logging levels include 0 for no logging, 1 for logging errors and 2 for logging warnings @@ -5687,30 +5684,30 @@ size 50M Installing PostgreSQL from SourceInstalling PostgreSQL from Source - + Some Linux distributions, such as Debian Etch (4.0), do not offer PostgreSQL version 8.2 as an installable package. Before you continue, examine the software dependencies listed in Table 8.1, “Evergreen Software Dependencies” to ensure that your Linux distribution supports the required version of PostgreSQL. - + Some Linux distributions, such as Debian Etch (4.0), do not offer PostgreSQL version 8.2 as an installable package. 
Before you continue, examine the software dependencies listed in Table 8.1, “Evergreen Software
 Dependencies” to ensure that your Linux distribution supports the required version of
 PostgreSQL.
 - 1.
 + 1.
 
 Install the application stow on your system if it is not already
 installed. Issue the following command as the
 root user:
 - 
 + 
 
apt-get install stow
 
 
 - 2.
 + 2.
 
 Download, compile, and install the latest release for PostgreSQL 8.2 (which was version
 8.2.17 at the time of this writing). As the root user,
 follow these steps:
 - 
 + 
 
wget http://wwwmaster.postgresql.org/redir/198/h/source/v8.2.17/postgresql-8.2.17.tar.bz2
tar xjf postgresql-8.2.17.tar.bz2
@@ -5729,16 +5726,16 @@
stow pgsql
 
 
 - 3.
 + 3.
 
 Create the new user postgres to run the PostgreSQL
 processes. As the root user, execute this command:
 - adduser postgres
 - 4.
 + adduser postgres
 + 4.
 
 Initialize the database directory and start up PostgreSQL. As
 the root user, follow these
 steps:
 - 
 + 
 
mkdir -p /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
@@ -5747,7 +5744,7 @@ initdb -D /usr/local/pgsql/data -E UNICODE --locale=C
pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start
 
 - 
 + 
 If an error occurs during the final step above, review the path of the home directory for
 the postgres user. It may be
 /var/lib/postgresql instead of /home/postgres.
@@ -5755,10 +5752,10 @@
 
 Configuring PostgreSQLConfiguring PostgreSQL
 
 - 
 + 
 The values of several PostgreSQL configuration parameters may be changed for enhanced performance. The following table lists the default values
 and some suggested updates for several useful parameters:
 - Table 11.1.
Suggested configuration values (Parameter: Default / Suggested): default_statistics_target: 10 / 100; work_mem: 4Mb / 128Mb; shared_buffers: 8Mb / 512Mb; effective_cache_size: 128Mb / 4Gb
 
 
 Chapter 12. SIP ServerChapter 12. SIP Server
 
 
@@ -5770,12 +5767,12 @@ pg_ctl -D /usr/local/pgsql/data -l /home/postgres/logfile start
 SIP, standing for Standard Interchange Protocol, was developed by the 3M corporation to be a common protocol
 for data transfer between an ILS (referred to in SIP as an ACS, or Automated Circulation System)
 - and a
 + and a
 third party device. Originally, the protocol was developed for use with 3M SelfCheck
 (often abbreviated SC, not to be confused with Staff Client)
 systems, but has since expanded to other companies and devices. It is now common to find
 - SIP in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices.
 + SIP in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices.
 Some examples include:
 - •Patron Authentication (computer access, subscription databases)•Automated Material Handling (AMH) - The automated sorting of items, often to bins or
 + •Patron Authentication (computer access, subscription databases)•Automated Material Handling (AMH) - The automated sorting of items, often to bins or
 book carts, based on shelving location or other programmable criteria
 
 Installing the SIP ServerInstalling the SIP Server
 
 
 Getting the codeGetting the code
 Current SIP code lives at github:
 - cd /opt
 - git clone git://github.com/atz/SIPServer.git SIPServer
 + cd /opt
 + git clone git://github.com/atz/SIPServer.git SIPServer
 Or use the old style:
 - $ cd /opt
 - $ sudo cvs -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip login
 + $ cd /opt
 + $ sudo cvs -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip login
 When prompted for the CVS password, just hit Enter (sudo password may be req'd)
 - $ sudo cvs -z3
-d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip co -P SIPServer + $ sudo cvs -z3 -d:pserver:anonymous@openncip.cvs.sourceforge.net:/cvsroot/openncip co -P SIPServer Configuring the ServerConfiguring the Server - 1. + 1. - Type the following commands from the command prompt: - $ sudo su opensrf - $ cd /openils/conf - $ cp oils_sip.xml.example oils_sip.xml - 2. + Type the following commands from the command prompt: + $ sudo su opensrf + $ cd /openils/conf + $ cp oils_sip.xml.example oils_sip.xml + 2. - Edit oils_sip.xml. + Edit oils_sip.xml. Change the commented out <server-params> section to this: - + <server-params min_servers='1' min_spare_servers='0' max_servers='25' /> - 3. + 3. max_servers will directly correspond to the number of allowed SIP clients. Set the number accordingly, but bear in mind that too many connections can @@ -5822,26 +5819,26 @@ max_servers='25' Adding SIP UsersAdding SIP Users - 1. + 1. - Type the following commands from the command prompt: - $ sudo su opensrf - $ cd /openils/conf - $ cp oils_sip.xml.example oils_sip.xml - 2. + Type the following commands from the command prompt: + $ sudo su opensrf + $ cd /openils/conf + $ cp oils_sip.xml.example oils_sip.xml + 2. in the <accounts> section, add SIP client login information. Make sure that all <logins> use the same institution attribute, and make sure the institution is listed in <institutions>. All attributes in the <login> section will be used by the SIP client. - 3. + 3. In Evergreen, create a new profile group called SIP. This group should be a sub-group of Users (not Staff or Patrons). 
Set Editing Permission as group_application.user.sip_client and give the group the following permissions: - + COPY_CHECKIN COPY_CHECKOUT RENEW_CIRC @@ -5853,7 +5850,7 @@ max_servers='25' VIEW_USER_TRANSACTIONS OR use SQL like: - + INSERT INTO permission.grp_tree (id,name,parent,description,application_perm) VALUES (8, 'SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client'); @@ -5864,7 +5861,7 @@ VALUES (8,15,0),(8,16,0),(8,17,0),(8,31,0),(8,32,0),(8,48,0),(8,54,0),(8,75,0),( Verify: - + SELECT * FROM permission.grp_perm_map JOIN permission.perm_list ON @@ -5874,118 +5871,118 @@ WHERE grp=8; Keep in mind that the id (8) may not necessarily be available on your system. - 4. + 4. For each account created in the <login> section of oils_sip.xml, create a user (via the staff client user editor) that has the same username and password and put that user into the SIP group. - The expiration date will affect the SIP users' connection so you might want to make a note of + The expiration date will affect the SIP users' connection so you might want to make a note of this somewhere. Running the serverRunning the server To start the SIP server type the following commands from the command prompt: - $ sudo su opensrf - $ oils_ctl.sh -d /openils/var/run -s /openils/conf/oils_sip.xml -a [start|stop|restart]_sip + $ sudo su opensrf + $ oils_ctl.sh -d /openils/var/run -s /openils/conf/oils_sip.xml -a [start|stop|restart]_sip Logging-SIPLogging-SIP - - SyslogSyslog + + SyslogSyslog It is useful to log SIP requests to a separate file especially during initial setup by modifying your - syslog config file. - 1. + syslog config file. + 1. Edit syslog.conf. - $ sudo vi /etc/syslog.conf # maybe /etc/rsyslog.conf - 2. + $ sudo vi /etc/syslog.conf # maybe /etc/rsyslog.conf + 2. Add this: - local6.* -/var/log/SIP_evergreen.log - 3. + local6.* -/var/log/SIP_evergreen.log + 3. Syslog expects the logfile to exist so create the file. - $ sudo touch /var/log/SIP_evergreen.log - 4. 
+ $ sudo touch /var/log/SIP_evergreen.log + 4. Restart sysklogd. - $ sudo /etc/init.d/sysklogd restart + $ sudo /etc/init.d/sysklogd restart - Syslog-NGSyslog-NG + Syslog-NGSyslog-NG - 1. + 1. - Edit logging config. - sudo vi /etc/syslog-ng/syslog-ng.conf - 2. + Edit logging config. + sudo vi /etc/syslog-ng/syslog-ng.conf + 2. Add: - + # SIP2 for Evergreen filter f_eg_sip { level(warn, err, crit) and facility(local6); }; destination eg_sip { file("/var/log/SIP_evergreen.log"); }; log { source(s_all); filter(f_eg_sip); destination(eg_sip); }; - 3. + 3. Syslog-ng expects the logfile to exist so create the file. - $ sudo touch /var/log/SIP_evergreen.log - 4. + $ sudo touch /var/log/SIP_evergreen.log + 4. Restart syslog-ng - $ sudo /etc/init.d/syslog-ng restart + $ sudo /etc/init.d/syslog-ng restart Testing Your SIP ConnectionTesting Your SIP Connection - - • + + • In the top level CVS checkout of the SIPServer code. - $ cd SIPServer/t - • + $ cd SIPServer/t + • Edit SIPtest.pm, change the $instid, $server, $username, and $password variables. This will be enough to test connectivity. To run all tests, you'll need to change all the variables in the Configuration section. - $ PERL5LIB=../ perl 00sc_status.t + $ PERL5LIB=../ perl 00sc_status.t This should produce something like: - + 1..4 ok 1 - Invalid username ok 2 - Invalid username ok 3 - login ok 4 - SC status - • + • Don't be dismayed at Invalid Username. That's just one of the many tests that are run. More TestingMore Testing - 1. + 1. Once you have opened up either the SIP OR SIP2 ports to be accessible from outside you can do some testing via telnet. You can try this with localhost if you so wish, but we want to prove that SIP2 works from non-localhost. Replace $instid, $server, $barcode, $username, and $password variables below as necessary. - We are using 6001 here which is associated with SIP2 as per our configuration. - + We are using 6001 here which is associated with SIP2 as per our configuration. 
+ $ telnet $server 6001 Connected to $server. Escape character is '^]'. 9300CN**$username**|CO**$password**|CP**$instid** You should get back. - 941 - 2. + 941 + 2. Now just copy in the following line (with variables replaced) you don't need to hit enter, just paste! - 2300120080623 172148AO**$instid**|AA**$barcode**|AC$password|AD**$password** + 2300120080623 172148AO**$instid**|AA**$barcode**|AC$password|AD**$password** You will get back the patron information for $barcode (something similar to the what's below). -24 Y 00120100113 170738AEFirstName MiddleName LastName|AA**$barcode**|BLY|CQY +24 Y 00120100113 170738AEFirstName MiddleName LastName|AA**$barcode**|BLY|CQY |BHUSD|BV0.00|AFOK|AO**$instid**| The response declares it is a valid patron BLY with a valid password CQY and shows the user's @@ -6008,38 +6005,38 @@ Escape character is '^]'. Within each request and response, a number of fields (either a fixed width or separated with a | [pipe symbol] and preceeded with a 2-character field identifier) are used. The fields vary between message pairs. 
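The field layout just described (each |-delimited field begins with a two-character identifier) can be pulled apart with standard tools. A minimal sketch, using a 24 Patron Status response of the shape shown in this chapter; the output formatting is my own:

```shell
#!/bin/sh
# Split the variable-length portion of a SIP2 message into
# two-character field identifiers and their values.
msg='AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|'

fields=$(printf '%s\n' "$msg" | tr '|' '\n' |
  awk 'NF { print substr($0, 1, 2) " = " substr($0, 3) }')

printf '%s\n' "$fields"
# First two lines of output:
# AE = Doug Fiander
# AA = 999999
```

The same one-liner is handy when eyeballing raw telnet sessions like the one above, since it turns a pipe-delimited response into one field per line.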
- PairNameSupported?Details01Block PatronYes01_Block_Patron - ACS responds with 24 Patron Status Response09/10CheckinYes (with extensions)09/10_Checkin11/12CheckoutYes (no renewals)11/12_Checkout15/16HoldNo15/16_Hold17/18Item InformationYes (no extensions)17/18_Item_Information19/20Item Status UpdateNo19/20_Item_Status_Update - Returns Patron Enable response, but doesn't make any changes in EG23/24Patron StatusYes23/24_Patron_Status - 63/64 “Patron Information” preferred25/26Patron EnableNo25/26_Patron_Enable - Used during system testing and validation29/30RenewNO (maybe?)29/30_Renew35/36End SessionYes35/36_End_Session37/38Fee PaidNo37/38_Fee_Paid63/64Patron InformationYes (no extensions)63/64_Patron_Information65/66Renew AllNo65/66_Renew_All93/94LoginYes93/94_Login - Must be first command to Evergreen ACS (via socket) or SIP will terminate97/96Resend last messageYes97/96_Resend99/98SC/ACS StatusYes99/98_SC_and_ACS_Status + PairNameSupported?Details01Block PatronYes01_Block_Patron - ACS responds with 24 Patron Status Response09/10CheckinYes (with extensions)09/10_Checkin11/12CheckoutYes (no renewals)11/12_Checkout15/16HoldNo15/16_Hold17/18Item InformationYes (no extensions)17/18_Item_Information19/20Item Status UpdateNo19/20_Item_Status_Update - Returns Patron Enable response, but doesn't make any changes in EG23/24Patron StatusYes23/24_Patron_Status - 63/64 “Patron Information” preferred25/26Patron EnableNo25/26_Patron_Enable - Used during system testing and validation29/30RenewNO (maybe?)29/30_Renew35/36End SessionYes35/36_End_Session37/38Fee PaidNo37/38_Fee_Paid63/64Patron InformationYes (no extensions)63/64_Patron_Information65/66Renew AllNo65/66_Renew_All93/94LoginYes93/94_Login - Must be first command to Evergreen ACS (via socket) or SIP will terminate97/96Resend last messageYes97/96_Resend99/98SC/ACS StatusYes99/98_SC_and_ACS_Status 01 Block Patron01 Block Patron A selfcheck will issue a Block Patron command if a patron leaves their card in a selfcheck 
machine or if the selfcheck detects tampering (such as attempts - to disable multiple items during a single item checkout, multiple failed pin entries, etc). + to disable multiple items during a single item checkout, multiple failed pin entries, etc). In Evergreen, this command does the following: - •User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL - Blocked Card Message field).•Card is marked inactive. + •User alert message: CARD BLOCKED BY SELF-CHECK MACHINE (this is independent of the AL + Blocked Card Message field).•Card is marked inactive. The request looks like: - 01<card retained><date>[fields AO, AL, AA, AC] + 01<card retained><date>[fields AO, AL, AA, AC] Card Retained: A single character field of Y or N - tells the ACS whether the SC has retained the card (ex: left in the machine) or not. Date: An 18 character field for the date/time when the block occurred. Format: YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time, “ Z” (3 blanks and a Z) represents UTC(GMT/Zulu) Fields: See Fields for more details. The response is a 24 “Patron Status Response” with the following: - •Charge privileges denied•Renewal privileges denied•Recall privileges denied (hard-coded in every 24 or 64 response)•hold privileges denied•Screen Message 1 (AF): blocked•Patron + •Charge privileges denied•Renewal privileges denied•Recall privileges denied (hard-coded in every 24 or 64 response)•hold privileges denied•Screen Message 1 (AF): blocked•Patron 09/10 Checkin09/10 Checkin The request looks like: - 09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI] + 09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI] No Block (Offline): A single character field of Y or N - Offline transactions are not currently supported so send N. xact date: an 18 character field for the date/time when the checkin occurred. 
Format: YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time, “ Z” (3 blanks and a Z) represents UTC(GMT/Zulu) Fields: See Fields for more details. The response is a 10 “Checkin Response” with the following: - 10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG] + 10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG] Example (with a remote hold): - 09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01| - + 09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01| + 101YNY20100623 165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996 |CTBR3|CY373827|DANicholas Richard Woodard|CV02| @@ -6047,16 +6044,16 @@ Escape character is '^]'. to be picked up at CT “BR3”. Since the transaction is happening at AO “BR1”, the alert type CV is 02 for hold at remote library. The possible values for CV are: - •00: unknown•01: local hold•02: remote hold•03: ILL transfer (not used by EG)•04: transfer•99: other - + •00: unknown•01: local hold•02: remote hold•03: ILL transfer (not used by EG)•04: transfer•99: other + the logic for Evergreen to determine the content is magnetic_media comes from either legacy circ scripts or search_config_circ_modifier. - The default is non-magnetic. + The default is non-magnetic. The same is true for media_type (default 001). Evergreen does not populate the collection_code because it does not really have any, but it will provide the call_number where available. Unlike the item_id (barcode), the title_id is actually a title string, unless the configuration forces the return of the bib ID. Don't be confused by the different branches that can show up in the same response line. - •AO is where the transaction took place,•AQ is the “permanent location”, and•CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer). 
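The 18-character date/time field described above (YYYYMMDD, four blank zone characters for local time, then HHMMSS) can be generated with date(1). A minimal sketch; the function name is my own:

```shell
#!/bin/sh
# Build the 18-character SIP2 date/time field for local time:
# YYYYMMDD, four blank zone characters, then HHMMSS.
sip_timestamp() {
  date +'%Y%m%d    %H%M%S'
}

stamp=$(sip_timestamp)
printf '%s\n' "length=${#stamp}"   # -> length=18
```

For UTC, the spec text above calls for three blanks and a Z in the zone positions, which would be `date -u +'%Y%m%d   Z%H%M%S'`.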
+ •AO is where the transaction took place,•AQ is the “permanent location”, and•CT is the destination location (i.e., pickup lib for a hold or target lib for a transfer). 11/12 Checkout11/12 Checkout @@ -6069,24 +6066,24 @@ Escape character is '^]'. 17/18 Item Information17/18 Item Information The request looks like: - 17<xact_date>[fields: AO,AB,AC] + 17<xact_date>[fields: AO,AB,AC] The request is very terse. AC is optional. The following response structure is for SIP2. (Version 1 of the protocol had only 6 total fields.) - + 18<circulation_status><security_marker><fee_type><xact_date> [fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS] Example: - 1720060110 215612AOBR1|ABno_such_barcode| - 1801010120100609 162510ABno_such_barcode|AJ| - 1720060110 215612AOBR1|AB1565921879| -1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1 + 1720060110 215612AOBR1|ABno_such_barcode| + 1801010120100609 162510ABno_such_barcode|AJ| + 1720060110 215612AOBR1|AB1565921879| +1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1 |CTBR3|CSQA76.73.P33V76 1996| The first case is with a bogus barcode. The latter shows an item with a circulation_status of 10 for in transit between libraries. The known values of circulation_status are enumerated in the spec. EXTENSIONS: The CT field for destination location and CS call number are used by - Automated Material Handling systems. + Automated Material Handling systems. 19/20 Item Status Update19/20 Item Status Update @@ -6094,15 +6091,15 @@ Escape character is '^]'. 
23/24 Patron Status23/24 Patron Status Example: - 2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password| - 24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS| - 2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password| - 24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS| - 2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|LY|CQN|BHUSD|BV0.00|AFOK|AOCONS| - 24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS| - 1.The BL field (SIP2, optional) is valid patron, so the + 2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password| + 24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS| + 2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password| + 24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS| + 2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|LY|CQN|BHUSD|BV0.00|AFOK|AOCONS| + 24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS| + 1.The BL field (SIP2, optional) is valid patron, so the N value means - bad_barcode doesn't match a patron, the Y value means 999999 does.2.The CQ field (SIP2, optional) is valid password, so the N + bad_barcode doesn't match a patron, the Y value means 999999 does.2.The CQ field (SIP2, optional) is valid password, so the N value means bad_password doesn't match 999999's password, the Y means userpassword does. So if you were building the most basic SIP2 authentication client, you would check for @@ -6122,8 +6119,8 @@ Escape character is '^]'. 35/36 End Session35/36 End Session - 3520100505 115901AOBR1|AA999999| - 36Y20100507 161213AOCONS|AA999999|AFThank you!| + 3520100505 115901AOBR1|AA999999| + 36Y20100507 161213AOCONS|AA999999|AFThank you!| The Y/N code immediately after the 36 indicates success/failure. Failure is not particularly meaningful or important in this context, and for evergreen it is hardcoded Y. @@ -6134,15 +6131,15 @@ Escape character is '^]'. 
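Drawing on the BL/CQ discussion under 23/24 Patron Status above, a most-basic client-side check might look like the following. Treating BLY plus CQY as "authenticated" is my reading of that discussion, not an official recipe; the sample responses are adapted from the examples in this section:

```shell
#!/bin/sh
# Minimal check: a patron is treated as authenticated only when the
# 24 response carries both BLY (valid patron) and CQY (valid password).
sip2_authenticated() {
  case "$1" in *"|BLY|"*) ;; *) return 1 ;; esac
  case "$1" in *"|CQY|"*) return 0 ;; *) return 1 ;; esac
}

# Sample responses adapted from the 23/24 examples in this section.
ok='24  Y           00120100507    022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|'
bad='24  Y           00120100507    022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|'

sip2_authenticated "$ok"  && echo "authenticated"   # -> authenticated
sip2_authenticated "$bad" || echo "rejected"        # -> rejected
```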
63/64 Patron Information63/64 Patron Information Attempting to retrieve patron info with a bad barcode: - 6300020060329 201700 AOBR1|AAbad_barcode| - 64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1| + 6300020060329 201700 AOBR1|AAbad_barcode| + 64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1| Attempting to retrieve patron info with a good barcode (but bad patron password): - 6300020060329 201700 AOBR1|AA999999|ADbadpwd| - + 6300020060329 201700 AOBR1|AA999999|ADbadpwd| + 64 Y 00020100623 141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00 |BD2 Meadowvale Dr. St Thomas, ON Canada -90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons +90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons |PIUnfiltered|AFOK|AOBR1| See 23/24 Patron Status for info on BL and CQ fields. @@ -6154,11 +6151,11 @@ Escape character is '^]'. 93/94 Login93/94 Login Example: - 9300CNsip_01|CObad_value|CPBR1| - [Connection closed by foreign host.] - ... - 9300CNsip_01|COsip_01|CPBR1| - 941 + 9300CNsip_01|CObad_value|CPBR1| + [Connection closed by foreign host.] + ... + 9300CNsip_01|COsip_01|CPBR1| + 941 941 means successful terminal login. 940 or getting dropped means failure. 97/96 Resend97/96 Resend @@ -6166,21 +6163,21 @@ Escape character is '^]'. 
99/98 SC and ACS Status99/98 SC and ACS Status - 99<status code><max print width><protocol version> + 99<status code><max print width><protocol version> All 3 fields are required: - •0: SC is OK•1: SC is out of paper•2: SC shutting down•status code - 1 character:•max print width - 3 characters - the integer number of characters the client can print•protocol version - 4 characters - x.xx - + •0: SC is OK•1: SC is out of paper•2: SC shutting down•status code - 1 character:•max print width - 3 characters - the integer number of characters the client can print•protocol version - 4 characters - x.xx + 98<on-line status><checkin ok><checkout ok><ACS renewal policy> <status update ok><offline ok><timeout period> - + <retries allowed><date/time sync><protocol version><institution id> <library name><supported messages><terminal - location><screen message><print line> + location><screen message><print line> Example: - 9910302.00 - 98YYYYNN60000320100510 1717202.00AOCONS|BXYYYYYYYYYNYNNNYN| + 9910302.00 + 98YYYYNN60000320100510 1717202.00AOCONS|BXYYYYYYYYYNYNNNYN| The Supported Messages field BX appears only in SIP2, and specifies whether 16 different SIP commands are supported by the ACS or not. @@ -6203,50 +6200,50 @@ Escape character is '^]'. Testing SRU with yaz-clientTesting SRU with yaz-client yaz-client is installed as a part of Index Data's YAZ software. Recent versions include support for querying SRU servers. - Evergreen ships an SRU configuration + Evergreen ships an SRU configuration that works out of the box. To search Evergreen with yaz-client, choose the GET query method and issue the find command. In the following example, we connect to the Evergreen test server dev.gapines.org - substitute this hostname with your own Evergreen server hostname: - Some older versions of yaz-client have known issues with SRU. Ensure that you are using the latest edition of yaz from + Some older versions of yaz-client have known issues with SRU. 
Ensure that you are using the latest edition of yaz from
 http://www.indexdata.com/yaz.
 - $ yaz-client http://dev.gapines.org/opac/extras/sru
 - Z> sru GET 1.1
 - Z> find hemingway
 + $ yaz-client http://dev.gapines.org/opac/extras/sru
 + Z> sru GET 1.1
 + Z> find hemingway
 If your database has records that match that term, you will get the corresponding MARCXML records
 - in your response from yaz-client.
 + in your response from yaz-client.
 Here's what the SRU request looks like as sent to the Evergreen web server:
 - GET /opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0
 + GET /opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0
 You can see what the response looks like by hitting the same URL in your Web browser:
 http://dev.gapines.org/opac/extras/sru?version=1.1&operation=searchRetrieve&query=hemingway&maximumRecords=0
 CQL queries
 - Evergreen supports some CQL index-sets for advanced queries such as a subset of
 + Evergreen supports some CQL index-sets for advanced queries such as a subset of
 Dublin Core (DC) elements. Those DC elements that are
 - supported map to Evergreen default indexes as follows:
 + supported map to Evergreen default indexes as follows:
 DC element / Evergreen index: title / title; creator / author; contributor / author; publisher / keyword; subject / subject; identifier / keyword; type / none; format / none; language / lang
 Here are a few examples of SRU searches against some of these indexes:
 - •dc.title all “complete dinosaur”•dc.subject all “britain france”•dc.title exact “The Empire Strikes Back”•dc.author=king and dc.title=zone
 + •dc.title all “complete dinosaur”•dc.subject all “britain france”•dc.title exact “The Empire Strikes Back”•dc.author=king and dc.title=zone
 Setting up Z39.50 server supportSetting up Z39.50 server support
 - You must have Evergreen's SRU server running before you can enable Z39.50 server support.
 + You must have Evergreen's SRU server running before you can enable Z39.50 server support.
This support uses an Z39.50-to-SRU translator service supplied - by the Net::Z3950::Simple2ZOOM Perl module to enable Evergreen to act as a Z39.50 server. + by the Net::Z3950::Simple2ZOOM Perl module to enable Evergreen to act as a Z39.50 server. You could run the Z39.50 server on a different machine. It just needs to be able to connect to the Evergreen SRU server. - Setting up the Z39.50 server1. + Setting up the Z39.50 server1. - Install a recent version of yaz (the Makefile.install should have installed a suitable version).2. + Install a recent version of yaz (the Makefile.install should have installed a suitable version).2. - Install Net::Z3950::Simple2ZOOM (sudo cpan Net::Z3950::Simple2ZOOM)3. + Install Net::Z3950::Simple2ZOOM (sudo cpan Net::Z3950::Simple2ZOOM)3. Create a Simple2ZOOM configuration file. Something like the following is a good start, and is based on the Simple2ZOOM documentation example. We'll name the file dgo.conf for our example: - + <client> <database name="gapines"> <zurl>http://dev.gapines.org/opac/extras/sru</zurl> @@ -6273,14 +6270,14 @@ Escape character is '^]'. scoping purposes, and you can also append /holdings if you want to expose the holdings for any returned records. So your zurl could be http://dev.gapines.org/opac/extras/sru/BR1/holdings to limit the search scope to BR1 and its children, and to expose its holdings. - 4. + 4. Run simple2ZOOM as a daemon, specifying the configuration files and one or more listener addresses that the Z39.50 server will be accessible on. If you do not specify a port, it will automatically run on port 9999. In the following example, we tell it to listen both to localhost on port 2210, and on dev.gapines.org n port 210: - + <yazgfs> <server id="server1"> <retrievalinfo> @@ -6294,15 +6291,15 @@ Escape character is '^]'. </server> </yazgfs> - 5. + 5. Run simple2ZOOM as a daemon, specifying the configuration files and one or more listener addresses that the Z39.50 server will be accessible on. 
If you do not specify a port, it will automatically run on port 9999. In the following example, we tell it to listen both to localhost on port 2210, and on dev.gapines.org on port 210:
simple2zoom -c dgo.conf -- -f xml2marc-yaz.cfg localhost:2210 dev.gapines.org:210
To test the Z39.50 server, we can use yaz-client again:

yaz-client
Z> open localhost:2210/gapines
Connecting...OK.
@@ -6345,18 +6342,18 @@
Records: 1
If you have Evergreen installed and are encountering systematic errors, here are the steps to find the cause and solution to most problems. These instructions assume standard locations and file names for Evergreen installations, and may also include commands for specific Linux distributions.
Systematic Evergreen Restart to Isolate Errors
1. Stop Apache:
/etc/init.d/apache2 stop
or
apache2ctl stop
2. Stop OpenSRF:
osrf_ctl.sh -l -a stop_all
You should get output similar to this:

Stopping OpenSRF C process 12515...
Stopping OpenSRF C process 12520...
Stopping OpenSRF C process 12526...
@@ -6364,49 +6361,49 @@
Stopping OpenSRF Perl process 12471...
Stopping OpenSRF Router process 12466...
Or, if services have already been stopped, output may look like this:
OpenSRF C not running
OpenSRF Perl not running
OpenSRF Router not running
Occasionally osrf_ctl.sh fails to kill OpenSRF processes, so we should check to make sure that none are still running with the command:
ps -aef | grep OpenSRF
You should manually kill any OpenSRF processes.
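That manual process check can be scripted. The following is a sketch, not part of Evergreen; the filter_opensrf helper simply wraps the common grep trick so that the pattern does not match the grep command itself:

```shell
# Flag leftover OpenSRF processes after osrf_ctl.sh stop_all.
# Reading ps output on stdin keeps the filter itself easy to test.
filter_opensrf() {
    # the [O] bracket trick stops grep from matching its own command line
    grep '[O]penSRF'
}

ps -eo pid,args | filter_opensrf \
    && echo "leftover OpenSRF processes listed above; kill them before restarting" \
    || echo "no leftover OpenSRF processes"
```

If any lines are printed, kill those PIDs manually before moving on to the cleanup step below.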
If you were unable to stop OpenSRF with the above methods, you could also try this command:
rm -R /openils/var/run/*.pid
This will remove the temporary OpenSRF process files from the run directory, which may have been left over from a previous system boot cycle.
3. Restart Ejabberd and Memcached with the following commands:
sudo /etc/init.d/ejabberd restart
sudo /etc/init.d/memcached restart
4. Start the OpenSRF router and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_router
If the router started correctly, the output will be:
Starting OpenSRF Router
If the router does not start correctly, you should check the router error log files for error information. Evergreen 1.6 uses two routers, a public one and a private one, with two different logfiles:
/openils/var/log/private.router.log
/openils/var/log/public.router.log
A quick way to find error information in the logs is with the grep command:
grep ERR /openils/var/log/*router.log
As a final sanity check, look for router processes using the process status command:
ps -aef | grep Router
5. Start the OpenSRF Perl services and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_perl
You should see output similar to the following:

Starting OpenSRF Perl
* starting all services for ...
* starting service pid=7484 opensrf.settings
@@ -6420,35 +6417,35 @@
If the Perl services do not start correctly or you receive errors, search for errors in the following log files:
• /openils/var/log/router.log
• /openils/var/log/osrfsys.log
At this point you can use the grep command to find errors in any of the Evergreen log files:
grep ERR /openils/var/log/*.log
As a final sanity check, look for OpenSRF processes:
ps -aef | grep -i opensrf
6. Start the OpenSRF C services and check for errors:
/openils/bin/osrf_ctl.sh -l -a start_c
The output should be:
Starting OpenSRF C (host=localhost)
If the C service does not start, check for errors by grepping the log files:
grep ERR /openils/var/log/*.log
Check for OpenSRF processes:
ps -aef | grep -i opensrf
7. Smoke test with autogen.sh
The autogen tool will take some dynamic information from the database and generate static JavaScript files for use by the OPAC and staff client. It is also able to refresh the proximity map between libraries for the purpose of efficiently routing hold requests.
As user opensrf, you invoke autogen with the command:
/openils/bin/autogen.sh -c /openils/conf/opensrf_core.xml -u
If autogen completes successfully, the output will be:

Updating fieldmapper
Updating web_fieldmapper
Updating OrgTree
@@ -6462,22 +6459,22 @@
Done
If autogen does not complete its task and you receive errors, use grep to find errors in the log files:
grep ERR /openils/var/log/*.log
8. Connect to Evergreen using the srfsh command-line OpenSRF client:
/openils/bin/srfsh
In order for you to connect using srfsh, you will need to have set up the .srfsh.xml configuration file in your home directory as described in the installation chapter.
You will then see the srfsh prompt:
srfsh#
At the srfsh prompt, enter this command:
login admin open-ils
You should see the request verification:

Received Data: "6f63ff5542da1fead4431c6c280efc75"
------------------------------------
Request Completed Successfully
Request Time in seconds: 0.552430
@@ -6503,48 +6500,48 @@
Login Session: 28804. Session timeout: 420.000
srfsh#
If you encounter errors or if you are unable to connect, you should consult the srfsh.log file. The location of this file is configured in your .srfsh.xml configuration file and is /openils/var/log/srfsh.log by default.
Pressing Ctrl+D or entering “exit” will terminate srfsh.
9. Start Apache and check for errors:
/etc/init.d/apache2 start
or
apache2ctl start
You should see output:

* Starting web server apache2 ...done.
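Several of the steps above end the same way: grep the OpenSRF logs for ERR. A small helper can summarize those checks in one pass. This is a sketch rather than an Evergreen tool; the err_summary name and LOGDIR variable are ours, with LOGDIR defaulting to the standard install prefix used throughout this guide.

```shell
# Summarize ERR entries per log file after each restart step.
LOGDIR="${LOGDIR:-/openils/var/log}"

err_summary() {
    for f in "$LOGDIR"/*.log; do
        [ -f "$f" ] || continue                 # no logs yet: nothing to report
        n=$(grep -c 'ERR' "$f")                 # count lines mentioning ERR
        [ "$n" -gt 0 ] && echo "$n error lines in $f"
    done
    return 0                                    # an empty summary is not a failure
}

err_summary
```

Running it after each start_* step shows at a glance which service's log is accumulating errors.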
The Apache OpenSRF modules write to /openils/var/log/gateway.log. However, you should check all of the log files for errors:
grep ERR /openils/var/log/*.log
Another place to check for errors is the Apache error logs, generally located in the /var/log/apache2 directory.
If you encounter errors with Apache, a common source of potential problems is the Evergreen site configuration files /etc/apache2/eg_vhost.conf and /etc/apache2/sites-available/eg.conf.
10. Testing with settings-tester.pl
As the opensrf user, run the script settings-tester.pl to see if it finds any system configuration problems:

cd /home/opensrf/Evergreen-ILS-1.6.0.0
perl Open-ILS/src/support-scripts/settings-tester.pl

Here is example output from running settings-tester.pl:

LWP::UserAgent version 5.810
XML::LibXML version 1.70
XML::LibXML::XPathContext version 1.70
@@ -6578,7 +6575,7 @@
Parse::RecDescent version 1.95.1
SRU version 0.99
JSON::XS version 2.27

Checking Jabber connection for user opensrf, domain private.localhost
* Jabber successfully connected
@@ -6619,13 +6616,13 @@
$
If the output from the script does not help you find the problem, please do not make any further significant changes to your configuration. Follow the steps in the troubleshooting guide in Chapter 14, Troubleshooting System Errors.
11. Try to login from the staff client.
12. Testing the Catalog
By default, the OPAC will live at the URL http://my.domain.com/opac/. Navigate to this URL and the front page of the OPAC should load. There is a basic text entry field with some extra search options. If you have any problems loading this page, check the Apache error logs. If the page loads but does not function correctly, then check for possible JavaScript errors.
We
@@ -6633,12 +6630,966 @@
Assuming that the OPAC is functioning and there is data in your database, you can now perform other simple functional tests (e.g., searching the catalog).
Chapter 15. Local Administration Menu
Report any errors in this documentation using Launchpad.
Overview
Many Evergreen configuration options are available under the Admin (-) → Local Administration rollover menu. Settings are also available from the Local Administration page. Either access point can be used, but examples in this manual use the more comprehensive Local Administration rollover menu.
Items on this menu are visible to anyone logged into the staff client but usually require special permissions to edit. The following table describes each of the menu options.
Menu option: Description
Receipt Template Editor: Customize printed receipts (checkout receipts, hold slips, etc.) for a single workstation
Global Font and Sound Settings: Change font size and sound settings for a single workstation
Printer Settings Editor: Configure printer settings for a single workstation
Closed Dates Editor: Set library closure dates (affects due dates and fines)
Copy Locations Editor: Create and edit copy locations, also known as shelving locations
Library Settings Editor: Detailed library configuration settings
Non-Catalogued Type Editor: Create and edit optional non-catalogued item types
Statistical Categories Editor: Create and manage optional categories for detailed patron/item information
Standing Penalties: admin settings
Group Penalty Thresholds: Set library-specific thresholds for maximum items out, maximum overdues, and maximum fines
Field Documentation: admin settings
Notifications / Action Triggers: admin settings
Surveys: Create patron surveys to be completed at patron registration
Reports: Generate reports on any field in the Evergreen database
Cash Reports: View summary report of cash transactions for selected date range
Transit List: View items in transit to or from your library during selected date range
Circulation Policies: admin settings
Hold Policies: admin settings
Receipt Template Editor
This tip sheet will show you how to customize your receipts. This example will walk you through how to customize the receipt that is printed on checkout.
Receipt templates are saved on the workstation, but it is possible to export the templates to import to other workstations.
1. Select Admin (-) → Local Administration → Receipt Template Editor.
2. Select the checkout template from the dropdown menu.
3. You can edit the Header, Line Item or Footer on the right hand side.
4.
+ In the upper right hand corner you can see the available macros by clicking on the + Macros button.  A macro prints a real value from the database. + The macros that are available + vary slightly between types of receipt templates (i.e. bills, holds, items). + 5. + Here are the available macros for an item receipt, like a checkout receipt.   + + + + + Adding an imageAdding an image + + +1. + You can edit the Header to have an image.  This is the default checkout Header. +   + 2. + Using HTML tags you can insert a link to an image that exists on the web.  The + link will end in .jpg or possibly .gif.  To + get this link you can right click on the image and choose Copy Image + Location (Firefox).   + +If you are using Internet Explorer right click and select Save Picture + As… + + + 3. + Enter the URL of the + link for the image that you just copied off a website. + +By clicking outside the Header box the Preview will update to reflect the edit you just + made.   + + 4. + If the image runs into the text, add a <br/> after the + image to add a line break. + + You may use most HTML tags.  See http://www.w3schools.com/html/ for more information on HTML tags.   + + Line ItemLine Item + + + This is what the default Line Item looks like: + + + + + + In this example, the macro %barcode% prints the item barcodes of the books that were + checked out.  The macro %due_date% prints the due date for each item that was checked out. +   + + In this example, we will not make any changes to the Line Item + + + The due date can only be printed in the YYYY-MM-DD format. + + + Editing the footerEditing the footer + + + +1. + This is what the default Footer looks like: + + + + 2. + Remove the “You were helped by %STAFF_FIRSTNAME% <br/>”.  As many + libraries use a generic circulation login on the circulation desk, the “You were + helped by…” note isn’t meaningful.   + + + + 3. 
+ Once you have the checkout template how you want it, click Save Locally to save + the template to your computer.   + + + + + 4. + Click OK. + + + + + + + The footer is a good place to advertise upcoming library programs or events.   + + + Exporting templatesExporting templates + + + As you can only save a template on to the computer you are working on you will need to + export the template if you have more than one computer that prints out receipts (i.e., more + than one computer on the circulation desk, or another computer in the workroom that you use + to checkin items or capture holds with). + + +1. + Click on Export.   + + + + + +2. + Select the location to save the template to, name the template, and click Save. +   + + + +3. + Click OK.   + + + + + + Importing TemplatesImporting Templates + +1. + Click Import. + + + + 2. + Navigate to and select the template that you want to import.  Click Open. + + + + 3. + Click OK. + + + + 4. + Click Save Locally. + + + 5. + Click OK. + + + + + + + + Global Font and Sound SettingsGlobal Font and Sound Settings + + Global Font and Sound Settings apply to the current workstation + only. Use to turn staff client sounds on/off or to adjust the font size in the staff client + interface. These settings do not affect OPAC font sizes. + 1. + + Select Admin (-) → Local Administration → Global Font and Sound Settings. + 2. + + + + To turn off the system sounds, like the noise that happens when a patron with a + block is retrieved check the disable sound box and click + Save to Disk.   + + + + + 3. + + + To change the size of the font, pick the desired option and click + Save to Disk.   + + + + + + + Printer Settings EditorPrinter Settings Editor + + Use the Printer Settings Editor to configure printer output for + each workstation. + 1. + + Select Admin (-) → Local Administration → Printer Settings Editor. + 2. + + + From this screen you can print a test page, or alter the page settings for your + receipt printer.   + + + + + 3. 
+ + + Click on Page Settings to change printing format and + option settings.  Click on the Margins & + Header/Footer tab to adjust + + + + + + + Closed Dates EditorClosed Dates Editor + + These dates are in addition to your regular weekly closed days (see ???).    Both regular closed days and those entered in the + Closed Dates Editor affect due dates and fines: + • + + Due dates.  + + Due dates that would fall on closed days are automatically pushed forward to + the next open day. Likewise, if an item is checked out at 8pm, for example, and + would normally be due on a day when the library closes before 8pm, Evergreen + pushes the due date forward to the next open day. + + • + + Overdue fines.  + + Overdue fines are not charged on days when the library is closed. + + + Multi-Day ClosingMulti-Day Closing + + 1. + + Select Admin (-) → Local Administration → Closed Dates Editor. + 2. + + + Select Add Multi-Date Closing if your closed dates + are entire business days. + + + + + 3. + + + Enter applicable dates and a descriptive reason for the closing and click + Save.  Check the Apply to all of my + libraries box if your library is a multi-branch system and the + closing applies to all of your branches.   + + + + + + + You can type dates into fields using YYYY-MM-DD format or use calendar widgets to + choose dates. + + + Detailed ClosingDetailed Closing + + + If your closed dates include a portion of a business day, select Add Detailed + Closing at Step 2, then enter detailed hours and + dates and click Save. Time format must be HH:MM. + + + + + + + Copy Locations EditorCopy Locations Editor + + 1. + + Select Admin (-) → Local Administration → Copy Locations Editor. + 2. + + + You can create new copy locations, or edit existing copy locations. To create a + new shelving location type in the name, and select Yes or + No for the various attributes: OPAC Visible, + Holdable, Circulate, and Hold Verify. 
Holdable means a patron is able to place a hold on an item in this location; Hold Verify means staff will be prompted before an item is captured for a hold. Finally click Create.
3. In the bottom part of the Copy Locations Editor you can edit or delete existing copy locations. You cannot delete a location that contains items. In this example the copy location Adult Videos is being edited.
There are also options in the Copy Editor for a copy to be OPAC Visible (yes or no), Holdable (yes or no), or Circulate (yes or no). If either the copy record or the shelving location is set to Circulate: no, then the item will not be able to circulate.
This is where you see the shelving locations in the Copy Editor:
This is where the shelving location appears in the OPAC.
Library Settings Editor
With the Library Settings Editor, Local System Administrators (LSAs) can optionally customize Evergreen's behaviour for a particular library or library system. For descriptions of available settings see the Settings Overview table below.
To open the Library Settings Editor select Admin (-) → Local Administration → Library Settings Editor.
Settings Overview
This table describes the available settings that LSAs can change on a per-library basis. Below the table is a list of data types with details about acceptable settings values.
Setting: Description (Data type; Notes)
Alert on empty bib records: Alert staff before the last copy for a record is deleted (True/false)
Allow Credit Card Payments: Not available (True/false)
Change reshelving status interval: Amount of time to wait before changing an item from “reshelving” status to “available” (Duration)
Charge item price when marked damaged: If true Evergreen bills item price to the last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing.
(True/false)
Charge processing fee for damaged items: Optional processing fee billed to last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing. (Number (dollars); disabled when set to 0)
Circ: Lost items usable on checkin: Lost items are usable on checkin instead of going 'home' first (True/false)
Circ: Restore overdues on lost item return: If true, when a lost item is checked in, overdue fines are charged, up to the maximum fines amount (True/false)
Circ: Void lost item billing when returned: If true, when a lost item is checked in, the item replacement bill (item price) is voided. If the patron has already paid the bill a credit is applied. (True/false)
Circ: Void lost max interval: Items that have been lost this long will not result in voided billings when returned. Only applies if Circ: Void lost item billing or Circ: Void processing fee on lost item are true. (Duration)
Circ: Void processing fee on lost item return: If true the processing fee is voided when a lost item is returned (True/false)
Default Item Price: Replacement charge for lost items if price is unset in the Copy Editor. Does not apply if item price is set to $0 (Number (dollars))
Default locale: Sets language used in staff client (Text; can be set for each workstation at login)
Do not automatically delete empty bib records: If false, bib records (aka MARC records) will automatically be deleted when the last attached volume is deleted (True/false; set to false to avoid orphaned bib records)
GUI: Above-Tab Button Bar: If true the staff client button bar appears by default on all workstations registered to your library; staff can override this setting at each login.
(True/false)
GUI: Alternative Horizontal Patron Summary Panel: If true replaces the vertical patron summary panel with a horizontal one on all workstations registered to your library (True/false)
GUI: Network Activity Meter: If true displays a progress bar when the staff client is sending or receiving information from the Evergreen server (True/false)
GUI: Patron display timeout interval: Patron accounts opened in the staff client will close if inactive for this period of time (Duration; not functional in this version of Evergreen)
Holds: Estimated Wait (Days): Average number of days between check out and check in, multiplied by a patron's position in the hold queue to estimate wait for holds (Number; not yet implemented)
Holds: Expire Alert Interval: Time before a hold expires at which to send an email notifying the patron (Duration; only applies if your library notifies patrons of expired holds)
Holds: Expire Interval: Amount of time until an unfulfilled hold expires (Duration)
Holds: Hard boundary: Administrative setting (Number)
Holds: Soft boundary: Administrative setting (Number)
Holds: Soft stalling interval: Administrative setting (Duration)
Juvenile Age Threshold: Upper cut-off age for patrons to be considered juvenile, calculated from date of birth in patron accounts (Duration (years))
Lost Materials Processing Fee: The amount charged in addition to item price when an item is marked lost (Number (dollars))
Maximum previous checkouts displayed: Number of previous circulations displayed in staff client (Number)
OPAC Inactivity Timeout (in seconds): Number of seconds of inactivity before OPAC accounts are automatically logged out (Number)
OPAC: Allow pending addresses: If true patrons can edit their addresses in the OPAC.
Changes must be + approved by staff + True/false +  Password formatDefines acceptable format for OPAC account passwords Regular expression Default requires that passwords "be at least 7 characters in length, + contain at least one letter (a-z/A-Z), and contain at least one number. + Patron barcode format Defines acceptable format for patron barcodes Regular expression  Patron: password from phone #If true the last 4 digits of the patron's phone number is the password + for new accounts (password must still be changed at first OPAC + login) + True/false +  Selfcheck: Patron Login Timeout (in seconds)Administrative setting + Number + Not for SIP connectionsSelfcheck: Pop-up alert for errorsAdministrative setting + True/false + Not for SIP connectionsSelfcheck: Require patron passwordAdministrative setting + True/false + Not for SIP connectionsSending email address for patron noticesThis email address is for automatically generated patron notices (e.g. + email overdues, email holds notification).  It is good practice to set up a + generic account, like info@nameofyourlibrary.ca, so that one person’s + individual email inbox doesn’t get cluttered with emails that were not + delivered. + Text +  Show billing tab first when bills are presentIf true, accounts for patrons with bills will open to the billing tab + instead of check out + True/false +  Staff Login Inactivity Timeout (in seconds)Number of seconds of inactivity before staff client prompts for login + and password. + Number + + Void overdue fines when items are marked lostIf true overdue fines are voided when an item is marked lost + True/false + + + + + Acceptable formats for each setting type are + listed below. Quotation marks are never required when updating settings in the staff + client. 
Data type: Formatting
True/false: Select value from drop-down menu
Number: Enter a numerical value (decimals allowed in price settings)
Duration: Enter a number followed by a space and any of the following units: minutes, hours, days, months (30 minutes, 2 days, etc.)
Text: Free text
Non-Catalogued Type Editor
This is where you configure your non-catalogued types that appear in the dropdown menu for non-catalogued circulations.
1. Select Admin (-) → Local Administration → Non Catalogued Type Editor.
2. To set up a new non-catalogued type, type the name in the left hand box, and choose how many days the item will circulate for. Click Create.
Select the Circulate In-House box for non-catalogued items that will circulate in house. This can be used to manually track computer use, or meeting room rentals.
This is what the dropdown menu for non-catalogued circulations in the patron checkout screen looks like:
Group Penalty Thresholds
Group Penalty Thresholds block circulation transactions for users who exceed maximum check out limits, number of overdue items, or fines. Settings for your library are visible under Admin (-) → Local Administration → Group Penalty Thresholds.
Penalty: Effect
PATRON_EXCEEDS_FINES: Blocks new circulations and renewals if patron exceeds X in fines
PATRON_EXCEEDS_OVERDUE_COUNT: Blocks new circulations and renewals if patron exceeds X overdue items
PATRON_EXCEEDS_CHECKOUT_COUNT: Blocks new circulations if patron exceeds X items out
Accounts that exceed penalty thresholds display an alert message when opened and require staff overrides for blocked transactions.
Penalty threshold inheritance rules
Local penalty thresholds are identified by Org Unit and appear in the same table as the system wide defaults.
+ + + + + + + Where there is more than one threshold for the same penalty Evergreen gives + precedence to local settings. In this example Salt Spring Island Public Library (BGSI) + patrons are blocked when owing $5.00 in fines () instead of the system default(). + + Thresholds and are both for BGSI but apply to different user profile groups. + Threshold limits all patrons to a maximum of 12 items out, but provides an exception for the Board + profile. + + + + Multi-branch libraries may create rules for the entire library system or for + individual branches. Evergreen will use the most specific applicable rule. + + + + + + + Creating local penalty thresholdsCreating local penalty thresholds + + + Local System Administrators can override the system defaults by creating local penalty + thresholds for selected patron groups. + + 1. + + Select Admin (-) → Local Administration → Group Penalty Thresholds. + + + 2. + + + Click New Penalty Threshold. + + + + + 3. + + + The new penalty pop-up appears. Complete all fields and click + Save. + + + + + + Group - the profile group to which the rule applies. + Selecting Patrons includes all profiles below it in the + user hierarchy. + + + + + Org Unit - multi-branch libraries may create rules for + individual branches or the entire library system. + + + + + Penalty - select + PATRON_EXCEEDS_CHECKOUT_COUNT, + PATRON_EXCEEDS_OVERDUE_COUNT, or + PATRON_EXCEEDS_FINES + + 4. + + + After clicking Save the new threshold appears with + the defaults. Evergreen always gives precedence to local settings (in + this example, BSP). + + + + + + + Deleting or editing local penalty thresholdsDeleting or editing local penalty thresholds + + + To delete a local threshold select the row to remove and click Delete + Selected. The threshold is removed immediately without further + confirmation. + + + + + + To edit a local threshold, double-click the desired row to open the pop-up form. + Edit the form and click Save. 
New settings take effect + immediately. + + + + + + + Statistical Categories EditorStatistical Categories Editor + + This is where you configure your statistical categories (stat cats).  Stat cats are a + way to save and report on additional information that doesn’t fit elsewhere in Evergreen's + default records.  It is possible to have stat cats for copies or patrons.   + + 1. + + Select Admin (-) → Local Administration → Statistical Categories Editor. + 2. + + + To create a new stat cat, enter the name of the stat cat, select if you want + OPAC Visiblity, and select either + patron or copy from the + Type dropdown menu.   + + + + + + + Copy Stat Cats.  + + The image above shows some examples of copy stat cats. You would see these when + editing items in the Copy Editor, also known as the Edit + Item Attributes screen. You might use copy stat cats to track books you + have bought from a specific vendor, or donations. + + + + This is what the copy stat cat looks like in the Copy + Editor. + + + + + + + Patron stat cats.  + + Below are some examples of patron stat cats.  Patron stat cats can be used to keep + track of information like the high school a patron attends, or the home library for a + consortium patron, e.g. Interlink. You would see these in the fifth screen of patron + registration/edit patron.   + + + + + + + This is what the patron stat cat looks like in the patron registration screen.  It + looks very similar in the patron edit screen. + + + + + + Field DocumentationField Documentation + + Field Documentation is custom field-level documentation that explains individual fields for + library staff. As of 2.0, the field documentation only is used in the Patron Registration screen. + Administering Field DocumentationAdministering Field Documentation + + + If their permission settings allow, staff members can create local field documentation. This + requires the ADMIN_FIELD_DOC permission. 
The 'depth' at which that permission is applied, is the maximum + level of the org tree at which the staff member will be able to create field documentation. + 1. + + In the staff client, select Admin → Local Administration → Field Documentation + 2. + + Click the New button. + 3. + + Using the fm_class selector, select the database table for which you wish to create Field Documentation. This will show all of the + existing Field Documentation for that table. + As of Evergreen 2.0, only the ILS User table is used anywhere in the Evergreen UI + 4. + + Using the owner selector, select the topmost org unit at which you would like the field documentation to be available. + 5. + + Using the field selector, select the field you wish to document. + 6. + + Enter your actual documentation in the string text box. + 7. + + Click Save to save your Field Documentation entry + + + To view field documentation for different tables, use the Class selector to filter the Field Documentation list + + + Patron Field DocumentationPatron Field Documentation + + + On the patron registration screen there are small boxes along the left hand side. If a magnifying glass appears, you may click that magnifying + glass to retrieve the Field Documentation for that patron field. + + +SurveysSurveys + + This section illustrates how to create a survey, shows where the survey responses are saved + in the patron record, and explains how to report on surveys. + + Survey questions show up on the 6th patron registration screen, or on the 6th patron edit + screen. Surveys questions can be optional or required. Some examples of survey questions + might include: Would you use the library if it were open on a Sunday? + Would you like to be contacted by the library to learn about new + services? Do you attend library programs? + + Surveys come up when a patron is first registered. 
If you would like staff to ask the + survey questions when the patron’s library card is renewed, you’ll need to make that part of + local procedure. + + It is possible to run reports on survey questions. For example, you could find out how + many people say they would use the library if it were open on a Sunday, or you could get a + list of patrons who say they would like to receive marketing material from the library. + + 1. + + + From the Admin (-) menu, select Local Administration → Surveys. + + + + + + 2. + + + The Survey List will open. In this example the table is + empty because no surveys have been created. Click Add New + Survey. + + + + + + 3. + + + Fill out the New Survey form, then click Save + Changes. + + + + + A few tips when creating a new survey: + • + Start Date must always be in the future. It is not + possible to add questions to a survey after the start date. + • + Dates should be in YYYY-MM-DD format + • + OPAC Survey? and Poll Style? are + not yet implemented - leave unchecked + • + Check Is Required if the survey should be mandatory + for all new patrons + • + Check Display in User Summary to make survey answers + visible from patron records + + + 4. + + + A summary of your new survey will appear. Type the first survey question in + the Question field, then click Save Question + & Add Answer. Survey questions are multiple + choice. + + + + + + + + 5. + + + Enter possible multiple choice answers and click Add + Answer. Each question may have as many answers as you + like. + + + + + + 6. + + Repeat the steps above to add as many questions and answers as you wish. When + finished click Save, then Go Back to + return to the survey list. + + + 7. + + Your new survey will appear in the Survey List table. To make further changes click the survey name to open the detailed view. + + + + + This is what the survey looks like in the patron registration/edit screen. 
Note that in + this example this survey question appears in red and is required as the + Is Required box was checked when creating the survey. + + + + To see a patron’s response to a survey, retrieve the patron record. Click Other → Surveys to see the response. + + + + Cash ReportsCash Reports + + 1. + + + Select Admin (-) → Local Administration → Cash Reports. + 2. + + + Select the start date and the end date that you wish to run a cash report for. +  You can either enter the date in the YYYY-MM-DD format, or click on the calendar + icon to use the calendar widget.   + + + + + 3. + + Select your library from the drop down menu.  Click Go. +   + 4. + + + The output will show cash, check, and credit card payments.  It will also show + amounts for credits, forgiven payments, work payments and goods payments (i.e. + food for fines initiatives).  The output will look something like this: + + + + + + + By clicking on the hyperlinked column headers (i.e. workstation, + cash_payment, check_payment, etc.) it is + possible to sort the columns to order the payments from smallest to largest, or largest + to smallest, or to group the workstation names.   + + - Chapter 15. Action TriggersChapter 15. Action Triggers + Chapter 16. Action TriggersChapter 16. Action Triggers Report errors in this documentation using Launchpad. - Chapter 15. Action Triggers + Chapter 16. Action Triggers Report any errors in this documentation using Launchpad. - Chapter 15. Action TriggersChapter 15. Action Triggers + Chapter 16. Action TriggersChapter 16. Action Triggers Action Triggers were introduced to Evergreen in 1.6. They allow administrators the ability to set up actions for specific events. They are useful for notification events such as hold notifications. @@ -6647,15 +7598,15 @@ $ To access the Action Triggers module, select Admin → Local Administration → Notifications / Action triggers - You must have Local Administrator permissions to access the Action Triggers module. 
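The lifecycle an event definition describes (a hook event occurs, the processing delay elapses, a validator checks the event, and a reactor acts on it) can be sketched in miniature. The following Python is purely illustrative; Evergreen's real implementation is Perl, and every name below is invented for the example, not part of Evergreen:

```python
from datetime import datetime, timedelta

# Illustrative sketch only: hypothetical validator/reactor names,
# not Evergreen's actual internals.

def validator_overdue(event):
    """Stand-in validator: react only to events still flagged overdue."""
    return event["overdue"]

def reactor_send_notice(event):
    """Stand-in reactor: pretend to send a notice and report a status."""
    return f"notice sent for circ {event['circ_id']}"

def process_event(event, delay, validator, reactor, now):
    """Run one event through: processing delay -> validator -> reactor."""
    if now - event["occurred"] < delay:
        return "pending"   # the processing delay has not yet elapsed
    if not validator(event):
        return "invalid"   # validator returned false, so no reaction fires
    return reactor(event)

event = {"circ_id": 42, "occurred": datetime(2011, 1, 1), "overdue": True}
print(process_event(event, timedelta(days=7), validator_overdue,
                    reactor_send_notice, now=datetime(2011, 1, 9)))
# prints: notice sent for circ 42
```

In the real module, the Failure Cleanup and Success Cleanup fields of an event definition play a comparable follow-up role after validation fails or the reactor completes.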
+ You must have Local Administrator permissions to access the Action Triggers module. You will notice four tabs on this page: Event Definitions, Hooks, Reactors and Validators. Event DefinitionsEvent Definitions - + Event Definitions is the main tab and contains the key fields when working with action triggers. These fields include: - Table 15.1. Action Trigger Event DefinitionsFieldDescriptionOwning libraryThe shortname of the library for which the action / trigger / hook is defined.NameThe name of the trigger event, that links to a trigger event environment containing a set of fields + Table 16.1. Action Trigger Event DefinitionsFieldDescriptionOwning libraryThe shortname of the library for which the action / trigger / hook is defined.NameThe name of the trigger event, that links to a trigger event environment containing a set of fields that will be returned to the Validators / Reactors for processing.HooksThe name of the trigger for the trigger event. The underlying action_trigger.hook table defines the Fieldmapper class in the core_type column off of which the rest of the field definitions “hang”. EnabledSets the given trigger as enabled or disabled. This must be set to enabled for the Action trigger to run.Processing DelayDefines how long after a given trigger / hook event has occurred before the associated action (“Reactor”) will be taken.Processing Delay FieldDefines the field associated with the event on which the processing delay is calculated. For example, the processing delay @@ -6664,27 +7615,27 @@ $ if the validator returns false.ReactorsLinks the action trigger to the Reactor.Max Event Validity DelayDefine the threshold for how far back the action_trigger_runner.pl script should reach to generate a batch of events. - Creating Action Triggers1. + Creating Action Triggers1. From the top menu, select Admin → Local Administration → Notifications / Action triggers - 2. - Click on the New button.3. - Select an Owning Library.4. 
- Create a unique Name for your new action trigger.5. - Select the Hook.6. - Check the Enabled check box.7. - Create a unique Name for your new action trigger.8. + 2. + Click on the New button.3. + Select an Owning Library.4. + Create a unique Name for your new action trigger.5. + Select the Hook.6. + Check the Enabled check box.7. + Create a unique Name for your new action trigger.8. Set the Processing Delay in the appropriate format. E.g. 7 days to run 7 days from the trigger event - or 00:01:00 to run 1 hour after the Processing Delay Context Field.9. - Set the Processing Delay Context Field and Processing Group Context Field.10. + or 00:01:00 to run 1 hour after the Processing Delay Context Field.9. + Set the Processing Delay Context Field and Processing Group Context Field.10. Select the Validator, Reactor, Failure Cleanup and Success Cleanup. - 11. - Set the Processing Delay Context Field and Processing Group Context Field.12. + 11. + Set the Processing Delay Context Field and Processing Group Context Field.12. Enter text in the Template text box if required. These are for email messages. Here is a sample template for sending 90 day overdue notices: - + [%- USE date -%] [%- user = target.0.usr -%] To: robert.soulliere@mohawkcollege.ca The following items are 90 days overdue and have been marked LOST. [% END %] - 13. + 13. Once you are satisfied with your new event trigger, click the Save button located at the bottom of the form. - A quick and easy way to create new action triggers is to clone an existing action trigger. + A quick and easy way to create new action triggers is to clone an existing action trigger. - Cloning Existing Action Triggers1. + Cloning Existing Action Triggers1. Check the check box next to the action trigger you wish to clone - 2. + 2. Click the Clone Selected on the top left of the page. - 3. + 3. An editing window will open. Notice that the fields will be populated with content from the cloned action trigger.
Edit as necessary and give the new action trigger a unique Name. - 4. + 4. Click Save. - Editing Action Triggers1. + Editing Action Triggers1. Check the check box next to the action trigger you wish to delete - 2. + 2. Click the Delete Selected on the top left of the page. - Before deleting an action trigger, you should consider disabling it through the editing form. This way you can simply enable it if you decide that you would like to use + Before deleting an action trigger, you should consider disabling it through the editing form. This way you can simply enable it if you decide that you would like to use the action trigger in the future. - Deleting Action Triggers1. + Deleting Action Triggers1. Check the check box next to the action trigger you wish to delete - 2. + 2. Click the Delete Selected on the top left of the page. HooksHooks - + Hooks define the Fieldmapper class in the core_type column off of which the rest of the field definitions “hang”. - Table 15.2. HooksFieldDescriptionHook KeyA unique name given to the hook.Core TypeUsed to link the action trigger to the IDL class in fm_IDL.xmlDescriptionText to describe the purpose of the hook. PassiveIndicates whether or not an event is created by direct user action or is circumstantial. + Table 16.2. HooksFieldDescriptionHook KeyA unique name given to the hook.Core TypeUsed to link the action trigger to the IDL class in fm_IDL.xmlDescriptionText to describe the purpose of the hook. PassiveIndicates whether or not an event is created by direct user action or is circumstantial. You may also create, edit and delete Hooks but the Core Type must refer to an IDL class in the fm_IDL.xml file. ReactorsReactors - + Reactors link the trigger definition to the action to be carried out. - Table 15.3. Action Trigger ReactorsFieldDescriptionModule NameThe name of the Module to run if the action trigger is validated. It must be defined as a subroutine in + Table 16.3. 
Action Trigger ReactorsFieldDescriptionModule NameThe name of the Module to run if the action trigger is validated. It must be defined as a subroutine in + /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm or as a module in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor/*.pm.DescriptionDescription of the Action to be carried out. You may also create, edit and delete Reactors. Just remember that there must be an associated subroutine or module in the Reactor Perl module. ValidatorsValidators - + Validators set the validation test to be performed to determine whether the action trigger is executed. - Table 15.4. Action Trigger ValidatorsFieldDescriptionModule NameThe name of the subroutine in + Table 16.4. Action Trigger ValidatorsFieldDescriptionModule NameThe name of the subroutine in /openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm to validate the action trigger.DescriptionDescription of validation test to run. You may also create, edit and delete Validators. Just remember that there must be an associated subroutine in the Reactor.pm Perl module. Processing Action TriggersProcessing Action Triggers - + To run the action triggers, an Evergreen administrator will need to run the trigger processing script /openils/bin/action_trigger_runner.pl --process-hooks --run-pending. This should be set up as a cron job to run periodically. You have several options when running the script: - •--run-pending: Run the pending events.•--process-hooks: Create hook events•--osrf-config=[config_file]: OpenSRF core config file. Defaults to: - /openils/conf/opensrf_core.xml.•--custom-filters=[filter_file]: File containing a JSON Object which describes any hooks that should - use a user-defined filter to find their target objects. Defaults to: /openils/conf/action_trigger_filters.json•--max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go away.
- Defaults to 3600 (1 hour).•--hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for. If none are defined, it defaults to the list of hooks defined - in the --custom-filters option.•--debug-stdout: Print server responses to stdout (as JSON) for debugging.•--lock-file=[file_name]: Sets the lock file for the process.•--help: Show help information. + •--run-pending: Run the pending events.•--process-hooks: Create hook events•--osrf-config=[config_file]: OpenSRF core config file. Defaults to: + /openils/conf/opensrf_core.xml.•--custom-filters=[filter_file]: File containing a JSON Object which describes any hooks that should + use a user-defined filter to find their target objects. Defaults to: /openils/conf/action_trigger_filters.json•--max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go away. + Defaults to 3600 (1 hour).•--hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for. If none are defined, it defaults to the list of hooks defined + in the --custom-filters option.•--debug-stdout: Print server responses to stdout (as JSON) for debugging.•--lock-file=[file_name]: Sets the lock file for the process.•--help: Show help information. - Chapter 16. Booking Module AdministrationChapter 16. Booking Module Administration + Chapter 17. Booking Module AdministrationChapter 17. Booking Module Administration Report errors in this documentation using Launchpad. - Chapter 16. Booking Module Administration + Chapter 17. Booking Module Administration Report any errors in this documentation using Launchpad. - Chapter 16. Booking Module AdministrationChapter 16. Booking Module Administration + Chapter 17. Booking Module AdministrationChapter 17. 
Booking Module Administration Adapted with permission from original material by the Evergreen Community AbstractThe Evergreen booking module is included in Evergreen 1.6.1.x and above.The following @@ -6811,44 +7762,44 @@ The following items are 90 days overdue and have been marked LOST. Make a Cataloged Item Bookable in AdvanceMake a Cataloged Item Bookable in Advance - + If their permission settings allow, staff members can make items bookable. Staff members can do this in advance of a booking request, or they can do it on the fly. If you know in advance of the request that an item will need to be booked, you can make the item bookable. - 1. + 1. In the staff client, select Search → Search the Catalog - 2. + 2. Begin a title search to find an item. - 3. + 3. Click the title of the item that you want to book. - 4. + 4. The Record Summary will appear. In this view you can see information about the item and its locations. Click Actions for this Record → Holdings Maintenance in the top right corner of the screen. - 5. + 5. The Holdings Maintenance screen will appear. In this screen, you can view the volumes and copies of an item available at each branch. To view the barcodes and other information for each copy, click the arrow adjacent to the branch with the copy that you need to view. Click on successive arrows until you find the copy that you need to view. - 6. + 6. Select the item that you want to make bookable. Right click to open the menu, and click Make Item Bookable. - 7. + 7. The item has now been added to the list of resources that are bookable. To book the item, return to the Record Summary, and proceed with booking. - + In Evergreen 1.6.1, there is no way to make an item “unbookable” after it has been made bookable and has been reserved. The Delete Selected button on this screen deletes the resource from the screen, but the item will be able to be @@ -6861,14 +7812,14 @@ The following items are 90 days overdue and have been marked LOST.
If a patron wants to book an item immediately that does not have bookable status, you can book the item on the fly if you have the appropriate permissions. - 1. + 1. Follow steps one through five in the section called “Make a Cataloged Item Bookable in Advance”. - 2. + 2. Select the item that you want to make bookable. Right click to open the menu, and click Book Item Now. - 3. + 3. A Reservations screen will appear in a new tab, and you can make the reservation. @@ -6877,43 +7828,43 @@ The following items are 90 days overdue and have been marked LOST. Create a Bookable Status for Non-Bibliographic ItemsCreate a Bookable Status for Non-Bibliographic Items - + Staff with the required permissions can create a bookable status for non-bibliographic items. For example, staff can book conference rooms or laptops. You will be able to create types of resources, specify the names of individual resources within each type, and set attributes to describe those resources. You can then bring the values together through the Resource Attribute Map. - 1. + 1. First, create the type of resource that you want to make bookable. Select Admin → Server Administration → Booking → Resource Types. - 2. + 2. A list of resource types will appear. You may also see titles of cataloged items on this screen if they were added using the Make Item Bookable or Book Now links. You should not attempt to add cataloged items on this screen; it is best to use the aforementioned links to make those items bookable. In this screen, you will create a type of resource. - 3. + 3. In the right corner, click New Resource Type. - 4. + 4. A box will appear in which you will create a type of resource. In this box, you can set fines, determine “elbow room” periods between reservations on this type of resource, and indicate if this type of resource can be transferred to another library. Click Save when you have entered the needed information. - 5. + 5. After you click Save, the box will disappear. 
Refresh the screen to see the item that you have added. - 6. + 6. Next, set the attributes for the type of resource that you have created. Select Server Administration → Booking → Resource Attributes. - 7. + 7. Click New Resource Attribute. - 8. + 8. A box will appear in which you can add the attributes of the resource. Attributes are descriptive information that is provided to the staff member when the booking @@ -6921,51 +7872,51 @@ The following items are 90 days overdue and have been marked LOST. for its transportation. Other attributes might be number of seats available in a room, or MAC or PC attributes for a laptop. Click Save when the necessary information has been entered. - 9. + 9. The box will disappear. Refresh the screen to see the added attribute. - 10. + 10. Next, add the values for the resource attributes. A value can be a number, yes/no, or any other meaningful information. Select Server Administration → Booking → Resource Attribute Values. - 11. + 11. Select New Resource Attribute Value. - 12. + 12. A pop up box will appear. Select the Resource Attribute from the drop down box. Add the value. You can add multiple values for this field. Click Save when the required information has been added. - 13. + 13. If you refresh the screen, the attribute value may not appear, but it has been saved. - 14. + 14. Next, identify the specific objects that are associated with this resource type. Click Admin → Server Administration → Booking → Resources. - 15. + 15. Click New Resource. - 16. + 16. A pop-up box will appear. Add information for the resource and click Save. Repeat this process for each resource. - 17. + 17. Refresh the screen, and the resource(s) that you added will appear. - 18. + 18. Finally, use Resource Attribute Maps to bring together the resource and its attributes. Select Admin → Server Administration → Booking → Resource Attribute Maps. - 19. + 19. Select New Resource Attribute Map - 20. + 20. 
Select the resource that you want to match with its attributes, then click Save. Repeat for all applicable resources. - 21. + 21. You have now created bookable, non-bibliographic resource(s) with attributes. @@ -6973,7 +7924,7 @@ The following items are 90 days overdue and have been marked LOST. Setting Booking PermissionsSetting Booking Permissions - + Administrators can set permissions so that staff members can view reservations, make reservations, and make bibliographic or non-bibliographic items bookable. @@ -6988,17 +7939,17 @@ The following items are 90 days overdue and have been marked LOST. already included in the Staff permission group. All other booking permissions must be applied individually. - • + • View Reservations: VIEW_TRANSACTION* - • + • Use the pull list: RETRIEVE_RESERVATION_PULL_LIST - • + • Capture reservations: CAPTURE_RESERVATION - • + • Assist patrons with pickup and return: VIEW_USER* - • + • Create/update/delete reservations: ADMIN_BOOKING_RESERVATION @@ -7007,19 +7958,19 @@ The following items are 90 days overdue and have been marked LOST. bookable, booking items on the fly, and creating non-bibliographic resources for booking. - • + • Create/update/delete booking resource type: ADMIN_BOOKING_RESOURCE_TYPE - • + • Create/update/delete booking resource attributes: ADMIN_BOOKING_RESOURCE_ATTR - • + • Create/update/delete booking resource attribute values: ADMIN_BOOKING_RESOURCE_ATTR_VALUE - • + • Create/update/delete booking resource: ADMIN_BOOKING_RESOURCE - • + • Create/update/delete booking resource attribute maps: ADMIN_BOOKING_RESOURCE_ATTR_MAP @@ -7031,26 +7982,33 @@ The following items are 90 days overdue and have been marked LOST. - Chapter 17. Administration Functions in the Acquisitions ModuleChapter 17. Administration Functions in the Acquisitions Module + Chapter 18. Administration Functions in the Acquisitions ModuleChapter 18. 
Administration Functions in the Acquisitions Module Report errors in this documentation using Launchpad. - Chapter 17. Administration Functions in the Acquisitions Module + Chapter 18. Administration Functions in the Acquisitions Module Report any errors in this documentation using Launchpad. - Chapter 17. Administration Functions in the Acquisitions ModuleChapter 17. Administration Functions in the Acquisitions ModuleAbstract + Chapter 18. Administration Functions in the Acquisitions ModuleChapter 18. Administration Functions in the Acquisitions ModuleAbstract Currency TypesCurrency Types Currency types can be created and applied to funds in the administrative module. When a fund is applied to a copy or line item for purchase, the item will be purchased in the currency associated with that fund. - Create a currency typeCreate a currency type + Create a currency typeCreate a currency type - 1.To create a new currency type, click Admin → Server Administration → Acquisitions→ Currency types.2.Enter the currency code. No limits exist on the number of characters that can be entered in this field.3.Enter the name of the currency type in Currency Label field. No limits exist on the number of characters that can be entered in - this field.4.Click Save. + 1. + To create a new currency type, click Admin → Server Administration → Acquisitions→ Currency types.2. + Enter the currency code. No limits exist on the number of characters that can be entered in this field.3. + Enter the name of the currency type in Currency Label field. No limits exist on the number of characters that can be entered in + this field.4. + Click Save. - Edit a currency typeEdit a currency type + Edit a currency typeEdit a currency type - 1.To edit a currency type, click your cursor in the row that you want to edit. The row will turn blue.2.Double-click. The pop-up box will appear, and you can edit the fields.3.After making changes, click Save. 
- From the currency types interface, you can delete currencies that have never been applied to funds or used to make purchases. + 1. + To edit a currency type, click your cursor in the row that you want to edit. The row will turn blue.2. + Double-click. The pop-up box will appear, and you can edit the fields.3. + After making changes, click Save. + From the currency types interface, you can delete currencies that have never been applied to funds or used to make purchases. Exchange RatesExchange Rates @@ -7064,11 +8022,17 @@ The following items are 90 days overdue and have been marked LOST. When money is transferred between funds that use different currency types, Evergreen will automatically use the exchange rate to convert the money to the currency of the receiving fund. During such transfers, however, staff can override the automatic conversion by providing an explicit amount to credit to the receiving fund. - Create an exchange rateCreate an exchange rate + Create an exchange rateCreate an exchange rate - 1.To create a new exchange rate, click Admin → Server Administration → Acquisitions → Exchange Rates.2.Click New Exchange Rate.3.Enter the From Currency from the drop down menu populated by the currency types.4.Enter the To Currency from the drop down menu populated by the currency types.5.Enter the exchange Ratio.6.Click Save. + 1. + To create a new exchange rate, click Admin → Server Administration → Acquisitions → Exchange Rates.2. + Click New Exchange Rate.3. + Enter the From Currency from the drop down menu populated by the currency types.4. + Enter the To Currency from the drop down menu populated by the currency types.5. + Enter the exchange Ratio.6. + Click Save. - Edit an Exchange RateEdit an Exchange Rate + Edit an Exchange RateEdit an Exchange Rate Edit an exchange rate just as you would edit a currency type. @@ -7077,29 +8041,47 @@ The following items are 90 days overdue and have been marked LOST. 
Funding sources allow you to specify the sources that contribute monies to your fund(s). You can create as few or as many funding sources as you need. - Create a funding sourceCreate a funding source + Create a funding sourceCreate a funding source - 1.To create a new funding source, click Admin → Server Administration → Acquisitions → Funding Source.2.Enter a funding source name. No limits exist on the number of characters that can be entered in this field.3.Select an owner from the drop down menu. The owner indicates the organizational unit(s) whose staff can use this funding source. + 1. + To create a new funding source, click Admin → Server Administration → Acquisitions → Funding Source.2. + Enter a funding source name. No limits exist on the number of characters that can be entered in this field.3. + Select an owner from the drop down menu. The owner indicates the organizational unit(s) whose staff can use this funding source. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). - The rule of parental inheritance applies to this list. For example, if a system is made the owner of a funding source, - then users with appropriate permissions at the branches within the system could also use the funding source.4.Create a code for the source. No limits exist on the number of characters that can be entered in this field.5.Select a currency from the drop down menu. This menu is populated from the choices in the Currency Types interface.6.Click Save. + The rule of parental inheritance applies to this list. For example, if a system is made the owner of a funding source, + then users with appropriate permissions at the branches within the system could also use the funding source. + 4. + Create a code for the source. No limits exist on the number of characters that can be entered in this field.5. + Select a currency from the drop down menu. 
This menu is populated from the choices in the Currency Types interface.6. + Click Save. - Allocate Credits to Funding SourcesAllocate Credits to Funding Sources + Allocate Credits to Funding SourcesAllocate Credits to Funding Sources - 1.Apply a credit to this funding source.2.Enter the amount of money that the funding source contributes to the organization. Funding sources are not tied to fiscal + 1. + Apply a credit to this funding source.2. + Enter the amount of money that the funding source contributes to the organization. Funding sources are not tied to fiscal or calendar years, so you can continue to add money to the same funding source over multiple years, e.g. County Funding. Alternatively, you can name funding sources by year, e.g. County Funding 2010 and County Funding 2011, and apply - credits each year to the matching source.3.To apply a credit, click on the hyperlinked name of the funding source. The Funding Source Details will appear.4.Click Apply credit.5.Enter an amount to apply to this funding source.6.Enter a note. This field is optional.7.Click Apply. + credits each year to the matching source.3. + To apply a credit, click on the hyperlinked name of the funding source. The Funding Source Details will appear.4. + Click Apply credit.5. + Enter an amount to apply to this funding source.6. + Enter a note. This field is optional.7. + Click Apply. - Allocate credits to fundsAllocate credits to funds + Allocate credits to fundsAllocate credits to funds If you have already set up your funds, then you can then click the Allocate to Fund button to apply credits from the funding sources to the funds. If you have not yet set up your funds, or you need to add a new one, you can allocate credits to funds from the funds interface. See section 1.2 for more information. - 1.To allocate credits to funds, click Allocate to Fund.2.Enter the amount that you want to allocate.3.Enter a note. This field is optional.4.Click Apply. + 1. 
+ To allocate credits to funds, click Allocate to Fund.2. + Enter the amount that you want to allocate.3. + Enter a note. This field is optional.4. + Click Apply. - Track Debits and CreditsTrack Debits and Credits + Track Debits and CreditsTrack Debits and Credits You can track credits to and allocations from each funding source. These amounts are updated when credits and allocations are made in the Funding Source Details. Access the Funding Source Details by clicking on the hyperlinked name of the Funding Source. @@ -7112,12 +8094,17 @@ The following items are 90 days overdue and have been marked LOST. When you need to report on the amount that has been spent on all children’s materials, you can run a report on the fund tag to find total expenditures on children’s materials rather than reporting on each individual fund. - Create a Fund TagCreate a Fund Tag + Create a Fund TagCreate a Fund Tag - 1.To create a fund tag, click Admin → Server Administration → Acquisitions → Fund Tags.2.Click New Fund Tag. No limits exist on the number of characters that can be entered in this field.3.Select a Fund Tag Owner from the drop down menu. The owner indicates the organizational unit(s) whose staff can use this + 1. + To create a fund tag, click Admin → Server Administration → Acquisitions → Fund Tags.2. + Click New Fund Tag. No limits exist on the number of characters that can be entered in this field.3. + Select a Fund Tag Owner from the drop down menu. The owner indicates the organizational unit(s) whose staff can use this fund tag. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). - The rule of parental inheritance applies to this list.4.Enter a Fund Tag Name. No limits exist on the number of characters that can be entered in this field.5.Click Save. + The rule of parental inheritance applies to this list.4. + Enter a Fund Tag Name. 
No limits exist on the number of characters that can be entered in this field.5. + Click Save. FundsFunds @@ -7129,108 +8116,166 @@ The following items are 90 days overdue and have been marked LOST. by org units. At the top of the funds interface, you can set a contextual org unit and year. The drop down menu at the top of the screen enables you to focus on funds that are owned by specific organizational units during specific years. - Create a fundCreate a fund + Create a fundCreate a fund - 1.To create a new fund, click Admin → Server Administration → Acquisitions → Funds.2.Enter a name for the fund. No limits exist on the number of characters that can be entered in this field.3.Create a code for the fund. No limits exist on the number of characters that can be entered in this field.4.Enter a year for the fund. This can be a fiscal year or a calendar year. The format of the year is YYYY.5.Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this fund. This menu is populated with the + 1. + To create a new fund, click Admin → Server Administration → Acquisitions → Funds.2. + Enter a name for the fund. No limits exist on the number of characters that can be entered in this field.3. + Create a code for the fund. No limits exist on the number of characters that can be entered in this field.4. + Enter a year for the fund. This can be a fiscal year or a calendar year. The format of the year is YYYY.5. + Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this fund. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). - The rule of parental inheritance applies to this list. See section6.Select a currency type from the drop down menu. This menu is comprised of entries in the currency types menu. 
When a fund - is applied to a line item or copy, the price of the item will be encumbered in the currency associated with the fund.7.Click the Active box to activate this fund. You cannot make purchases from this fund if it is not active.8.Enter a Balance Stop Percent. The balance stop percent prevents you from making purchases when only a specified amount of the + The rule of parental inheritance applies to this list. See section6. + Select a currency type from the drop down menu. This menu is comprised of entries in the currency types menu. When a fund + is applied to a line item or copy, the price of the item will be encumbered in the currency associated with the fund.7. + Click the Active box to activate this fund. You cannot make purchases from this fund if it is not active.8. + Enter a Balance Stop Percent. The balance stop percent prevents you from making purchases when only a specified amount of the fund remains. For example, if you want to leave a five percent balance in the fund, then you would enter 5 in the field. You can also enter negative numbers to prevent over expenditure. When the fund reaches its balance stop percent, it will appear in - red when you apply funds to copies.9.Enter a Balance Warning Percent. The balance warning percent gives you a warning that the fund is low. You can specify any percent. For example, if you want to be + red when you apply funds to copies.9. + Enter a Balance Warning Percent. The balance warning percent gives you a warning that the fund is low. You can specify any percent. For example, if you want to be warned when the fund has only 10 percent of its balance remaining, then enter 10 in the field. When the fund reaches its balance warning percent, it will appear in yellow when you - apply funds to copies.10.Check the Propagate box to propagate funds. When you propagate a fund, the ILS will create a new fund for the following fisca + apply funds to copies.10. + Check the Propagate box to propagate funds. 
When you propagate a fund, the ILS will create a new fund for the following fiscal
							year with the same parameters as your current fund. All of the settings transfer except for the year and the amount of
-							money in the fund. Propagation occurs during the fiscal year close-out operation.11.
+							Check the Rollover box if you want to roll over remaining funds into the same fund next year.12.
+							Click Save.
						
						
-							Allocate Credits from Funding Sources to FundsAllocate Credits from Funding Sources to Funds
+							Allocate Credits from Funding Sources to FundsAllocate Credits from Funding Sources to Funds
						
Credits can be applied to funds from funding sources using the fund interface. The credits that you apply to the
				fund can be applied later to purchases.
						
-							1.
+							To access funds, click Admin → Server Administration → Acquisitions → Funds.2.
+							Click the hyperlinked name of the fund.3.
+							To add a credit to the fund, click the Create Allocation tab.4.
+							Choose a Funding Source from the drop down menu.5.
+							Enter an amount that you want to apply to the fund from the funding source.6.
+							Enter a note. This field is optional.7.
+							Click Apply.
						
-							Transfer credits between fundsTransfer credits between funds
+							Transfer credits between fundsTransfer credits between funds
						
The credits that you allocate to funds can be transferred between funds if desired. 
In the following example, you can transfer $500.00 from the Young Adult Fiction fund to the Children’s DVD fund. - 1.To access funds, click Admin → Server Administration → Acquisitions → Funds.2.Click the hyperlinked name of the originating fund.3.The Fund Details screen appears. Click Transfer Money.4.Enter the amount that you would like to transfer.5.From the drop down menu, select the destination fund. - 6.Add a note. This field is optional.7.Click Transfer. + 1. + To access funds, click Admin → Server Administration → Acquisitions → Funds.2. + Click the hyperlinked name of the originating fund.3. + The Fund Details screen appears. Click Transfer Money.4. + Enter the amount that you would like to transfer.5. + From the drop down menu, select the destination fund. + 6. + Add a note. This field is optional.7. + Click Transfer. - Track Balances and ExpendituresTrack Balances and Expenditures + Track Balances and ExpendituresTrack Balances and Expenditures The Fund Details allows you to track the fund’s balance, encumbrances, and amount spent. It also allows you to track allocations from the funding source(s), debits, and fund tags. - 1.To access the fund details, click on the hyperlinked name of the fund that you - created.2.The Summary allows you to track the following: - a.Balance – The balance is calculated by subtracting both items that have been - invoiced and encumbrances from the total allocated to the fund.b.Total Allocated – This amount is the total amount allocated from the Funding - Source.c.Spent Balance – This balance is calculated by subtracting only the items that - have been invoiced from the total allocated to the fund. It does not include - encumbrances.d.Total Debits – The total debits are calculated by adding the cost of items that - have been invoiced and encumbrances.e.Total Spent – The total spent is calculated by adding the cost of items that - have been invoiced. 
It does not include encumbrances.f.Total Encumbered – The total encumbered is calculated by adding all - encumbrances. + 1. + To access the fund details, click on the hyperlinked name of the fund that you + created.2. + The Summary allows you to track the following: + a.Balance – The balance is calculated by subtracting both items that have been + invoiced and encumbrances from the total allocated to the fund.b.Total Allocated – This amount is the total amount allocated from the Funding + Source.c.Spent Balance – This balance is calculated by subtracting only the items that + have been invoiced from the total allocated to the fund. It does not include + encumbrances.d.Total Debits – The total debits are calculated by adding the cost of items that + have been invoiced and encumbrances.e.Total Spent – The total spent is calculated by adding the cost of items that + have been invoiced. It does not include encumbrances.f.Total Encumbered – The total encumbered is calculated by adding all + encumbrances. + - Edit a FundEdit a Fund + Edit a FundEdit a Fund Edit a fund just as you would edit a currency type. - Perform Year End Closeout OperationPerform Year End Closeout Operation + Perform Year End Closeout OperationPerform Year End Closeout Operation The Year End Closeout Operation allows you to deactivate funds for the current year and create analogous funds for the next year. It transfers encumbrances to the analogous funds, and it rolls over any remaining funds if you checked the rollover box when creating the fund. 
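The six summary amounts described above reduce to simple arithmetic over the fund's total allocation, its invoiced debits, and its open encumbrances. A minimal sketch of those relationships (the function and field names here are illustrative only, not Evergreen's actual schema or code):

```python
# Sketch of the Fund Details summary arithmetic; names are illustrative only.
def fund_summary(total_allocated, invoiced, encumbered):
    total_spent = sum(invoiced)            # invoiced debits only
    total_encumbered = sum(encumbered)     # open encumbrances only
    total_debits = total_spent + total_encumbered
    return {
        "Total Allocated": total_allocated,
        "Total Spent": total_spent,
        "Total Encumbered": total_encumbered,
        "Total Debits": total_debits,
        # Spent Balance ignores encumbrances; Balance includes them.
        "Spent Balance": total_allocated - total_spent,
        "Balance": total_allocated - total_debits,
    }

summary = fund_summary(1000.00, invoiced=[200.00, 50.00], encumbered=[100.00])
```

For example, with $1,000.00 allocated, $250.00 invoiced, and $100.00 encumbered, the Balance is $650.00 while the Spent Balance is $750.00.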
- 1.To access the year end closeout of a fund, click Admin → Server Administration → - Acquisitions → Funds.2.Click Fund Propagation and Rollover.3.Check the box adjacent to Perform Fiscal Year Close-Out Operation.4.Notice that the context org unit reflects the context org unit that you selected at - the top of the Funds screen.5.If you want to perform the close-out operation on the context org unit and its child - units, then check the box adjacent to Include Funds for Descendant Org Units.6.Check the box adjacent to dry run if you want to test changes to the funds before + 1. + To access the year end closeout of a fund, click Admin → Server Administration → + Acquisitions → Funds.2. + Click Fund Propagation and Rollover.3. + Check the box adjacent to Perform Fiscal Year Close-Out Operation.4. + Notice that the context org unit reflects the context org unit that you selected at + the top of the Funds screen.5. + If you want to perform the close-out operation on the context org unit and its child + units, then check the box adjacent to Include Funds for Descendant Org Units.6. + Check the box adjacent to dry run if you want to test changes to the funds before they are enacted. Evergreen will generate a summary of the changes that would - occur during the selected operations. No data will be changed.7.Click Process.8.Evergreen will begin the propagation process. Evergreen will make a clone of each - fund, but it will increment the year by . + occur during the selected operations. No data will be changed.7. + Click Process.8. + Evergreen will begin the propagation process. Evergreen will make a clone of each + fund, but it will increment the year by . ProvidersProviders Providers are vendors. You can create a provider profile that includes contact information for the provider, holdings information, invoices, and other information. 
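The Fiscal Year Close-Out Operation described above can be summarized as: clone the fund into the following year, carry encumbrances to the clone, move remaining money only if the fund's Rollover box was checked, and deactivate the old fund. A rough sketch under those assumptions (the dictionary fields are invented for illustration; this is not Evergreen's implementation):

```python
# Illustrative sketch of fiscal year close-out; field names are assumptions.
def close_out(fund):
    # Clone the fund into the following year; money does not copy by default.
    new_fund = dict(fund, year=fund["year"] + 1, balance=0.0, encumbered=0.0)
    # Encumbrances transfer to the new year's analogous fund.
    new_fund["encumbered"] = fund["encumbered"]
    # Remaining money moves only if the Rollover box was checked.
    if fund["rollover"]:
        new_fund["balance"] = fund["balance"]
    # The current year's fund is deactivated and emptied.
    old_fund = dict(fund, active=False, balance=0.0, encumbered=0.0)
    return old_fund, new_fund

old, new = close_out({"name": "YA Fiction", "year": 2010, "active": True,
                      "balance": 250.0, "encumbered": 75.0, "rollover": True})
```

A dry run, as described above, would report these resulting values without committing them.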
-							Create a providerCreate a provider
+							Create a providerCreate a provider
						
-							1.
+							To create a new provider, click Admin → Server Administration → Acquisitions →
+							Providers.2.
+							Enter the provider name.3.
+							Create a code for the provider. No limits exist on the number of characters that can
+							be entered in this field.4.
+							Select an owner from the drop down menu. The owner indicates the organizational units whose staff can use this provider. This menu is populated with the shortnames
 that you created for your libraries in the organizational units tree 
 (See Admin → Server Administration → Organizational Units).
-							The rule of parental inheritance applies to this list. See section .1
-							for more information.5.
+							Select a currency from the drop down menu. This drop down list is populated by the
+							list of currencies available in the currency types.6.
+							A provider must be active in order for purchases to be made from that provider. To
 activate the provider, check the box adjacent to Active. To deactivate a vendor,
-							uncheck the box.7.
+							Select a default claim policy from the drop down box. This list is derived from the
+							claim policies that can be created.8.
+							Select an EDI default. 
This list is derived from the EDI accounts that can be created.9.Enter the provider’s email address.10.In the Fax Phone field, enter the provider’s fax number.11.In the holdings tag field, enter the tag in which the provider places holdings data.12.In the phone field, enter the provider’s phone number.13.If prepayment is required to purchase from this provider, then check the box - adjacent to prepayment required.14.Enter the Standard Address Number (SAN) for your provider.15.Enter the web address for the provider’s website in the URL field.16.Click Save. + uncheck the box.7. + Select a default claim policy from the drop down box. This list is derived from the + claim policies that can be created8. + Select an EDI default. This list is derived from the EDI accounts that can be created.9. + Enter the provider’s email address.10. + In the Fax Phone field, enter the provider’s fax number.11. + In the holdings tag field, enter the tag in which the provider places holdings data.12. + In the phone field, enter the provider’s phone number.13. + If prepayment is required to purchase from this provider, then check the box + adjacent to prepayment required.14. + Enter the Standard Address Number (SAN) for your provider.15. + Enter the web address for the provider’s website in the URL field.16. + Click Save. - Add contact and holdings information to providersAdd contact and holdings information to providers + Add contact and holdings information to providersAdd contact and holdings information to providers After you save the provider profile, the screen reloads so that you can save additional information about the provider. You can also access this screen by clicking the hyperlinked name of the provider on the Providers screen. The tabs allow you to add a provider address and contact, attribute definitions, and holding subfields. You can also view invoices associated with the provider. - 1.Enter a Provider Address, and click Save. 
- Required fields for the provider address are: Street 1, city, state, - country, post code. You may have multiple valid addresses.2.Enter the Provider Contact, and click Save.3.Your vendor may include information that is specific to your organization in MARC + 1. + Enter a Provider Address, and click Save. + Required fields for the provider address are: Street 1, city, state, + country, post code. You may have multiple valid addresses.2. + Enter the Provider Contact, and click Save.3. + Your vendor may include information that is specific to your organization in MARC tags. You can specify the types of information that should be entered in each MARC tag. Enter attribute definitions to correlate MARC tags with the information that they should contain in incoming vendor records. Some technical knowledge is - required to enter XPath information.4.You may have entered a holdings tag when you created the provider profile. You + required to enter XPath information.4. + You may have entered a holdings tag when you created the provider profile. You can also enter holdings subfields. Holdings subfields allow you to specify subfields - within the holdings tag to which your vendor adds holdings information.5.Click invoices to access invoices associated with a provider. + within the holdings tag to which your vendor adds holdings information.5. + Click invoices to access invoices associated with a provider. - Edit a providerEdit a provider + Edit a providerEdit a provider Edit a provider just as you would edit a currency type. - You can delete providers only if no purchase orders have been assigned + You can delete providers only if no purchase orders have been assigned to them. @@ -7239,26 +8284,39 @@ The following items are 90 days overdue and have been marked LOST. Many libraries use Electronic Data Interchange (EDI) accounts to order new acquisitions. In Evergreen 2.0, users can set up EDI accounts and manage EDI messages in the admin module. 
EDI messages and notes can be viewed in the acquisitions module. - The following fields are required to create an EDI account: host, + The following fields are required to create an EDI account: host, username, password, path, and incoming directory. - EDI AccountsEDI Accounts + EDI AccountsEDI Accounts Create EDI Accounts to communicate electronically with providers. - 1.Create a label. The label allows you to differentiate between accounts for the same + 1. + Create a label. The label allows you to differentiate between accounts for the same provider. No limits exist on the number of characters that can be entered in this - field.2.Enter a host. Your provider will provide you with the requisite FTP or SCP - information.3.Enter the username that has been supplied by your provider.4.Enter the password that has been supplied by your provider.5.Enter account information. This field enables you to add a supplemental password + field.2. + Enter a host. Your provider will provide you with the requisite FTP or SCP + information.3. + Enter the username that has been supplied by your provider.4. + Enter the password that has been supplied by your provider.5. + Enter account information. This field enables you to add a supplemental password for entry to a remote system after log in has been completed. This field is optional - for the ILS but may be required by your provider.6.Select an owner from the drop down menu. The owner indicates the organizational + for the ILS but may be required by your provider.6. + Select an owner from the drop down menu. The owner indicates the organizational units whose staff can use this EDI account. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). 
- The rule of parental inheritance applies to this list.7.The Last Activity updates automatically with any inbound or outbound - communication.8.Select a provider from the drop down menu to whom this account belongs.9.Enter a path. The path indicates the remote location on the server from which files - are pulled in to the ILS.10.Enter the incoming directory. This directory indicates the location on your local - network to which the files download.11.Enter the vendor account number supplied by your provider.12.Enter the vendor account code supplied by your provider.13.Click Save. + The rule of parental inheritance applies to this list.7. + The Last Activity updates automatically with any inbound or outbound + communication.8. + Select a provider from the drop down menu to whom this account belongs.9. + Enter a path. The path indicates the remote location on the server from which files + are pulled in to the ILS.10. + Enter the incoming directory. This directory indicates the location on your local + network to which the files download.11. + Enter the vendor account number supplied by your provider.12. + Enter the vendor account code supplied by your provider.13. + Click Save. - EDI MessagesEDI Messages + EDI MessagesEDI Messages The EDI messages screen displays all incoming and outgoing messages between the library and the vendor. @@ -7269,55 +8327,76 @@ The following items are 90 days overdue and have been marked LOST. Evergreen 2.0 provides minimal claiming functionality. Currently, all claiming is manual, but the admin module enables you to build claim policies and specify the action(s) that users should take to claim items. - Create a claim policyCreate a claim policy + Create a claim policyCreate a claim policy The claim policy link enables you to name the claim policy and specify the organization that owns it. - 1.To create a claim policy, click Admin → Server Administration → Acquisitions → - Claim Policies.2.Create a claim policy name. 
No limits exist on the number of characters that can be - entered in this field.3.Select an org unit from the drop down menu. The org unit indicates the + 1. + To create a claim policy, click Admin → Server Administration → Acquisitions → + Claim Policies.2. + Create a claim policy name. No limits exist on the number of characters that can be + entered in this field.3. + Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this claim policy. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). - The rule of parental inheritance applies to this list.4.Enter a description. No limits exist on the number of characters that can be entered - in this field.5.Click Save. + The rule of parental inheritance applies to this list.4. + Enter a description. No limits exist on the number of characters that can be entered + in this field.5. + Click Save. - Create a claim typeCreate a claim type + Create a claim typeCreate a claim type The claim type link enables you to specify the reason for a type of claim. - 1.To create a claim type, click Admin → Server Administration → Acquisitions → - Claim types.2.Create a claim type. No limits exist on the number of characters that can be entered - in this field.3.Select an org unit from the drop down menu. The org unit indicates the + 1. + To create a claim type, click Admin → Server Administration → Acquisitions → + Claim types.2. + Create a claim type. No limits exist on the number of characters that can be entered + in this field.3. + Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this claim type. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). 
- The rule of parental inheritance applies to this list.4.Enter a description. No limits exist on the number of characters that can be entered - in this field.5.Click Save. + The rule of parental inheritance applies to this list.4. + Enter a description. No limits exist on the number of characters that can be entered + in this field.5. + Click Save. - Create a claim event typeCreate a claim event type + Create a claim event typeCreate a claim event type The claim event type describes the physical action that should occur when an item needs to be claimed. For example, the user should notify the vendor via email that the library is claiming an item. - 1.To access the claim event types, click Admin → Server Administration → - Acquisitions →Claim event type.2.Enter a code for the claim event type. No limits exist on the number of characters - that can be entered in this field.3.Select an org unit from the drop down menu. The org unit indicates the + 1. + To access the claim event types, click Admin → Server Administration → + Acquisitions →Claim event type.2. + Enter a code for the claim event type. No limits exist on the number of characters + that can be entered in this field.3. + Select an org unit from the drop down menu. The org unit indicates the organizational units whose staff can use this event type. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). - The rule of parental inheritance applies to this list.4.Enter a description. No limits exist on the number of characters that can be entered - in this field.5.If this claim is initiated by the user, then check the box adjacent to Library Initiated. - Currently, all claims are initiated by a user. The ILS cannot automatically - claim an issue.6.Click Save. + The rule of parental inheritance applies to this list.4. + Enter a description. 
No limits exist on the number of characters that can be entered + in this field.5. + If this claim is initiated by the user, then check the box adjacent to Library Initiated. + Currently, all claims are initiated by a user. The ILS cannot automatically + claim an issue.6. + Click Save. - Create a claim policy actionCreate a claim policy action + Create a claim policy actionCreate a claim policy action The claim policy action enables you to specify how long a user should wait before claiming the item. - 1.To access claim policy actions, click Admin → Server Administration → Acquisitions - →Claim Policy Actions.2.Select an Action (Event Type) from the drop down menu.3.Enter an action interval. This field indicates how long a user should wait before - claiming the item.4.In the Claim Policy ID field, select a claim policy from the drop down menu.5.Click Save. - You can create claim cycles by adding multiple claim policy actions to a + 1. + To access claim policy actions, click Admin → Server Administration → Acquisitions + →Claim Policy Actions.2. + Select an Action (Event Type) from the drop down menu.3. + Enter an action interval. This field indicates how long a user should wait before + claiming the item.4. + In the Claim Policy ID field, select a claim policy from the drop down menu.5. + Click Save. + You can create claim cycles by adding multiple claim policy actions to a claim policy. @@ -7325,7 +8404,7 @@ The following items are 90 days overdue and have been marked LOST. Invoice menus allow you to create drop down menus that appear on invoices. You can create an invoice item type or invoice payment method. - Invoice item typeInvoice item type + Invoice item typeInvoice item type The invoice item type allows you to enter the types of additional charges that you can add to an invoice. Examples of additional charge types might include taxes or @@ -7335,12 +8414,18 @@ The following items are 90 days overdue and have been marked LOST. 
Invoice item types can also be used when adding non-bibliographic items to a purchase order. When invoiced, the invoice item type will copy from the purchase order to the invoice. - 1.To create a new charge type, click Admin → Server Administration → Acquisitions - → Invoice Item Type.2.Click New Invoice Item Type.3.Create a code for the charge type. No limits exist on the number of characters that - can be entered in this field.4.Create a label. No limits exist on the number of characters that can be entered in - this field. The text in this field appears in the drop down menu on the invoice.5.If items on the invoice were purchased with the monies in multiple funds, then you + 1. + To create a new charge type, click Admin → Server Administration → Acquisitions + → Invoice Item Type.2. + Click New Invoice Item Type.3. + Create a code for the charge type. No limits exist on the number of characters that + can be entered in this field.4. + Create a label. No limits exist on the number of characters that can be entered in + this field. The text in this field appears in the drop down menu on the invoice.5. + If items on the invoice were purchased with the monies in multiple funds, then you can divide the additional charge across funds. Check the box adjacent to Prorate? if - you want to prorate the charge across funds.6.Click Save. + you want to prorate the charge across funds.6. + Click Save. Invoice payment methodInvoice payment method @@ -7349,35 +8434,52 @@ The following items are 90 days overdue and have been marked LOST. payment method(s) that you accept. The text that you enter in the admin module will appear as a drop down menu in the invoice type and payment method fields on the invoice. - 1.To create a new invoice payment method, click Admin → Server Administration → - Acquisitions → Invoice Payment Method.2.Click New Invoice Payment Method.3.Create a code for the invoice payment method. 
No limits exist on the number of - characters that can be entered in this field.4.Create a name for the invoice payment method. No limits exist on the number of + 1. + To create a new invoice payment method, click Admin → Server Administration → + Acquisitions → Invoice Payment Method.2. + Click New Invoice Payment Method.3. + Create a code for the invoice payment method. No limits exist on the number of + characters that can be entered in this field.4. + Create a name for the invoice payment method. No limits exist on the number of characters that can be entered in this field. The text in this field appears in the drop - down menu on the invoice.5.Click Save. + down menu on the invoice.5. + Click Save. Distribution FormulasDistribution Formulas Distribution formulas allow you to specify the number of copies that should be distributed to specific branches. You can create and reuse formulas as needed. - Create a distribution formulaCreate a distribution formula + Create a distribution formulaCreate a distribution formula - 1.Click Admin → Server Administration → Acquisitions →Distribution Formulas.2.Click New Formula.3.Enter a Formula Name. No limits exist on the number of characters that can be - entered in this field.4.Choose a Formula Owner from the drop down menu. The Formula Owner indicates + 1. + Click Admin → Server Administration → Acquisitions →Distribution Formulas.2. + Click New Formula.3. + Enter a Formula Name. No limits exist on the number of characters that can be + entered in this field.4. + Choose a Formula Owner from the drop down menu. The Formula Owner indicates the organizational units whose staff can use this formula. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). - The rule of parental inheritance applies to this list.5.Ignore the Skip Count field. 
It has no purpose in 2.0.6.Click Save.7.Click New Entry.8.Select an Owning Library from the drop down menu. This indicates the branch that + The rule of parental inheritance applies to this list.5. + Ignore the Skip Count field. It has no purpose in 2.0.6. + Click Save.7. + Click New Entry.8. + Select an Owning Library from the drop down menu. This indicates the branch that will receive the items. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server - Administration → Organizational Units).9.Select a Shelving Location from the drop down menu.10.In the Item Count field, enter the number of items that should be distributed to the - branch. You can enter the number or use the arrows on the right side of the field.11.Click Apply Changes. The screen will reload.12.To view the changes to your formula, click Admin → Server Administration → + Administration → Organizational Units).9. + Select a Shelving Location from the drop down menu.10. + In the Item Count field, enter the number of items that should be distributed to the + branch. You can enter the number or use the arrows on the right side of the field.11. + Click Apply Changes. The screen will reload.12. + To view the changes to your formula, click Admin → Server Administration → Acquisitions → Distribution Formulas. The item_count will reflect the entries to - your distribution formula. - To edit the Formula Name, click the hyperlinked name of the formula in + your distribution formula. + To edit the Formula Name, click the hyperlinked name of the formula in the top left corner. A pop up box will enable you to enter a new formula name. - Edit a distribution formulaEdit a distribution formula + Edit a distribution formulaEdit a distribution formula To edit a distribution formula, click the hyperlinked title of the formula. @@ -7388,16 +8490,22 @@ The following items are 90 days overdue and have been marked LOST. lists or purchase orders. 
You can define the alerts from which staff can choose. Line item alerts appear in a pop up box when the line item, or any of its copies, are marked as received. - Create a line item alertCreate a line item alert + Create a line item alertCreate a line item alert - 1.To create a line item alert, click Administration → Server Administration → - Acquisitions → Line Item Alerts.2.Click New Line Item Alert Text.3.Create a code for the text. No limits exist on the number of characters that can be - entered in this field.4.Create a description for the text. No limits exist on the number of characters that can - be entered in this field.5.Select an owning library from the drop down menu. The owning library indicates the + 1. + To create a line item alert, click Administration → Server Administration → + Acquisitions → Line Item Alerts.2. + Click New Line Item Alert Text.3. + Create a code for the text. No limits exist on the number of characters that can be + entered in this field.4. + Create a description for the text. No limits exist on the number of characters that can + be entered in this field.5. + Select an owning library from the drop down menu. The owning library indicates the organizational units whose staff can use this alert. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units). - 6.Click Save. + 6. + Click Save. Line Item MARC Attribute DefinitionsLine Item MARC Attribute Definitions @@ -7418,16 +8526,23 @@ The following items are 90 days overdue and have been marked LOST. Cancel reasons also enable you to suspend or delay a purchase. For example, you could create a cancel reason of “back ordered,” and you could choose to keep the debits associated with the purchase. 
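The choice just described, keeping or discarding the debits of a cancelled or suspended purchase, can be sketched as follows; the function, flag, and data structures are hypothetical illustrations, not Evergreen's implementation:

```python
# Hypothetical sketch of applying a cancel reason to a line item;
# the field names and structure are invented for illustration.
def cancel_line_item(line_item, reason):
    line_item["state"] = "cancelled"
    line_item["cancel_reason"] = reason["label"]
    # A reason like "back ordered" can keep its debits, so the
    # encumbrance survives a suspended (rather than abandoned) purchase.
    if not reason["keep_debits"]:
        line_item["debits"] = []
    return line_item

li = cancel_line_item({"state": "on-order", "debits": [19.99]},
                      {"label": "back ordered", "keep_debits": True})
```

With a keep-debits reason the encumbrance remains attached; with the flag unset, the debits would be cleared when the item is cancelled.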
Create a cancel/suspend reason

1. To add a new cancel reason, click Administration → Server Administration → Acquisitions → Cancel reasons.
2. Click New Cancel Reason.
3. Select a using library from the drop down menu. The using library indicates the organizational units whose staff can use this cancel reason. This menu is populated with the shortnames that you created for your libraries in the organizational units tree (See Admin → Server Administration → Organizational Units).
4. Create a label for the cancel reason. This label will appear when you select a cancel reason on an item or a PO.
5. Create a description of the cancel reason. This is a free text field and can consist of any text of your choosing.
6. If you want to retain the debits associated with the cancelled purchase, click the box adjacent to Keep Debits?
7. Click Save.

Acquisitions Permissions in the Admin module

Several settings in the Library Settings area of the Admin module pertain to functions in the Acquisitions module.
You can access these settings by clicking Admin → Local Administration → Library Settings Editor.
• CAT: Delete bib if all copies are deleted via Acquisitions lineitem cancellation – If you cancel a line item, then all of the on order copies in the catalog are deleted. If, when you cancel a line item, you also want to delete the bib record, then set this setting to TRUE.
• Default circulation modifier – This modifier will be applied to items that are created in the acquisitions module.
• Default copy location – This copy location will be applied to items that are created in the acquisitions module.
• Fund Spending Limit for Block – When the amount remaining in the fund, including spent money and encumbrances, goes below this percentage, attempts to spend from the fund will be blocked.
• Fund Spending Limit for Warning – When the amount remaining in the fund, including spent money and encumbrances, goes below this percentage, attempts to spend from the fund will result in a warning to the staff.
• Temporary barcode prefix – Temporary barcode prefix for items that are created in the acquisitions module.
• Temporary call number prefix – Temporary call number prefix for items that are created in the
acquisitions module.

Chapter 19. Languages and Localization

Report errors in this documentation using Launchpad.

Enabling and Disabling Languages

Evergreen is bundled with support for a number of languages beyond American English (en-US). The translated interfaces are split between static files that are automatically installed with Evergreen and dynamic labels that can be stored in the Evergreen database. Evergreen is installed with additional SQL files that contain translated dynamic labels for a number of languages and that make the set of translated labels available in all interfaces. Only a few steps are required to enable or disable one or more languages.

Enabling a Localization

To enable the translated labels for a given language to display in Evergreen, just populate the database with the translated labels and enable the localization. The following example illustrates how to enable Canadian French (fr-CA) support in the database. These same steps can be used with any of the languages bundled with Evergreen, or you can create and add your own localization.
1. The translated labels for each locale are stored in SQL files named "950.data.seed-values-xx-YY.sql", where "xx-YY" represents the locale code for the translation. Load the translated labels into the Evergreen database using the command psql, substituting your user, host and database connection information accordingly:

$ psql -U <username> -h <hostname> -d <database> \
-f /path/to/Evergreen-source/Open-ILS/src/sql/Pg/950.data.seed-values-fr-CA.sql

2.
Ensure the locale is enabled in the Evergreen database by using the utility psql to check for the existence of the locale in the table config.i18n_locale:

SELECT code, marc_code, name, description
FROM config.i18n_locale
WHERE code = 'fr-CA';

As shown in the following example, if one row of output is returned, then the locale is already enabled:

code  | marc_code | name            | description
------+-----------+-----------------+-----------------
fr-CA | fre       | French (Canada) | Canadian French
(1 row)

If zero rows of output are returned, then the locale is not enabled:

code | marc_code | name | description
------+-----------+------+-------------
(0 rows)

To enable a locale, use psql to insert a row into the table config.i18n_locale as follows:

INSERT INTO config.i18n_locale (code, marc_code, name, description)
VALUES ('fr-CA', 'fre', 'French (Canada)', 'Canadian French');

Disabling a Localization

You might not want to offer all of the localizations that are preconfigured in Evergreen. If you choose to disable the dynamic labels for a locale, just delete those entries from the table config.i18n_locale using the psql utility:

DELETE FROM config.i18n_locale
WHERE code = 'fr-CA';

Part V. Reports

Reports are a powerful tool in Evergreen and can be used for statistical comparisons or collection maintenance. The following part covers everything dealing with reports, from starting the reporter daemon to viewing reports your library has created. The range of topics in this part is quite broad, and different chapters will be useful to different roles in an Evergreen library system.

Chapter 20. Starting and Stopping the Reporter Daemon

Report errors in this documentation using Launchpad.

Before you can view reports, the Evergreen administrator must start the reporter daemon from the command line of the Evergreen server. The reporter daemon periodically checks for requests for new reports or scheduled reports and gets them running.

Starting the Reporter Daemon

To start the reporter daemon, run the following command as the opensrf user:
clark-kent.pl --daemon
You can also specify other options:
• sleep=interval : number of seconds to sleep between checks for new reports to run; defaults to 10
• lockfile=filename : where to place the lockfile for the process; defaults to /tmp/reporter-LOCK
• concurrency=integer : number of reporter daemon processes to run; defaults to 1
• bootstrap=filename : OpenSRF bootstrap configuration file; defaults to /openils/conf/opensrf_core.xml

The open-ils.reporter process must be running and enabled on the gateway before the reporter daemon can be started. Remember that if the server is restarted, the reporter daemon will need to be restarted before you can view reports, unless you have configured your server to start the daemon automatically at start up time.

Stopping the Reporter Daemon

To stop the reporter daemon, you have to kill the process and remove the lockfile. Assuming you're running just a single process and that the lockfile is in the default location, perform the following commands as the opensrf user:
kill `ps wax | grep "Clark Kent" | grep -v grep | cut -b1-6`
rm /tmp/reporter-LOCK
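The start and stop steps above can be collected into a pair of small helper functions. This is a minimal sketch, not part of Evergreen: it assumes clark-kent.pl is on the opensrf user's PATH and uses the default lockfile location, and setting DRY_RUN=1 makes the helpers print the commands instead of executing them.

```shell
# Sketch: wrap the reporter daemon start/stop steps from this chapter.
# Assumptions: clark-kent.pl is on PATH; default lockfile location.
# With DRY_RUN=1 the commands are printed rather than executed.
LOCKFILE="${LOCKFILE:-/tmp/reporter-LOCK}"

run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

reporter_start() {
  # A leftover lockfile means the daemon is (or appears to be) running.
  if [ -e "$LOCKFILE" ]; then
    echo "reporter already running (or stale lockfile: $LOCKFILE)" >&2
    return 1
  fi
  run clark-kent.pl --daemon
}

reporter_stop() {
  # Kill the daemon process and remove its lockfile, as described above.
  run kill "$(ps wax | grep 'Clark Kent' | grep -v grep | cut -b1-6)"
  run rm -f "$LOCKFILE"
}
```

For example, running `DRY_RUN=1; reporter_start` with no lockfile present prints the clark-kent.pl command without launching anything.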
Part VII. Development

This part will allow you to customize the Evergreen OPAC, develop useful SQL queries and help you learn the skills necessary for developing new Evergreen applications. It is intended for experienced Evergreen administrators and Evergreen developers who wish to customize Evergreen or enhance their knowledge of the database structure and code. Some of these chapters are introductory in nature, but others assume some level of web development, programming, or database administration experience.

Chapter 21. Evergreen File Structure and Configuration Files

Report errors in this documentation using Launchpad.

Abstract: This section will describe the basic file structure and cover key configuration files. Understanding the directory and file structure of Evergreen will allow you to customize your Evergreen software and take full advantage of many features.

Evergreen Directory Structure

This is the top level directory structure of Evergreen located in the default installation directory /openils:

Table 21.1. Evergreen Directory Structure
Directory – Description
bin – Contains many critical Perl and shell scripts such as autogen.sh and oils.ctl.
conf – Contains the configuration scripts, including the two most important base configuration files, opensrf_core.xml and opensrf.xml.
include – Contains the header files used by the scripts written in C.
lib – Contains the core code of Evergreen, including the C code and Perl modules. In particular, the Perl modules in the subdirectory perl5/OpenILS are of particular interest to developers.
var – Largest directory; includes the web directories (web), lock pid files

Evergreen Configuration Files

Table 21.2. Key Evergreen Configuration Files
File – Description
/openils/conf/opensrf_core.xml – File which controls which Evergreen services are run on the public and private routers. For a service to run, it must be registered in this file. This file also controls the loglevel and points to the log file for the services. An Evergreen restart is required for changes to take effect.
/openils/conf/opensrf.xml – Use this file to set directory locations, the default locale, default notice settings and settings for all Evergreen services. It is critical for any administrator to understand the settings in this file. An Evergreen restart is required for changes to take effect.
/openils/conf/fm_IDL.xml – Used for linking the OpenSRF/Evergreen services to the Evergreen database tables. An Evergreen restart is required for changes to take effect. Running autogen.sh is also required.
/etc/apache2/eg_vhost.conf – Controls the Evergreen virtual site. Allows you to configure the skin for the OPAC or configure various directories within the Apache web server. An Apache restart is required for changes to this file to take effect.

Table 21.3. Useful Evergreen Scripts
File – Description
/openils/bin/autogen.sh – Used to update changes to org units and the fm_IDL.xml file. Will generate web and staff client pages based on contents of files and Evergreen database entries.
/openils/bin/clark-kent.pl – Perl script for starting the reporter.
/openils/bin/action_trigger_runner.pl – Perl script used to trigger the actions set up in the action trigger tool in the staff client.
/openils/bin/osrf_ctl.sh – The start up script for OpenSRF and Evergreen.
/openils/bin/reshelving_complete.srfsh – Changes status from “reshelving” to “available” for items which have been in reshelving for a certain amount of time.
/openils/bin/srfsh – Used to start the OpenSRF shell.

Chapter 22. Customizing the Staff Client

Report errors in this documentation using Launchpad.

This chapter will give you some guidance on customizing the staff client. The files related to the staff client are located in the directory /openils/var/web/xul/[staff client version]/server/

Changing Colors and Images

Changing Labels and Messages

You can customize labels in the staff client by editing the corresponding DTD files. The staff client uses the same lang.dtd used by the OPAC. This file is located in /openils/var/web/opac/locale/[your locale]. Other labels are controlled by the staff client specific lang.dtd file in /openils/var/web/xul/[staff client version]/server/locale/[your locale]/.

Changing the Search Skin

There are a few ways to change the custom skin for OPAC searching in the staff client.
Changing the Search Skin on Server – Overriding Local Settings

To change the OPAC search skins used by the staff client, create a file named custom.js and place it in the /openils/var/web/xul/[staff client version]/server/skin/ directory. This will affect all staff clients, since these settings override local settings.
For example, the following text in custom.js would set the staff client OPAC, details page, results page and browse function to the craftsman skin:

urls['opac'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1';
urls['opac_rdetail'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml';
urls['opac_rresult'] = '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml';
urls['browser'] = '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1';

Restart the staff client to see the changes.

Changing the Search Skin on an Individual Machine

To change the search skin on an individual machine for personal preferences or needs, edit the file /[Evergreen staff client path]/build/chrome/content/main/constants.js. Find the lines which point to the urls for the OPAC and edit accordingly. For example, here is an example to set the opac, details page and results page to the craftsman skin:

'opac' : '/opac/' + LOCALE + '/skin/craftsman/xml/advanced.xml?nps=1',
'opac_rdetail' : '/opac/' + LOCALE + '/skin/craftsman/xml/rdetail.xml',
'opac_rresult' : '/opac/' + LOCALE + '/skin/craftsman/xml/rresult.xml',

Chapter 23. Customizing the OPAC

Report errors in this documentation using Launchpad.

While Evergreen is ready to go out of the box, libraries will want to customize Evergreen with their own color scheme, logos and layout. This chapter will explain how to customize Evergreen to meet the needs of your users. For these tasks some knowledge of HTML and CSS is required. Many of these instructions assume an installation of Evergreen using the default file locations.
Be sure to save a backup copy of all files you edit in a location other than /openils/var/web/opac/, as files here could be overwritten when you upgrade your copy of Evergreen.

Change the Color Scheme

To change the color scheme of the default Evergreen skin, edit /openils/var/web/opac/theme/default/css/colors.css. From this one file you can change the four base colors as well as the colors of specific elements.
You can also create alternate themes for your users.
1. Copy the css folder and its contents from the example alternate theme /openils/var/web/opac/theme/reddish/ to a new folder /openils/var/web/opac/theme/[your new theme]/.
2. Edit /openils/var/web/opac/theme/[your new theme]/css/colors.css to use the colors you want.
3. Link to your new style sheet by adding the following to /openils/var/web/opac/skin/default/xml/common/css_common.xml:

<link type='text/css' rel="alternate stylesheet" title='&opac.style.yourtheme;'
href="<!--#echo var='OILS_THEME_BASE'-->/yourtheme/css/colors.css"
name='Default' csstype='color'/>

4.
Give your new theme a name users can select by adding the following to /openils/var/web/opac/locale/[your locale]/opac.dtd:
<!ENTITY opac.style.yourtheme "YourTheme">

Customizing OPAC Text and Labels

To change text and links used throughout the OPAC, edit the following files:
• /openils/var/web/opac/locale/[your locale]/lang.dtd
• /openils/var/web/opac/locale/[your locale]/opac.dtd

A better way to customize OPAC text is to create custom DTD files for your lang and opac customizations and then add an include statement above the default DTD files:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [

While it is possible to add text to the XML files themselves, it is a good practice to use the DTD file to control the text and refer to the DTD elements in the XML/HTML code. For example, the footer.xml file has this code to generate a copyright statement:

<div id='copyright_text'>
<span>&footer.copyright;</span>

The included opac.dtd file in the en-US locale directory has this setting for the &footer.copyright text:
<!ENTITY footer.copyright "Copyright © 2006-2010 Georgia Public Library Service, and others">

Logo Images

To change the logos used by default to your own logos, replace the following files with images of your own, appropriately sized.
• Large main logo: /openils/var/web/opac/images/main_logo.jpg
• Small logo: /openils/var/web/opac/images/small_logo.jpg

Added Content

By default Evergreen includes customizable “Added Content” features to enhance the OPAC experience for your users. These features include Amazon book covers and Google Books searching. These features can be turned off or customized.

Book Covers

The default install of Evergreen includes Amazon book covers. The settings for this are controlled by the <added_content> section of /openils/conf/opensrf.xml. Here are the key elements of this configuration:

<module>OpenILS::WWW::AddedContent::Amazon</module>
This calls the Amazon Perl module. If you wish to link to a book cover service other than Amazon, you must create a new Perl module and refer to it here. You will also need to change other settings accordingly. Some book cover Perl modules are available in trunk.

<base_url>http://images.amazon.com/images/P/</base_url>
Base URL for Amazon added content fetching. This URL may need to be shortened when new (read: non-image) content fetching capabilities are added.

<timeout>1</timeout>
Max number of seconds to wait for an added content request to return data. Data not returned within the timeout is considered a failure.

<retry_timeout>600</retry_timeout>
After added content lookups have been disabled due to too many lookup failures, this is the amount of time to wait before trying again.

<max_errors>15</max_errors>
Maximum number of consecutive lookup errors a given process can accumulate before added content lookups are disabled for everyone.
<userid>MY_USER_ID</userid>
If a userid is required to access the added content, supply it here.

Google Books Link

The results page will display a Browse in Google Books Search link for items in the results page which have corresponding entries in Google Books. This will link to Google Books content, including table of contents and complete versions of the work if it exists in Google Books. Items not in Google Books will not display the link. The Google Books link setting is controlled in /openils/var/web/opac/skin/default/js/result_common.js. By default, this feature is activated.

Syndetics

Syndetics is another option for added content. Here is an example of using Syndetics as your added content provider:

<!-- We're using Syndetics -->
<module>OpenILS::WWW::AddedContent::Syndetic</module>

</added_content>

Syndetics is a fee based service. For details, visit: http://www.bowker.com/syndetics/

Customizing the Results Page

The results page is extremely customizable: some built in features can be activated with simple edits, and more advanced customizations can be done by more experienced web developers.
There are several critical files to edit if you wish to customize the results page:
• /openils/var/web/opac/skin/default/js/result_common.js – This file controls the JavaScript for the top level elements on the results page and should only be edited by experienced web developers, except for the Google Books link setting mentioned previously.
• /openils/var/web/opac/skin/default/js/rresult.js – Has some good controls of results page settings at the top of the file, but requires web development skills to edit.
• /openils/var/web/opac/skin/default/xml/result/rresult_table.xml – This controls the layout of the items table on the results page.

Customizing the Details Page

There are many options when customizing the details page in Evergreen. The default settings are effective for most libraries, but it is important to understand the full potential of Evergreen when displaying the details of items. Some quick features can be turned on and off by changing variable values in the file /openils/var/web/opac/skin/default/js/rdetail.js.
The /openils/var/web/opac/skin/default/xml/rdetail/rdetail_copyinfo.xml file allows you to format the display of the copy information.
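Because all of the files above live under /openils/var/web/opac/ and can be overwritten when you upgrade Evergreen, it is worth keeping dated copies of every file you customize, as recommended earlier in this chapter. A minimal sketch follows; the backup location is an arbitrary choice, not an Evergreen path:

```shell
# Copy a file into a dated backup tree outside /openils/var/web/opac/
# before editing it. The destination mirrors the original path so files
# are easy to restore. BACKUP_ROOT is an arbitrary, non-Evergreen choice.
BACKUP_ROOT="${BACKUP_ROOT:-$HOME/opac-backups}"

backup_file() {
  src="$1"
  stamp=$(date +%Y%m%d)
  dest="$BACKUP_ROOT/$stamp$src"   # e.g. ~/opac-backups/20110327/openils/var/...
  mkdir -p "$(dirname "$dest")"
  cp -p "$src" "$dest"
  echo "backed up $src -> $dest"
}

# Typical candidates from this chapter:
# backup_file /openils/var/web/opac/skin/default/js/rresult.js
# backup_file /openils/var/web/opac/skin/default/xml/rdetail/rdetail_copyinfo.xml
```

Run `backup_file` on each file before you touch it; restoring is a plain copy back from the dated tree.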
BibTemplate

BibTemplate is an Evergreen-custom Dojo module which can be used to retrieve and format XML data served by the Evergreen unAPI service. unAPI is a protocol for requesting known objects in specific formats, and Evergreen uses this to supply data – bibliographic records, metarecords, monograph holdings information, Located URIs, and more to come – in many different formats, from MARCXML to MODS to custom XML applications.
Managing the display of information from raw XML can be difficult, and the purpose of BibTemplate is to make this simpler, as well as to move the display closer to the client and away from the source data. This is good from a separation-of-responsibilities perspective, and also makes it easier to contain and control local customization.
BibTemplate supports the following Evergreen metadata formats:
• MARCXML – datatype='marcxml-full' (default)
• MODS 3.3 – datatype='mods33'
• Dublin Core – datatype='rdf_dc'
• FGDC – datatype='fgdc'

HTML API

BibTemplate follows the Dojo convention of adding attributes to existing (X)HTML in order to progressively change its behavior. The 1.6.0 HTML API consists of a set of attributes that are added to existing OPAC markup, and they fall into two classes:
• The slot marker – Elements that denote the location of bibliographic data to insert.
• The slot formatter – Elements that specify how the named data should be formatted for display.
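Before writing slot markers it can help to look at the raw XML the unAPI service returns for a record, since that is what the CSS3 selectors will match against. The sketch below only prints a curl command for review: the host name and record id are placeholders, and the `tag:open-ils.org:biblio-record_entry` id syntax is an assumption that may differ between Evergreen versions.

```shell
# Sketch: print a curl command that would fetch record XML from the
# Evergreen unAPI service in a chosen format, for manual inspection.
# EG_HOST and EG_RECORD are placeholders; the tag: id syntax is an
# assumption and may vary between Evergreen versions.
EG_HOST="${EG_HOST:-example-evergreen.org}"
EG_RECORD="${EG_RECORD:-1}"
EG_FORMAT="${EG_FORMAT:-mods33}"   # or marcxml-full, rdf_dc, fgdc

unapi_url() {
  echo "http://$EG_HOST/opac/extras/unapi?id=tag:open-ils.org:biblio-record_entry/$EG_RECORD&format=$EG_FORMAT"
}

# Print, rather than run, the fetch command so it can be reviewed first.
echo "curl -s '$(unapi_url)'"
```

Paste the printed command into a shell (with your real host and record id) to see the XML that a query selector such as datafield[tag=245] would be applied to.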
Slot Marker

A slot marker is any displayable HTML element that has a type attribute with a value starting with opac/slot-data. This element will become the container for the formatted data. A slot marker is required in order to retrieve, format and display data using BibTemplate. A slot marker must also have an attribute called query containing a CSS3 selector. This selector is applied to the XML returned by the unAPI service in order to gather the specific XML nodes that should be considered for formatting.
The slot marker can also specify the format of the data to be returned from the unAPI service. This can be specified by adding +{format} to the type attribute, as in opac/slot-data+mods33-full. The default data format is marcxml-uri, which is an augmented MARCXML record containing Located URI information and unAPI links.
Example of a slot marker:
<p type='opac/slot-data' query='datafield[tag=245]'></p>
The most useful attribute match operators include:
• datafield[tag=245] – exact match
• datafield[tag^=65] – match start of value
Selectors always narrow, so select broadly and iterate through the NodeList.

Slot Formatter

A slot formatter is any invisible HTML element which has a type attribute with the value opac/slot-format. (NOTE: before 1.6.0.4, only <script> elements were supported, though this restriction is now removed to support Internet Explorer.) Only one slot formatter element is allowed in each slot.
The text contents of this element are wrapped in a JavaScript function and run for each node returned by the query CSS3 selector specified on the slot marker. This function is passed one argument, called item, which is an XML node captured by the selector. The function should return HTML text. The output for all runs of the slot formatter is concatenated into a single string and used to replace the contents of the slot marker.
The slot formatter is optional; if it is not supplied, BibTemplate will create a simple function which extracts and returns the text content of the XML nodes specified in the CSS3 selector.
Example of a slot formatter:

<td class='rdetail_item' id='rdetail_online' type='opac/slot-data'
query='volumes volume uris uri' join=", ">
<script type='opac/slot-format'><![CDATA[

</td>

JavaScript API

In order for BibTemplate to find the slot markers and invoke the slot formatters, a JavaScript renderer must be instantiated and called. This must be done for each record that is to contribute to a page's display. The API for this is simple and straightforward:

dojo.require('openils.BibTemplate'); // Tell Dojo to load BibTemplate, if it is not already loaded

// Create a renderer supplying the record id and the short name of the org unit, if known,
new openils.BibTemplate({ record : new CGI().param('r'), org_unit : here.shortname() }).render();

The argument hash supplied to the new openils.BibTemplate() constructor can have the following properties:
• record – The bibliographic record ID.
• org_unit – The relevant Organizational Unit, used to restrict holdings scope as on a search result or record detail page.
• root – The root element within the web page that BibTemplate should search for slot markers.

BibTemplate Examples

This is all that we had to add to display the contents of an arbitrary MARC field:

<tr>
<td>Bibliography note</td>
<td type='opac/slot-data' query='datafield[tag=504]'></td>
</tr>

If multiple fields match, they are displayed on consecutive lines within the same left-hand cell.
To display a specific MARC subfield, add that subfield to the query attribute.
For example, subfield $a is the only user-oriented subfield in field 586 (Awards Note) - + <tr> <td>Awards note</td> <td type='opac/slot-data' query='datafield[tag=586] subfield[code=a]'></td> </tr> Hide empty rows by default, and display them only if they have content: - + <tr class='hide_me' id='tag504'> <td>Bibliographic note</td> <td type='opac/slot-data' query='datafield[tag=504]'> @@ -7833,11 +9045,11 @@ name='Default' csstype='color'/> ]]></script> </td></tr> - •<![CDATA[ ... ]]> tells Evergreen Web server to treat the contents as literal “character data” - - avoids hilarity of entity substitution•<script type='opac/slot-format'>...</script>, contained within an “opac/slot-data” element, receives a variable named item + •<![CDATA[ ... ]]> tells Evergreen Web server to treat the contents as literal “character data” - + avoids hilarity of entity substitution•<script type='opac/slot-format'>...</script>, contained within an “opac/slot-data” element, receives a variable named item containing the results of the query (a NodeList) Suppressing a subfield: - + <tr class='hide_me' id='tag700'> <td>Additional authors</td> <td type='opac/slot-data' query='datafield[tag=700]'> @@ -7862,13 +9074,13 @@ name='Default' csstype='color'/> start.html is the basic search display and advanced.html is the display for the advanced search option. By default, the Slimpac files include the same locale dtd as the regular OPAC (opac.dtd). However, the slimpac files do not use the same CSS files as the regular OPAC which means that if you change the OPAC color scheme, you must also edit the Slimpac files. - Customizing the Slimpac Results DisplayCustomizing the Slimpac Results Display + Customizing the Slimpac Results DisplayCustomizing the Slimpac Results Display Two files control the display results for the slimpac. Edit the XSL stylesheet (/openils/var/xsl/ATOM2XHTML.xsl) to edit the elements of the record as pulled from the XML output. 
You may also change the style of the page by editing the CSS stylesheet for the results display (/openils/var/web/opac/extras/os.css). - Customizing the Slimpac Details/Holdings DisplayCustomizing the Slimpac Details/Holdings Display + Customizing the Slimpac Details/Holdings DisplayCustomizing the Slimpac Details/Holdings Display It is also possible to customize the details page when viewing specific items from the results list. To edit the holdings display which contains the details of the specific record linked from the results display, edit the CSS stylesheet for the holdings/details page @@ -7880,7 +9092,7 @@ name='Default' csstype='color'/> It is possible to embed a simple search form into an html page which will allow users to search for materials in your Evergreen catalog. Here is code which can be embedded anywhere in the body of your web page: - + <form action="http://[domain name]/opac/[locale]/skin/default/xml/rresult.xml" method="get"> <div> Quick Catalog Search:<br /> @@ -7897,14 +9109,14 @@ name='Default' csstype='color'/> Replace [domain name] with the domain name of your Evergreen server and replace [locale] with the desired locale of - your Evergreen instance (e.g. en-US). This does a basic keyword search. Different types of searches and more advanced search forms can be developed. For further information on the url parameters used by Evergreen, see ??? for more details. + your Evergreen instance (e.g. en-US). This does a basic keyword search. Different types of searches and more advanced search forms can be developed. - Chapter 21. OpenSRFChapter 21. OpenSRF + Chapter 24. OpenSRFChapter 24. OpenSRF Report errors in this documentation using Launchpad. - Chapter 21. OpenSRF + Chapter 24. OpenSRF Report any errors in this documentation using Launchpad. - Chapter 21. OpenSRFChapter 21. OpenSRF + Chapter 24. OpenSRFChapter 24. 
OpenSRF One of the claimed advantages of @@ -7916,7 +9128,7 @@ name='Default' csstype='color'/> Introducing OpenSRFIntroducing OpenSRF - + OpenSRF is a message routing network that offers scalability and failover support for individual services and entire servers with minimal development and deployment overhead. You can use OpenSRF to build loosely-coupled applications @@ -7964,29 +9176,29 @@ name='Default' csstype='color'/> language. Writing an OpenSRF binding for another language is a relatively small task if that language offers libraries that support the core technologies on which OpenSRF depends: - • + • Extensible Messaging and Presence Protocol (XMPP, sometimes referred to as Jabber) - provides the base messaging infrastructure between OpenSRF clients and services - - • + + • JavaScript Object Notation (JSON) - serializes the content of each XMPP message in a standardized and concise format - • + • memcached - provides the caching service - - • + + • syslog - the standard UNIX logging service - + Unfortunately, the OpenSRF @@ -7998,7 +9210,7 @@ name='Default' csstype='color'/> The recommended option for would-be developers of another language binding is to use the Python implementation as the cleanest basis for a port to another language. - + Writing an OpenSRF ServiceWriting an OpenSRF Service @@ -8017,19 +9229,19 @@ name='Default' csstype='color'/> arguments and results are typical core language data types: strings, numbers, booleans, arrays, and hashes. To implement a new OpenSRF service, perform the following steps: - 1. + 1. Include the base OpenSRF support libraries - 2. + 2. Write the code for each of your OpenSRF methods as separate procedures - 3. + 3. Register each method - 4. + 4. Add the service definition to the OpenSRF configuration files @@ -8037,7 +9249,7 @@ name='Default' csstype='color'/> For example, the following code implements an OpenSRF service. 
The service includes one method named opensrf.simple-text.reverse() that accepts one string as input and returns the reversed version of that string: - + #!/usr/bin/perl package OpenSRF::Application::Demo::SimpleText; @@ -8071,13 +9283,13 @@ __PACKAGE__->register_method( Registering a service with the OpenSRF configuration filesRegistering a service with the OpenSRF configuration files Two files control most of the configuration for OpenSRF: - • + • opensrf.xml contains the configuration for the service itself, as well as a list of which application servers in your OpenSRF cluster should start the service. - • + • opensrf_core.xml (often referred to as the "bootstrap configuration" file) contains the OpenSRF networking information, including the XMPP server @@ -8085,13 +9297,13 @@ __PACKAGE__->register_method( this for a new service if the new service needs to be accessible via the public router. - + Begin by defining the service itself in opensrf.xml. To register the opensrf.simple-text service, add the following section to the <apps> element (corresponding to the XPath /opensrf/default/apps/): - - + + <apps> <opensrf.simple-text> <keepalive>3</keepalive> @@ -8114,17 +9326,17 @@ __PACKAGE__->register_method( <!-- other OpenSRF services registered here... --> </apps> - + The element name is the name that the OpenSRF control scripts use to refer to the service. - + The <keepalive> element specifies the interval (in seconds) between checks to determine if the service is still running. - + The <stateless> element specifies whether OpenSRF clients can call methods from this service without first having to create a connection to a @@ -8132,61 +9344,61 @@ __PACKAGE__->register_method( the client can simply issue a request and the router will forward the request to an available service and the result will be returned directly to the client. - + The <language> element specifies the programming language in which the service is implemented. 
- 
+ 
 The <implementation> element specifies the name of the library or module
 in which the service is implemented.
- 
+ 
 (C implementations only): The <max_requests> element, as a direct child of
 the service element name, specifies the maximum number
 of requests a process serves before it is killed and replaced by a new
 process.
- 
+ 
 (Perl implementations only): The <max_requests> element, as a direct
 child of the <unix_config> element, specifies the maximum number
 of requests a process serves before it is killed and replaced by a new
 process.
- 
+ 
 The <unix_log> element specifies the name of the log file for
 language-specific log messages such as syntax warnings.
- 
+ 
 The <unix_sock> element specifies the name of the UNIX socket used for
 inter-process communications.
- 
+ 
 The <unix_pid> element specifies the name of the PID file for the
 master process for the service.
- 
+ 
 The <min_children> element specifies the minimum number of child
 processes that should be running at any given time.
- 
+ 
 The <max_children> element specifies the maximum number of child
 processes that should be running at any given time.
- 
+ 
 The <min_spare_children> element specifies the minimum number of idle
 child processes that should be available to handle incoming requests. If
 there are fewer than this number of spare child processes, new processes
 will be spawned.
- 
+ 
 The <max_spare_children> element specifies the maximum number of idle
 child processes that should be available to handle incoming requests. If there
@@ -8198,7 +9410,7 @@ __PACKAGE__->register_method(
 edit the opensrf_core.xml configuration file to add the service to the list
 of publicly accessible services:
 Making a service publicly accessible in opensrf_core.xml. 
- 
+ 
<router>
<!-- This is the public router.
On this router, we only register applications which should be accessible to everyone on the opensrf network --> @@ -8211,17 +9423,17 @@ __PACKAGE__->register_method( </router> - + This section of the opensrf_core.xml file is located at XPath /config/opensrf/routers/. - + public.localhost is the canonical public router domain in the OpenSRF installation instructions. - + Each <service> element contained in the <services> element offers their services via the public router as well as the private router. @@ -8232,7 +9444,7 @@ __PACKAGE__->register_method( Calling an OpenSRF methodCalling an OpenSRF method - + OpenSRF clients in any supported language can invoke OpenSRF services in any supported language. So let’s see a few examples of how we can call our fancy new opensrf.simple-text.reverse() method: @@ -8246,7 +9458,7 @@ __PACKAGE__->register_method( The following example calls the opensrf.simple-text.reverse method of the opensrf.simple-text OpenSRF service, passing the string "foobar" as the only method argument: - + $ srfsh srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar" @@ -8271,12 +9483,12 @@ Request Time in seconds: 0.016718 command, srfsh lists all of the methods offered by the service. If you pass a partial method name, srfsh lists all of the methods that match that portion of the method name. - The quality and availability of the descriptive information for each + The quality and availability of the descriptive information for each method depends on the developer to register the method with complete and accurate information. The quality varies across the set of OpenSRF and Evergreen APIs, although some effort is being put towards improving the state of the internal documentation. - + srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse" --> opensrf.simple-text @@ -8309,43 +9521,43 @@ Received Data: { } } - + stream denotes whether the method supports streaming responses or not. 
- 
+ 
 package identifies which package or library implements the
 method.
- 
+ 
 api_name identifies the name of the OpenSRF method.
- 
+ 
 signature is a hash that describes the parameters for the
 method.
- 
+ 
 params is an array of hashes describing each parameter in the
 method; each parameter has a description (desc), name (name), and type (type).
- 
+ 
 desc is a string that describes the method itself.
- 
+ 
 return is a hash that describes the return value for the
 method; it contains a description of the return value (desc) and the type of the returned
 value (type).
- 
+ 
 method identifies the name of the function or method in the
 source implementation.
- 
+ 
 argc is an integer describing the minimum number of
 arguments that must be passed to this method.
@@ -8356,7 +9568,7 @@ Received Data: {
 To call an OpenSRF method from Perl, you must connect to the OpenSRF
 service, issue the request to the method, and then retrieve the results.
- 
+ 
#!/usr/bin/perl
use strict;
use OpenSRF::AppSession;
@@ -8395,20 +9607,20 @@ print "Word count: " . $result->{'word_count'} . "\n";
$session->disconnect();

- 
+ 
 The OpenSRF::System->bootstrap_client() method reads the OpenSRF
 configuration information from the indicated file and creates an XMPP client
 connection based on that information.
- 
+ 
 The OpenSRF::AppSession->create() method accepts one argument - the name
 of the OpenSRF service to which you want to make one or more requests -
 and returns an object prepared to use the client connection to make those
 requests.
- 
+ 
 The OpenSRF::AppSession->request() method accepts a minimum of one
 argument - the name of the OpenSRF method to which you want to make a request -
 followed by zero or more arguments to pass to the OpenSRF method as input
 values. In this example, we call the
 opensrf.simple-text.substring method defined by the
 opensrf.simple-text OpenSRF service.
- 
+ 
 The gather() method, called on the result object returned by the
 request() method, iterates over all of the possible results from the result
 object and returns a single variable.
- + This request() call passes two strings to the opensrf.simple-text.split method defined by the opensrf.simple-text OpenSRF service and returns (via gather()) a reference to an array of results. - + The opensrf.simple-text.split() method is a streaming method that returns an array of results with one element per recv() call on the @@ -8437,25 +9649,25 @@ $session->disconnect(); - + While the gather() convenience method returns only the content of the complete set of results for a given request, the recv() method returns an OpenSRF result object with status, statusCode, and content fields as we saw in the HTTP results example. - + This request() call passes an array to the opensrf.simple-text.statistics method defined by the opensrf.simple-text OpenSRF service. - + The result object returns a hash reference via gather(). The hash contains the length and word_count keys we defined in the method. - + The OpenSRF::AppSession->disconnect() method closes the XMPP client connection and cleans up resources associated with the session. @@ -8473,7 +9685,7 @@ $session->disconnect(); Basic text splitting method.  - + sub text_split { my $self = shift; my $conn = shift; @@ -8506,7 +9718,7 @@ __PACKAGE__->register_method( relationships to other objects, the permissions required to create, read, update, or delete objects of that type, and the database table or view on which they are based. - + The Evergreen fieldmapper offers a great deal of convenience for working with complex system objects beyond the basic mapping of classes to database schemas. Although the result is passed over the wire as a JSON object @@ -8519,7 +9731,7 @@ __PACKAGE__->register_method( configuration file and dynamically register OpenSRF methods for creating, reading, updating, and deleting all of the defined classes. Example fieldmapper class definition for "Open User Summary".  
- + <class id="mous" controller="open-ils.cstore open-ils.pcrud" oils_obj:fieldmapper="money::open_user_summary" oils_persist:tablename="money.open_usr_summary" @@ -8543,11 +9755,11 @@ __PACKAGE__->register_method( </class> - + The <class> element defines the class: - • + • The id attribute defines the class hint that identifies the class both elsewhere in the fieldmapper configuration file, such as in the value of the @@ -8555,26 +9767,26 @@ __PACKAGE__->register_method( it is instantiated. For example, an "Open User Summary" JSON object would have the top level property of "__c":"mous". - • + • The controller attribute identifies the services that have direct access to this class. If open-ils.pcrud is not listed, for example, then there is no means to directly access members of this class through a public service. - • + • The oils_obj:fieldmapper attribute defines the name of the Perl fieldmapper class that will be dynamically generated to provide setter and getter methods for instances of the class. - • + • The oils_persist:tablename attribute identifies the schema name and table name of the database table that stores the data that represents the instances of this class. In this case, the schema is money and the table is open_usr_summary. - • + • The reporter:label attribute defines a human-readable name for the class used in the reporting interface to identify the class. These names are defined @@ -8582,16 +9794,16 @@ __PACKAGE__->register_method( so that they can be translated and served in the user’s language of choice. - + The <fields> element lists all of the fields that belong to the object. - • + • The oils_persist:primary attribute identifies the field that acts as the primary key for the object; in this case, the field with the name usr. - • + • The oils_persist:sequence attribute identifies the sequence object (if any) in this database provides values for new instances of this class. 
In @@ -8599,51 +9811,51 @@ __PACKAGE__->register_method( table, so no sequence is used to populate these instances. - + Each <field> element defines a single field with the following attributes: - • + • The name attribute identifies the column name of the field in the underlying database table as well as providing a name for the setter / getter method that can be invoked in the JSON or native version of the object. - • + • The reporter:datatype attribute defines how the reporter should treat the contents of the field for the purposes of querying and display. - • + • The reporter:label attribute can be used to provide a human-readable name for each field; without it, the reporter falls back to the value of the name attribute. - + The <links> element contains a set of zero or more <link> elements, each of which defines a relationship between the class being described and another class. - • + • The field attribute identifies the field named in this class that links to the external class. - • + • The reltype attribute identifies the kind of relationship between the classes; in the case of has_a, each value in the usr field is guaranteed to have a corresponding value in the external class. - • + • The key attribute identifies the name of the field in the external class to which this field links. - • + • The rarely-used map attribute identifies a second class to which the external class links; it enables this field to define a direct @@ -8651,29 +9863,29 @@ __PACKAGE__->register_method( avoid having to retrieve all of the linked members of an intermediate class just to retrieve the instances from the actual desired target class. - • + • The class attribute identifies the external class to which this field links. - + The <permacrud> element defines the permissions that must have been granted to a user to operate on instances of this class. 
- + The <retrieve> element is one of four possible children of the <actions> element that define the permissions required for each action: create, retrieve, update, and delete. - • + • The permission attribute identifies the name of the permission that must have been granted to the user to perform the action. - • + • The contextfield attribute, if it exists, defines the field in this class that identifies the library within the system for which the user must have @@ -8682,7 +9894,7 @@ __PACKAGE__->register_method( at that library. - + The rarely-used <context> element identifies a linked field (link attribute) in this class which links to an external class that holds the field @@ -8728,7 +9940,7 @@ __PACKAGE__->register_method( In the following example, the text splitting method has been reimplemented to support streaming; very few changes are required: Text splitting method - streaming mode.  - + sub text_split { my $self = shift; my $conn = shift; @@ -8749,13 +9961,13 @@ __PACKAGE__->register_method( ); - + Rather than returning a reference to the array, a streaming method loops over the contents of the array and invokes the respond() method of the connection object on each element of the array. - + Registering the method as a streaming method instructs OpenSRF to also register an atomic variant (opensrf.simple-text.split.atomic). @@ -8776,7 +9988,7 @@ __PACKAGE__->register_method( You can include many calls to the OpenSRF logger; only those that are higher than your configured logging level will actually hit the log. The following example exercises all of the available logging levels in OpenSRF: - + use OpenSRF::Utils::Logger; my $logger = OpenSRF::Utils::Logger; # some code in some function @@ -8792,7 +10004,7 @@ my $logger = OpenSRF::Utils::Logger; statements on a system running at the default logging level of INFO, you will only see the INFO, WARN, and ERR messages, as follows: Results of logging calls at the default level of INFO.  
- + [2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:] [2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:] [2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:] @@ -8801,7 +10013,7 @@ my $logger = OpenSRF::Utils::Logger; If you then increase the the logging level to INTERNAL (5), the logs will contain much more information, as follows: Results of logging calls at the default level of INTERNAL.  - + [2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:] [2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:] [2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:] @@ -8826,7 +10038,7 @@ my $logger = OpenSRF::Utils::Logger; Caching results: one secret of scalabilityCaching results: one secret of scalability - + If you have ever used an application that depends on a remote Web service outside of your control—say, if you need to retrieve results from a microblogging service—you know the pain of latency and dependability (or the @@ -8837,7 +10049,7 @@ my $logger = OpenSRF::Utils::Logger; by sleeping for 10 seconds the first time it receives a given cache key and cannot retrieve a corresponding value from the cache: Simple caching OpenSRF service.  - + use OpenSRF::Utils::Cache; sub test_cache { my $self = shift; @@ -8857,40 +10069,40 @@ sub test_cache { } - + The OpenSRF::Utils::Cache module provides access to the built-in caching support in OpenSRF. - + The constructor for the cache object accepts a single argument to define the cache type for the object. Each cache type can use a separate memcache server to keep the caches separated. Most Evergreen services use the global cache, while the anon cache is used for Web sessions. - + The cache key is simply a string that uniquely identifies the value you want to store or retrieve. This line creates a cache key based on the OpenSRF method name and request input value. 
- + The get_cache() method checks to see if the cache key already exists. If a matching key is found, the service immediately returns the stored value. - + If the cache key does not exist, the code sleeps for 10 seconds to simulate a call to a slow remote Web service or an intensive process. - + The $cache_timeout variable represents a value for the lifetime of the cache key in seconds. - + After the code retrieves its value (or, in the case of this example, finishes sleeping), it creates the cache entry by calling the put_cache() @@ -8921,7 +10133,7 @@ sub test_cache { The settings for OpenSRF services are maintained in the opensrf.xml XML configuration file. The structure of the XML document consists of a root element <opensrf> containing two child elements: - • + • The <default> element contains an <apps> element describing all OpenSRF services running on this system—see the section called “Registering a service with the OpenSRF configuration files” --, as @@ -8929,7 +10141,7 @@ sub test_cache { purposes. For example, Evergreen uses this section for email notification and inter-library patron privacy settings. - • + • The <hosts> element contains one element per host that participates in this OpenSRF system. Each host element must include an <activeapps> element @@ -8939,18 +10151,18 @@ sub test_cache { OpenSRF includes a service named opensrf.settings to provide distributed cached access to the configuration settings with a simple API: - • + • opensrf.settings.default_config.get accepts zero arguments and returns the complete set of default settings as a JSON document. - • + • opensrf.settings.host_config.get accepts one argument (hostname) and returns the complete set of settings, as customized for that hostname, as a JSON document. 
- • + • opensrf.settings.xpath.get accepts one argument (an XPath expression) and returns the portion of @@ -8974,7 +10186,7 @@ sub test_cache { OpenSRF Communication FlowsOpenSRF Communication Flows - + Now that you have seen that it truly is easy to create an OpenSRF service, we can take a look at what is going on under the covers to make all of this work for you. @@ -9006,7 +10218,7 @@ sub test_cache { OpenSRF communication flows over XMPPOpenSRF communication flows over XMPP - + In a minimal OpenSRF deployment, two XMPP users named "router" connect to the XMPP server, with one connected to the private XMPP domain and one connected to the public XMPP domain. Similarly, two XMPP users named "opensrf" connect to @@ -9046,7 +10258,7 @@ sub test_cache { OpenSRF communication flows over HTTPOpenSRF communication flows over HTTP - + In some contexts, access to a full XMPP client is not a practical option. For example, while XMPP clients have been implemented in JavaScript, you might be concerned about browser compatibility and processing overhead - or you might @@ -9058,7 +10270,7 @@ sub test_cache { OpenSRF requests and returns OpenSRF results as HTTP results to the initiating HTTP client. Issuing an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator.  - + # curl request broken up over multiple lines for legibility curl -H "X-OpenSRF-service: opensrf.simple-text" --data 'osrf-msg=[ \ @@ -9070,52 +10282,52 @@ curl -H "X-OpenSRF-service: opensrf.simple-text" - + The X-OpenSRF-service header identifies the OpenSRF service of interest. - + The POST request consists of a single parameter, the osrf-msg value, which contains a JSON array. - + The first object is an OpenSRF message ("__c":"osrfMessage") with a set of parameters ("__p":{}). - • + • The identifier for the request ("threadTrace":0); this value is echoed back in the result. - • + • The message type ("type":"REQUEST"). 
- • + • The locale for the message; if the OpenSRF method is locale-sensitive, it can check the locale for each OpenSRF request and return different information depending on the locale. - • + • The payload of the message ("payload":{}) containing the OpenSRF method request ("__c":"osrfMethod") and its parameters ("__p:"{}). - • + • The method name for the request ("method":"opensrf.simple-text.reverse"). - • + • A set of JSON parameters to pass to the method ("params":["foobar"]); in this case, a single string "foobar". - + The URL on which the OpenSRF HTTP translator is listening, /osrf-http-translator is the default location in the Apache example @@ -9123,7 +10335,7 @@ http://localhost/osrf-http-translator Results from an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator.  - + # HTTP response broken up over multiple lines for legibility [{"__c":"osrfMessage","__p": {"threadTrace":0, "payload": @@ -9141,63 +10353,63 @@ http://localhost/osrf-http-translator - + The OpenSRF HTTP Translator returns an array of JSON objects in its response. Each object in the response is an OpenSRF message ("__c":"osrfMessage") with a collection of response parameters ("__p":). - + The OpenSRF message identifier ("threadTrace":0) confirms that this message is in response to the request matching the same identifier. - + The message includes a payload JSON object ("payload":) with an OpenSRF result for the request ("__c":"osrfResult"). - + The result includes a status indicator string ("status":"OK"), the content of the result response - in this case, a single string "raboof" ("content":"raboof") - and an integer status code for the request ("statusCode":200). - + The message also includes the message type ("type":"RESULT") and the message locale ("locale":"en-CA"). - + The second message in the set of results from the response. - + Again, the message identifier confirms that this message is in response to a particular request. 
- + The payload of the message denotes that this message is an OpenSRF connection status message ("__c":"osrfConnectStatus"), with some information about the particular OpenSRF connection that was used for this request. - + The response parameters for an OpenSRF connection status message include a verbose status ("status":"Request Complete") and an integer status code for the connection status (`"statusCode":205). - + The message also includes the message type ("type":"RESULT") and the message locale ("locale":"en-CA"). - Before adding a new public OpenSRF service, ensure that it does + Before adding a new public OpenSRF service, ensure that it does not introduce privilege escalation or unchecked access to data. For example, the Evergreen open-ils.cstore private service is an object-relational mapper that provides read and write access to the entire Evergreen database, so it @@ -9240,7 +10452,7 @@ http://localhost/osrf-http-translator A request for opensrf.simple-text.reverse("foobar"):  - + <message from='router@private.localhost/opensrf.simple-text' to='opensrf@private.localhost/opensrf.simple-text_listener_at_localhost_6275' router_from='opensrf@private.localhost/_karmic_126678.3719_6288' @@ -9262,7 +10474,7 @@ http://localhost/osrf-http-translator A response from opensrf.simple-text.reverse("foobar").  - + <message from='opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285' to='opensrf@private.localhost/_karmic_126678.3719_6288' router_command='' router_class='' osrf_xid='' @@ -9298,63 +10510,63 @@ http://localhost/osrf-http-translator __PACKAGE__->register_method(): - • + • The method member specifies the name of the procedure in this module that is being registered as an OpenSRF method. - • + • The api_name member specifies the invocable name of the OpenSRF method; by convention, the OpenSRF service name is used as the prefix. 
- • + • The optional api_level member can be used for versioning the methods to allow the use of a deprecated API, but in practical use is always 1. - • + • The optional argc member specifies the minimal number of arguments that the method expects. - • + • The optional stream member, if set to any value, specifies that the method supports returning multiple values from a single call to subsequent requests. OpenSRF automatically creates a corresponding method with ".atomic" appended to its name that returns the complete set of results in a single request. Streaming methods are useful if you are returning hundreds of records and want to act on the results as they return. - • + • The optional signature member is a hash that describes the method’s purpose, arguments, and return value. - • + • The desc member of the signature hash describes the method’s purpose. - • + • The params member of the signature hash is an array of hashes in which each array element describes the corresponding method argument in order. - • + • The name member of the argument hash specifies the name of the argument. - • + • The desc member of the argument hash describes the argument’s purpose. - • + • The type member of the argument hash specifies the data type of the argument: for example, string, integer, boolean, number, array, or hash. - • + • The return member of the signature hash is a hash that describes the return value of the method. - • + • The desc member of the return hash describes the return value. - • + • The type member of the return hash specifies the data type of the return value: for example, string, integer, boolean, number, array, or hash. @@ -9369,92 +10581,92 @@ http://localhost/osrf-http-translator Evergreen is currently the primary showcase for the use of OpenSRF as an application architecture. Evergreen 1.6.1 includes the following set of OpenSRF services: - • + • The open-ils.actor service supports common tasks for working with user accounts and libraries. 
- • + • The open-ils.auth service supports authentication of Evergreen users. - • + • The open-ils.booking service supports the management of reservations for bookable items. - • + • The open-ils.cat service supports common cataloging tasks, such as creating, modifying, and merging bibliographic and authority records. - • + • The open-ils.circ service supports circulation tasks such as checking out items and calculating due dates. - • + • The open-ils.collections service supports tasks that assist collections agencies in contacting users with outstanding fines above a certain threshold. - • + • The open-ils.cstore private service supports unrestricted access to Evergreen fieldmapper objects. - • + • The open-ils.ingest private service supports tasks for importing data such as bibliographic and authority records. - • + • The open-ils.pcrud service supports permission-based access to Evergreen fieldmapper objects. - • + • The open-ils.penalty penalty service supports the calculation of penalties for users, such as being blocked from further borrowing, for conditions such as having too many items checked out or too many unpaid fines. - • + • The open-ils.reporter service supports the creation and scheduling of reports. - • + • The open-ils.reporter-store private service supports access to Evergreen fieldmapper objects for the reporting service. - • + • The open-ils.search service supports searching across bibliographic records, authority records, serial records, Z39.50 sources, and ZIP codes. - • + • The open-ils.storage private service supports a deprecated method of providing access to Evergreen fieldmapper objects. Implemented in Perl, this service has largely been replaced by the much faster C-based open-ils.cstore service. - • + • The open-ils.supercat service supports transforms of MARC records into other formats, such as MODS, as well as providing Atom and RSS feeds and SRU access. 
- • + • The open-ils.trigger private service supports event-based triggers for actions such as overdue and holds available notification emails. - • + • The open-ils.vandelay service supports the import and export of batches of bibliographic and authority records. @@ -9477,53 +10689,53 @@ http://localhost/osrf-http-translator - Chapter 22. Evergreen Data Models and AccessChapter 22. Evergreen Data Models and Access + Chapter 25. Evergreen Data Models and AccessChapter 25. Evergreen Data Models and Access Report errors in this documentation using Launchpad. - Chapter 22. Evergreen Data Models and Access + Chapter 25. Evergreen Data Models and Access Report any errors in this documentation using Launchpad. - Chapter 22. Evergreen Data Models and AccessChapter 22. Evergreen Data Models and Access + Chapter 25. Evergreen Data Models and AccessChapter 25. Evergreen Data Models and Access This chapter was taken from Dan Scott's Developer Workshop, February 2010. Exploring the Database SchemaExploring the Database Schema - The database schema is tied pretty tightly to PostgreSQL. Although PostgreSQL - adheres closely to ANSI SQL standards, the use of schemas, SQL functions + The database schema is tied pretty tightly to PostgreSQL. Although PostgreSQL + adheres closely to ANSI SQL standards, the use of schemas, SQL functions implemented in both plpgsql and plperl, and PostgreSQL’s native full-text search would make it… challenging… to port to other database platforms. A few common PostgreSQL interfaces for poking around the schema and manipulating data are: - • + • - psql (the command line client) + psql (the command line client) - • + • - pgadminIII (a GUI client). + pgadminIII (a GUI client). Or you can read through the source files in Open-ILS/src/sql/Pg. 
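Along with reading the SQL source files, you can poke around the schema interactively from psql itself. A brief sketch using standard psql meta-commands (output elided; the table names are real Evergreen tables, but treat this as an illustration rather than a complete session):

```sql
-- list all schemas in the database
evergreen=# \dn

-- list the tables in the actor schema
evergreen=# \dt actor.*

-- describe a single table, including its indexes and foreign keys
evergreen=# \d actor.org_unit
```

The `\d` family of meta-commands is often the fastest way to confirm a column's data type or trace a foreign key before writing a query.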
Let’s take a quick tour through the schemas, pointing out some highlights and some key interdependencies: - • + • actor.org_unit → asset.copy_location - • + • actor.usr → actor.card - • + • biblio.record_entry → asset.call_number → asset.copy - • + • config.metabib_field → metabib.*_field_entry - This documentation also contains an Appendix for the Evergreen Chapter 26, Database Schema. + This documentation also contains an Appendix for the Evergreen Chapter 29, Database Schema. Database access methodsDatabase access methods @@ -9536,7 +10748,7 @@ http://localhost/osrf-http-translator Evergreen Interface Definition Language (IDL)Evergreen Interface Definition Language (IDL) - + Defines properties and required permissions for Evergreen classes. To reduce network overhead, a given object is identified via a class-hint and serialized as a JSON array of properties (no named properties). @@ -9544,7 +10756,7 @@ http://localhost/osrf-http-translator fm_IDL.xml file and makes DRY people happier :) - • + • … oils_persist:readonly tells us, if true, that the data lives in the database, but is pulled from the SELECT statement defined in the <oils_persist:source_definition> child element @@ -9552,7 +10764,7 @@ http://localhost/osrf-http-translator IDL basic example (config.language_map)IDL basic example (config.language_map) - + <class id="clm" controller="open-ils.cstore open-ils.pcrud" oils_obj:fieldmapper="config::language_map" oils_persist:tablename="config.language_map" @@ -9575,34 +10787,34 @@ http://localhost/osrf-http-translator - + The class element defines the attributes and permissions for classes, and relationships between classes. - - • + + • The id attribute on the class element defines the class hint that is used everywhere in Evergreen. - • + • The controller attribute defines the OpenSRF services that provide access to the data for the class objects. 
- + The oils_obj::fieldmapper attribute defines the name of the class that is generated by OpenILS::Utils::Fieldmapper. - + The oils_persist:tablename attribute defines the name of the table that contains the data for the class objects. - + The reporter interface uses reporter:label attribute values in the source list to provide meaningful class and attribute names. The @@ -9610,7 +10822,7 @@ http://localhost/osrf-http-translator oils_persist:field_safe is true. For example, - + srfsh# request open-ils.fielder open-ils.fielder.clm.atomic \ {"query":{"code":{"=":"eng"}}} @@ -9623,57 +10835,57 @@ Received Data: [ ] - + The fields element defines the list of fields for the class. - - • + + • The oils_persist:primary attribute defines the column that acts as the primary key for the table. - • + • The oils_persist:sequence attribute holds the name of the database sequence. - + Each field element defines one property of the class. - - • + + • The name attribute defines the getter/setter method name for the field. - • + • The reporter:label attribute defines the attribute name as used in the reporter interface. - • + • The reporter:selector attribute defines the field used in the reporter filter interface to provide a selectable list. This gives the user a more meaningful access point than the raw numeric ID or abstract code. - • + • The reporter:datatype attribute defines the type of data held by this property for the purposes of the reporter. - + The oils_persist:i18n attribute, when true, means that translated values for the field’s contents may be accessible in different locales. - + - + The permacrud element defines the permissions (if any) required to create, retrieve, update, and delete data for this @@ -9681,32 +10893,32 @@ Received Data: [ for the permissions to be applied. - + Each action requires one or more permission values that the user must possess to perform the action. 
- • + • If the global_required attribute is true, then the user must have been granted that permission globally (depth = 0) to perform the action. - • + • The context_field attribute denotes the <field> that identifies the org_unit at which the user must have the pertinent permission. - • - + • + An action element may contain a <context_field> element that defines the linked class (identified by the link attribute) and the field in the linked class that identifies the org_unit where the permission must be held. - • - + • + If the <context_field> element contains a jump attribute, then it defines a link to a link to a class with a field identifying @@ -9718,40 +10930,40 @@ Received Data: [ Reporter data types and their possible valuesReporter data types and their possible values - • + • bool: Boolean true or false - • + • id: ID of the row in the database - • + • int: integer value - • + • interval: PostgreSQL time interval - • + • link: link to another class, as defined in the <links> element of the class definition - • + • money: currency amount - • + • org_unit: list of org_units - • + • text: text value - • + • timestamp: PostgreSQL timestamp @@ -9764,7 +10976,7 @@ Received Data: [ can contain fields that link to fields in other classes. The <links> element defines which fields link to fields in other classes, and the nature of the relationship: - + <class id="aws" controller="open-ils.cstore" oils_obj:fieldmapper="actor::workstation" oils_persist:tablename="actor.workstation" @@ -9790,16 +11002,16 @@ Received Data: [ </links> </class> - + This field includes an oils_persist:virtual attribute with the value of true, meaning that the linked class circ is a virtual class. - + The <links> element contains 0 or more <link> elements. 
- + Each <link> element defines the field (field) that links to a different class (class), the relationship (rel_type) between this field and the target @@ -9812,38 +11024,38 @@ Received Data: [ open-ils.cstore data access interfacesopen-ils.cstore data access interfaces - + For each class documented in the IDL, the open-ils.cstore service automatically generates a set of data access methods, based on the oils_persist:tablename class attribute. For example, for the class hint clm, cstore generates the following methods with the config.language_map qualifer: - • + • open-ils.cstore.direct.config.language_map.id_list {"code" { "like": "e%" } } Retrieves a list composed only of the IDs that match the query. - • + • open-ils.cstore.direct.config.language_map.retrieve "eng" Retrieves the object that matches a specific ID. - • + • open-ils.cstore.direct.config.language_map.search {"code" : "eng"} Retrieves a list of objects that match the query. - • + • open-ils.cstore.direct.config.language_map.create <_object_> Creates a new object from the passed in object. - • + • open-ils.cstore.direct.config.language_map.update <_object_> Updates the object that has been passed in. - • + • open-ils.cstore.direct.config.language_map.delete "eng" @@ -9852,33 +11064,33 @@ Received Data: [ open-ils.pcrud data access interfacesopen-ils.pcrud data access interfaces - + For each class documented in the IDL, the open-ils.pcrud service automatically generates a set of data access methods, based on the oils_persist:tablename class attribute. 
For example, for the class hint clm, open-ils.pcrud generates the following methods that parallel the open-ils.cstore interface: - • + • open-ils.pcrud.id_list.clm <_authtoken_>, { "code": { "like": "e%" } } - • + • open-ils.pcrud.retrieve.clm <_authtoken_>, "eng" - • + • open-ils.pcrud.search.clm <_authtoken_>, { "code": "eng" } - • + • open-ils.pcrud.create.clm <_authtoken_>, <_object_> - • + • open-ils.pcrud.update.clm <_authtoken_>, <_object_> - • + • open-ils.pcrud.delete.clm <_authtoken_>, "eng" @@ -9889,27 +11101,27 @@ Received Data: [ Both open-ils.cstore and open-ils.pcrud enable you to control database transactions to ensure that a set of operations either all succeed, or all fail, atomically: - • + • open-ils.cstore.transaction.begin - • + • open-ils.cstore.transaction.commit - • + • open-ils.cstore.transaction.rollback - • + • open-ils.pcrud.transaction.begin - • + • open-ils.pcrud.transaction.commit - • + • open-ils.pcrud.transaction.rollback @@ -9917,27 +11129,27 @@ Received Data: [ At a more granular level, open-ils.cstore and open-ils.pcrud enable you to set database savepoints to ensure that a set of operations either all succeed, or all fail, atomically, within a given transaction: - • + • open-ils.cstore.savepoint.begin - • + • open-ils.cstore.savepoint.commit - • + • open-ils.cstore.savepoint.rollback - • + • open-ils.pcrud.savepoint.begin - • + • open-ils.pcrud.savepoint.commit - • + • open-ils.pcrud.savepoint.rollback @@ -9947,19 +11159,19 @@ Received Data: [ In srfsh, you can open a stateful connection using the open command, and then close the stateful connection using the close command - for example: - srfsh# open open-ils.cstore + srfsh# open open-ils.cstore ... 
perform various transaction-related work srfsh# close open-ils.cstore JSON QueriesJSON Queries - + Beyond simply retrieving objects by their ID using the \*.retrieve methods, you can issue queries against the \*.delete and \*.search methods using JSON to filter results with simple or complex search conditions. For example, to generate a list of barcodes that are held in a copy location that allows holds and is visible in the OPAC: - + srfsh# request open-ils.cstore open-ils.cstore.json_query {"select": {"acp":["barcode"], "acpl":["name"]}, "from": {"acp":"acpl"}, @@ -9978,32 +11190,32 @@ Received Data: { "name":"Stacks" } - + Invoke the json_query service. - + Select the barcode field from the acp class and the name field from the acpl class. - + Join the acp class to the acpl class based on the linked field defined in the IDL. - + Add a where clause to filter the results. We have more than one condition beginning with the same key, so we wrap the conditions inside an array. - + The first condition tests whether the boolean value of the holdable field on the acpl class is true. - + The second condition tests whether the boolean value of the opac_visible field on the acpl class is true. @@ -10034,7 +11246,7 @@ Received Data: { open-ils.cstore.direct.\*.retrieve methods allow you to specify a JSON structure defining the fields you wish to flesh in the returned object. Fleshing fields in objects returned by open-ils.cstore.  - + srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ { "flesh": 1, @@ -10044,13 +11256,13 @@ srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ } - + The flesh argument is the depth at which objects should be fleshed. For example, to flesh out a field that links to another object that includes a field that links to another object, you would specify a depth of 2. - + The flesh_fields argument contains a list of objects with the fields to flesh for each object. 
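The flesh and flesh_fields arguments are ordinary JSON values, so any client can build them programmatically rather than typing them by hand in srfsh. A minimal sketch in Python (this only illustrates the shape of the argument passed to open-ils.cstore.direct.asset.copy.retrieve; the OpenSRF call itself is not shown, and the fleshed field names are taken from the example above):

```python
import json

# Second argument to open-ils.cstore.direct.asset.copy.retrieve:
# flesh one level deep, filling in the linked objects for the
# "location" and "status" fields of the acp (asset.copy) class.
flesh_spec = {
    "flesh": 1,
    "flesh_fields": {"acp": ["location", "status"]},
}

# srfsh and the OpenSRF HTTP translator both expect this
# structure serialized as JSON.
print(json.dumps(flesh_spec, sort_keys=True))
```

Because the structure is plain JSON, the same dictionary can be reused with a deeper "flesh" value and additional class hints in "flesh_fields" to flesh nested objects, exactly as the two-level call number / bibliographic record example below does.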
@@ -10060,7 +11272,7 @@ srfsh# request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ let’s also flesh the call number attached to the copy, and then flesh the bibliographic record attached to the call number. Fleshing fields in fields of objects returned by open-ils.cstore.  - + request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ { "flesh": 2, @@ -10091,71 +11303,71 @@ request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ oils_persist:virtual class attribute to true. So, let’s add an entry to the IDL for the open-ils.resolver.resolve_holdings service: - + And let’s make ResolverResolver.pm return an array composed of our new rhr classes rather than raw JSON objects: - + Once we add the new entry to the IDL and copy the revised ResolverResolver.pm Perl module to /openils/lib/perl5/OpenILS/Application/, we need to: - 1. + 1. Copy the updated IDL to both the /openils/conf/ and /openils/var/web/reports/ directories. The Dojo approach to parsing the IDL uses the IDL stored in the reports directory. - 2. + 2. Restart the Perl services to make the new IDL visible to the services and refresh the open-ils.resolver implementation - 3. + 3. - Rerun /openils/bin/autogen.sh to regenerate the JavaScript versions + Rerun /openils/bin/autogen.sh to regenerate the JavaScript versions of the IDL required by the HTTP translator and gateway. - We also need to adjust our JavaScript client to use the nifty new + We also need to adjust our JavaScript client to use the nifty new objects that open-ils.resolver.resolve_holdings now returns. - The best approach is to use the support in Evergreen’s Dojo extensions + The best approach is to use the support in Evergreen’s Dojo extensions to generate the JavaScript classes directly from the IDL XML file. Accessing classes defined in the IDL via Fieldmapper.  - + - + Load the Dojo core. - + fieldmapper.AutoIDL reads /openils/var/reports/fm_IDL.xml to generate a list of class properties. 
- + fieldmapper.dojoData seems to provide a store for Evergreen data accessed via Dojo. - + fieldmapper.Fieldmapper converts the list of class properties into actual classes. - + fieldmapper.standardRequest invokes an OpenSRF method and returns an array of objects. - + The first argument to fieldmapper.standardRequest is an array containing the OpenSRF service name and method name. - + The second argument to fieldmapper.standardRequest is an array containing the arguments to pass to the OpenSRF method. - + As Fieldmapper has instantiated the returned objects based on their class hints, we can invoke getter/setter methods on the objects. @@ -10164,19 +11376,19 @@ request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ - Chapter 23. Introduction to SQL for Evergreen AdministratorsChapter 23. Introduction to SQL for Evergreen Administrators + Chapter 26. Introduction to SQL for Evergreen AdministratorsChapter 26. Introduction to SQL for Evergreen Administrators Report errors in this documentation using Launchpad. - Chapter 23. Introduction to SQL for Evergreen Administrators + Chapter 26. Introduction to SQL for Evergreen Administrators Report any errors in this documentation using Launchpad. - Chapter 23. Introduction to SQL for Evergreen AdministratorsChapter 23. Introduction to SQL for Evergreen Administrators + Chapter 26. Introduction to SQL for Evergreen AdministratorsChapter 26. Introduction to SQL for Evergreen Administrators This chapter was taken from Dan Scott's Introduction to SQL for Evergreen Administrators, February 2010. Introduction to SQL DatabasesIntroduction to SQL Databases - - IntroductionIntroduction + + IntroductionIntroduction Over time, the SQL database has become the standard method of storing, retrieving, and processing raw data for applications. 
Ranging from embedded @@ -10195,7 +11407,7 @@ request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ TablesTables - + The table is the cornerstone of a SQL database. Conceptually, a database table is similar to a single sheet in a spreadsheet: every table has one or more columns, with each row in the table containing values for each column. Each @@ -10204,7 +11416,7 @@ request open-ils.cstore open-ils.cstore.direct.asset.copy.retrieve 1, \ worry if the INSERT statement is completely unfamiliar, we’ll talk more about the syntax of the insert statement later. actor.usr_note database table.  - + evergreen=# INSERT INTO actor.usr_note (usr, creator, pub, title, value) VALUES (1, 1, TRUE, 'Who is this guy?', 'He''s the administrator!'); @@ -10229,13 +11441,13 @@ evergreen=# select id, usr, creator, pub, title, value from actor.usr_note; schemas as namespaces if you’re a programmer; or you might think of the schema / table / column relationship like the area code / exchange / local number structure of a telephone number. - Table 23.1. Examples: database object namesFull name Schema name Table name Field nameactor.usr_note.titleactorusr_notetitlebiblio.record_entry.marcbibliorecord_entrymarc + Table 26.1. Examples: database object namesFull name Schema name Table name Field nameactor.usr_note.titleactorusr_notetitlebiblio.record_entry.marcbibliorecord_entrymarc The default schema name in PostgreSQL is public, so if you do not specify a schema name when creating or accessing a database object, PostgreSQL will use the public schema. As a result, you might not find the object that you’re looking for if you don’t use the appropriate schema. Example: Creating a table without a specific schema.  - + evergreen=# CREATE TABLE foobar (foo TEXT, bar TEXT); CREATE TABLE evergreen=# \d foobar @@ -10247,7 +11459,7 @@ evergreen=# \d foobar Example: Trying to access a unqualified table outside of the public schema.  
- evergreen=# SELECT * FROM usr_note; + evergreen=# SELECT * FROM usr_note; ERROR: relation "usr_note" does not exist LINE 1: SELECT * FROM usr_note; ^ @@ -10255,8 +11467,8 @@ evergreen=# \d foobar Evergreen uses schemas to organize all of its tables with mostly intuitive, if short, schema names. Here’s the current (as of 2010-01-03) list of schemas used by Evergreen: - Table 23.2. Evergreen schema namesSchema name DescriptionacqAcquisitionsactionCirculation actionsaction_triggerEvent mechanismsactorEvergreen users and organization unitsassetCall numbers and copiesauditorTrack history of changes to selected tablesauthorityAuthority recordsbiblioBibliographic recordsbookingResource bookingsconfigEvergreen configurable optionscontainerBuckets for records, call numbers, copies, and usersextend_reporterExtra views for report definitionsmetabibMetadata about bibliographic recordsmoneyFines and billsofflineOffline transactionspermissionUser permissionsqueryStored SQL statementsreporterReport definitionssearchSearch functionsserialSerial MFHD recordsstatsConvenient views of circulation and asset statisticsvandelayMARC batch importer and exporter - The term schema has two meanings in the world of SQL databases. We have + Table 26.2. 
Evergreen schema namesSchema name DescriptionacqAcquisitionsactionCirculation actionsaction_triggerEvent mechanismsactorEvergreen users and organization unitsassetCall numbers and copiesauditorTrack history of changes to selected tablesauthorityAuthority recordsbiblioBibliographic recordsbookingResource bookingsconfigEvergreen configurable optionscontainerBuckets for records, call numbers, copies, and usersextend_reporterExtra views for report definitionsmetabibMetadata about bibliographic recordsmoneyFines and billsofflineOffline transactionspermissionUser permissionsqueryStored SQL statementsreporterReport definitionssearchSearch functionsserialSerial MFHD recordsstatsConvenient views of circulation and asset statisticsvandelayMARC batch importer and exporter + The term schema has two meanings in the world of SQL databases. We have discussed the schema as a conceptual grouping of tables and other database objects within a given namespace; for example, "the actor schema contains the tables and functions related to users and organizational units". Another common @@ -10266,23 +11478,23 @@ evergreen=# \d foobar ColumnsColumns Each column definition consists of: - • + • a data type - • + • (optionally) a default value to be used whenever a row is inserted that does not contain a specific value - • + • (optionally) one or more constraints on the values beyond data type Although PostgreSQL supports dozens of data types, Evergreen makes our life easier by only using a handful. - Table 23.3. 
PostgreSQL data types used by EvergreenType name Description LimitsINTEGERMedium integer-2147483648 to +2147483647BIGINTLarge integer-9223372036854775808 to 9223372036854775807SERIALSequential integer1 to 2147483647BIGSERIALLarge sequential integer1 to 9223372036854775807TEXTVariable length character dataUnlimited lengthBOOLBooleanTRUE or FALSETIMESTAMP WITH TIME ZONETimestamp4713 BC to 294276 ADTIMETimeExpressed in HH:MM:SSNUMERIC(precision, scale)DecimalUp to 1000 digits of precision. In Evergreen mostly used for money + Table 26.3. PostgreSQL data types used by EvergreenType name Description LimitsINTEGERMedium integer-2147483648 to +2147483647BIGINTLarge integer-9223372036854775808 to 9223372036854775807SERIALSequential integer1 to 2147483647BIGSERIALLarge sequential integer1 to 9223372036854775807TEXTVariable length character dataUnlimited lengthBOOLBooleanTRUE or FALSETIMESTAMP WITH TIME ZONETimestamp4713 BC to 294276 ADTIMETimeExpressed in HH:MM:SSNUMERIC(precision, scale)DecimalUp to 1000 digits of precision. In Evergreen mostly used for money values, with a precision of 6 and a scale of 2 (####.##). Full details about these data types are available from the data types section of @@ -10306,7 +11518,7 @@ evergreen=# \d foobar Natural primary keys are drawn from the intrinsic properties of the data being modelled. For example, some potential natural primary keys for a table that contains people would be: - Table 23.4. Example: Some potential natural primary keys for a table of peopleNatural key Pros ConsFirst name, last name, addressNo two people with the same name would ever live at the same address, right?Lots of columns force data duplication in referencing tablesSSN or driver’s licenseThese are guaranteed to be uniqueLots of people don’t have an SSN or a driver’s license + Table 26.4. 
Example: Some potential natural primary keys for a table of peopleNatural key Pros ConsFirst name, last name, addressNo two people with the same name would ever live at the same address, right?Lots of columns force data duplication in referencing tablesSSN or driver’s licenseThese are guaranteed to be uniqueLots of people don’t have an SSN or a driver’s license To avoid problems with natural keys, many applications instead define surrogate primary keys. A surrogate primary keys is a column with an autoincrementing integer value added to a table definition that ensures uniqueness. @@ -10322,7 +11534,7 @@ evergreen=# \d foobar contained in the biblio.record_entry table can have call numbers attached to them. Call numbers are contained in the asset.call_number table, and they can have copies attached to them. Copies are contained in the asset.copy table. - Table 23.5. Example: Evergreen’s copy / call number / bibliographic record relationshipsTable Primary key Column with a foreign key Points toasset.copyasset.copy.idasset.copy.call_numberasset.call_number.idasset.call_numberasset.call_number.idasset.call_number.recordbiblio.record_entry.idbiblio.record_entrybiblio.record_entry.id + Table 26.5. Example: Evergreen’s copy / call number / bibliographic record relationshipsTable Primary key Column with a foreign key Points toasset.copyasset.copy.idasset.copy.call_numberasset.call_number.idasset.call_numberasset.call_number.idasset.call_number.recordbiblio.record_entry.idbiblio.record_entrybiblio.record_entry.id Check constraintsCheck constraints @@ -10337,7 +11549,7 @@ evergreen=# \d foobar The actor.org_address table is a simple table in the Evergreen schema that we can use as a concrete example of many of the properties of databases that we have discussed so far. 
- + CREATE TABLE actor.org_address ( id SERIAL PRIMARY KEY, valid BOOL NOT NULL DEFAULT TRUE, @@ -10353,7 +11565,7 @@ CREATE TABLE actor.org_address ( post_code TEXT NOT NULL ); - + The column named id is defined with a special data type of SERIAL; if given no value when a row is inserted into a table, the database automatically @@ -10361,7 +11573,7 @@ CREATE TABLE actor.org_address ( popular data type for a primary key because it is guaranteed to be unique - and indeed, the constraint for this column identifies it as the PRIMARY KEY. - + The data type BOOL defines a boolean value: TRUE or FALSE are the only acceptable values for the column. The constraint NOT NULL instructs the @@ -10369,20 +11581,20 @@ CREATE TABLE actor.org_address ( property DEFAULT TRUE instructs the database to automatically set the value of the column to TRUE if no value is provided. - + The data type TEXT defines a text column of practically unlimited length. As with the previous column, there is a NOT NULL constraint, and a default value of 'MAILING' will result if no other value is supplied. - + The REFERENCES actor.org_unit (id) clause indicates that this column has a foreign key relationship to the actor.org_unit table, and that the value of this column in every row in this table must have a corresponding value in the id column in the referenced table (actor.org_unit). - + The column named street2 demonstrates that not all columns have constraints beyond data type. In this case, the column is allowed to be NULL or to contain a @@ -10404,7 +11616,7 @@ CREATE TABLE actor.org_address ( To display the definition of a database object such as a table, issue the command \d _object-name_. For example, to display the definition of the actor.usr_note table: - + $ psql evergreen psql (8.4.1) Type "help" for help. @@ -10431,16 +11643,16 @@ Foreign-key constraints: evergreen=# \q $ - + This is the most basic connection to a PostgreSQL database. 
You can use a number of other flags to specify user name, hostname, port, and other options. - + The \d command displays the definition of a database object. - + The \q command quits the psql session and returns you to the shell prompt. @@ -10453,7 +11665,7 @@ $ The SELECT statement is the basic tool for retrieving information from a database. The syntax for most SELECT statements is: - SELECT [columns(s)] + SELECT [columns(s)] FROM [table(s)] [WHERE condition(s)] [GROUP BY columns(s)] @@ -10464,7 +11676,7 @@ $ ; For example, to select all of the columns for each row in the actor.usr_address table, issue the following query: - SELECT * + SELECT * FROM actor.usr_address ; @@ -10476,7 +11688,7 @@ $ you want after the SELECT keyword. Separate each column name with a comma. For example, to select just the city, county, and state from the actor.usr_address table, issue the following query: - SELECT city, county, state + SELECT city, county, state FROM actor.usr_address ; @@ -10490,7 +11702,7 @@ $ For example, to sort the rows returned from your actor.usr_address query by city, with county and then zip code as the tie breakers, issue the following query: - + SELECT city, county, state FROM actor.usr_address ORDER BY city, county, post_code @@ -10508,7 +11720,7 @@ SELECT city, county, state For example, to restrict the results returned from your actor.usr_address query to only those rows containing a state value of Connecticut, issue the following query: - + SELECT city, county, state FROM actor.usr_address WHERE state = 'Connecticut' @@ -10520,7 +11732,7 @@ SELECT city, county, state actor.usr_address query to only those rows where the state column contains a value of Connecticut and the city column contains a value of Hartford, issue the following query: - + SELECT city, county, state FROM actor.usr_address WHERE state = 'Connecticut' @@ -10528,13 +11740,13 @@ SELECT city, county, state ORDER BY city, county, post_code ; - To return rows where the state is Connecticut 
and the city is Hartford or + To return rows where the state is Connecticut and the city is Hartford or New Haven, you must use parentheses to explicitly group the city value conditions together, or else the database will evaluate the OR city = 'New Haven' clause entirely on its own and match all rows where the city column is New Haven, even though the state might not be Connecticut. Trouble with OR.  - + SELECT city, county, state FROM actor.usr_address WHERE state = 'Connecticut' @@ -10546,7 +11758,7 @@ SELECT city, county, state Grouped OR’ed conditions.  - + SELECT city, county, state FROM actor.usr_address WHERE state = 'Connecticut' @@ -10564,34 +11776,34 @@ SELECT city, county, state WHERE clauses: Comparing two scalar valuesComparing two scalar values - • + • x = y (equal to) - • + • x != y (not equal to) - • + • x < y (less than) - • + • x > y (greater than) - • + • x LIKE y (TEXT value x matches a subset of TEXT y, where y is a string that can contain % as a wildcard for 0 or more characters, and _ as a wildcard for a single character. For example, WHERE 'all you can eat fish and chips and a big stick' LIKE '%fish%stick' would return TRUE) - • + • x ILIKE y (like LIKE, but the comparison ignores upper-case / lower-case) - • + • x IN y (x is in the list of values y, where y can be a list or a SELECT statement that returns a list) @@ -10609,7 +11821,7 @@ SELECT city, county, state To return rows from a table where a given column is not NULL, use the IS NOT NULL comparison operator. Retrieving rows where a column is not NULL.  - + SELECT id, first_given_name, family_name FROM actor.usr WHERE second_given_name IS NOT NULL @@ -10619,7 +11831,7 @@ SELECT id, first_given_name, family_name Similarly, to return rows from a table where a given column is NULL, use the IS NULL comparison operator. Retrieving rows where a column is NULL.  
- + SELECT id, first_given_name, second_given_name, family_name FROM actor.usr WHERE second_given_name IS NULL @@ -10635,7 +11847,7 @@ SELECT id, first_given_name, second_given_name, family_name indistinguishable from an empty string; this is the default display method in psql. You can change the behaviour of psql using the pset command: Changing the way NULL values are displayed in psql.  - + evergreen=# \pset null '(null)' Null display is '(null)'. @@ -10664,7 +11876,7 @@ SELECT id, first_given_name, second_given_name, family_name For example, to change the last name of a user in the actor.usr table to L’estat, issue the following SQL: Escaping ' in TEXT values.  - + UPDATE actor.usr SET family_name = 'L''estat' WHERE profile IN ( @@ -10676,7 +11888,7 @@ UPDATE actor.usr When you retrieve the row from the database, the value is displayed with just a single ' character: - + SELECT id, family_name FROM actor.usr WHERE family_name = 'L''estat' @@ -10695,7 +11907,7 @@ SELECT id, family_name results for a range of values in a single query, rather than requiring you to issue one query per target value. Returning unique results of a single column with GROUP BY.  - + SELECT grp FROM permission.grp_perm_map GROUP BY grp @@ -10719,7 +11931,7 @@ SELECT grp following query shows us which groups have permissions at each depth in the library hierarchy: Returning unique results of multiple columns with GROUP BY.  - + SELECT grp, depth FROM permission.grp_perm_map GROUP BY grp, depth @@ -10750,7 +11962,7 @@ SELECT grp, depth appears in the table. Yes, this is a sneak peek at the use of aggregate functions! Keeners. Counting unique column combinations with GROUP BY.  - + SELECT grp, depth, COUNT(grp) FROM permission.grp_perm_map GROUP BY grp, depth @@ -10780,7 +11992,7 @@ SELECT grp, depth, COUNT(grp) is applied to the results. The following query restricts the results to those rows that have a depth of 0. Using the WHERE clause with GROUP BY.  
- + SELECT grp, COUNT(grp) FROM permission.grp_perm_map WHERE depth = 0 @@ -10805,7 +12017,7 @@ SELECT grp, COUNT(grp) the following query restricts the returned rows to those that have more than 5 occurrences of the same value for grp in the table. GROUP BY restricted by a HAVING clause.  - + SELECT grp, COUNT(grp) FROM permission.grp_perm_map GROUP BY grp @@ -10835,7 +12047,7 @@ SELECT grp, COUNT(grp) them, then applying the DISTINCT keyword might be a sign that you are papering over a real problem. Returning unique results of multiple columns with DISTINCT.  - + SELECT DISTINCT grp, depth FROM permission.grp_perm_map ORDER BY depth, grp @@ -10869,7 +12081,7 @@ SELECT DISTINCT grp, depth example, in the following query we list the five most frequently used circulation modifiers: Using the LIMIT clause to restrict results.  - + SELECT circ_modifier, COUNT(circ_modifier) FROM asset.copy GROUP BY circ_modifier @@ -10894,7 +12106,7 @@ SELECT circ_modifier, COUNT(circ_modifier) In the following example, we use the OFFSET clause to get results 6 through 10 from the same query that we prevously executed. Using the OFFSET clause to return a specific subset of rows.  - + SELECT circ_modifier, COUNT(circ_modifier) FROM asset.copy GROUP BY circ_modifier @@ -10932,7 +12144,7 @@ SELECT circ_modifier, COUNT(circ_modifier) used to convert text values to upper-case, modifies the results in the following set of queries: Using the UPPER() scalar function to convert text values to upper-case.  
- + -- First, without the UPPER() function for comparison SELECT shortname, name FROM actor.org_unit @@ -10962,27 +12174,27 @@ SELECT shortname, UPPER(name) There are so many scalar functions in PostgreSQL that we cannot cover them all here, but we can list some of the most commonly used functions: - • + • || - concatenates two text values together - • + • COALESCE() - returns the first non-NULL value from the list of arguments - • + • LOWER() - returns a text value converted to lower-case - • + • REPLACE() - returns a text value after replacing all occurrences of a given text value with a different text value - • + • REGEXP_REPLACE() - returns a text value after being transformed by a regular expression - • + • UPPER() - returns a text value converted to upper-case @@ -10994,23 +12206,23 @@ SELECT shortname, UPPER(name) Aggregate functions return a single value computed from the the complete set of values returned for the specified column. - • + • AVG() - • + • COUNT() - • + • MAX() - • + • MIN() - • + • SUM() @@ -11029,7 +12241,7 @@ SELECT shortname, UPPER(name) returned by the main SELECT statement to only those locations that have an opac_visible value of TRUE: Sub-select example.  - + SELECT call_number FROM asset.copy WHERE deleted IS FALSE @@ -11080,7 +12292,7 @@ SELECT call_number we have to fully qualify the column names in our queries with the schema and table names. A simple inner join.  - + SELECT * FROM actor.usr INNER JOIN actor.org_unit ON actor.usr.home_ou = actor.org_unit.id @@ -11122,7 +12334,7 @@ fiscal_calendar | 1 joining the actor.org_unit table to give us access to the user’s home library: Borrower Count by Profile (Adult, Child, etc)/Library.  - + SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree.name) FROM actor.usr INNER JOIN permission.grp_tree @@ -11159,7 +12371,7 @@ SELECT permission.grp_tree.name, actor.org_unit.name, COUNT(permission.grp_tree. 
example, we can write the previous INNER JOIN statement example using aliases instead of fully-qualified identifiers: Borrower Count by Profile (using aliases).  - + SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" FROM actor.usr au INNER JOIN permission.grp_tree pgt @@ -11187,19 +12399,19 @@ SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" An outer join returns all of the rows from one or both of the tables participating in the join. - • + • For a LEFT OUTER JOIN, the join returns all of the rows from the left-hand table and the rows matching the join condition from the right-hand table, with NULL values for the rows with no match in the right-hand table. - • + • A RIGHT OUTER JOIN behaves in the same way as a LEFT OUTER JOIN, with the exception that all rows are returned from the right-hand table participating in the join. - • + • For a FULL OUTER JOIN, the join returns all the rows from both the left-hand and right-hand tables, with NULL values for the rows with no match in either @@ -11207,7 +12419,7 @@ SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" Base tables for the OUTER JOIN examples.  - + SELECT * FROM aaa; id | stuff @@ -11231,7 +12443,7 @@ SELECT * FROM bbb; Example of a LEFT OUTER JOIN.  - + SELECT * FROM aaa LEFT OUTER JOIN bbb ON aaa.id = bbb.id ; @@ -11246,7 +12458,7 @@ SELECT * FROM aaa Example of a RIGHT OUTER JOIN.  - + SELECT * FROM aaa RIGHT OUTER JOIN bbb ON aaa.id = bbb.id ; @@ -11260,7 +12472,7 @@ SELECT * FROM aaa Example of a FULL OUTER JOIN.  - + SELECT * FROM aaa FULL OUTER JOIN bbb ON aaa.id = bbb.id ; @@ -11290,7 +12502,7 @@ SELECT * FROM aaa of columns with compatible data types: the union, intersection, and difference operators. Base tables for the set operation examples.  - + SELECT * FROM aaa; id | stuff @@ -11320,7 +12532,7 @@ SELECT * FROM bbb; does not return any duplicate rows. To return duplicate rows, use the UNION ALL operator. 
Example of a UNION set operation.  - + -- The parentheses are not required, but are intended to help -- illustrate the sets participating in the set operation ( @@ -11353,7 +12565,7 @@ ORDER BY 1 both the left-hand and right-hand tables. To return duplicate rows, use the INTERSECT ALL operator. Example of an INTERSECT set operation.  - + ( SELECT id, stuff FROM aaa @@ -11381,7 +12593,7 @@ ORDER BY 1 exist in the right-hand table. You are effectively subtracting the common rows from the left-hand table. Example of an EXCEPT set operation.  - + ( SELECT id, stuff FROM aaa @@ -11429,7 +12641,7 @@ ORDER BY 1 and a SELECT statement on which the view is built. The following example creates a view based on our borrower profile count: Creating a view.  - + CREATE VIEW actor.borrower_profile_count AS SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" FROM actor.usr au @@ -11451,7 +12663,7 @@ CREATE VIEW actor.borrower_profile_count AS Then we issue a SELECT statement with a WHERE clause to further filter the results. Selecting results from a view.  - + SELECT * FROM actor.borrower_profile_count; Profile | Library | Count @@ -11519,7 +12731,7 @@ SELECT * row in the biblio.record_entry table with every row in the metabib.full_rec view: Query plan for a terrible query.  - + EXPLAIN SELECT * FROM biblio.record_entry FULL OUTER JOIN metabib.full_rec ON 1=1 @@ -11540,7 +12752,7 @@ EXPLAIN SELECT * the left-hand table with every row in the right-hand table and take the saner approach of using an INNER JOIN where the join condition is on the record ID. Query plan for a less terrible query.  - + EXPLAIN SELECT * FROM biblio.record_entry bre INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id; @@ -11559,7 +12771,7 @@ EXPLAIN SELECT * the following example, we will limit the results to deleted records that were modified in the last month. Query plan for a realistic query.  
- + EXPLAIN SELECT * FROM biblio.record_entry bre INNER JOIN metabib.full_rec mfr ON mfr.record = bre.id @@ -11593,7 +12805,7 @@ EXPLAIN SELECT * flattened MARC subfields is a fairly common operation, we could create a new index and see if that speeds up our query plan. Query plan with optimized access via a new index.  - + -- This index will take a long time to create on a large database -- of bibliographic records CREATE INDEX bib_record_idx ON metabib.real_full_rec (record); @@ -11624,7 +12836,7 @@ EXPLAIN SELECT * the execution estimate has dropped to 1558330 because the query planner can use the new index (bib_record_idx) rather than scanning the entire table. Success! - While indexes can significantly speed up read access to tables for common + While indexes can significantly speed up read access to tables for common filtering conditions, every time a row is created or updated the corresponding indexes also need to be maintained - which can decrease the performance of writes to the database. Be careful to keep the balance of read performance @@ -11644,14 +12856,14 @@ EXPLAIN SELECT * (this multiple row syntax requires PostgreSQL 8.2 or higher). For example, to insert two rows into the permission.usr_grp_map table: Inserting rows into the permission.usr_grp_map table.  - INSERT INTO permission.usr_grp_map (usr, grp) + INSERT INTO permission.usr_grp_map (usr, grp) VALUES (2, 10), (2, 4) ; Of course, as with the rest of SQL, you can replace individual column values with one or more use sub-selects: Inserting rows using sub-selects instead of integers.  - + INSERT INTO permission.usr_grp_map (usr, grp) VALUES ( (SELECT id FROM actor.usr @@ -11681,7 +12893,7 @@ INSERT INTO permission.usr_grp_map (usr, grp) the grp column value based on the id column values returned from permission.grp_tree: Inserting rows via a SELECT statement.  
- + INSERT INTO permission.usr_grp_map (usr, grp) SELECT 1, id FROM permission.grp_tree @@ -11701,7 +12913,7 @@ INSERT 0 6 permission.grp_perm_map table where the permission maps to UPDATE_ORG_UNIT_CLOSING and the group is anything other than administrators: Deleting rows from a table.  - + DELETE FROM permission.grp_perm_map WHERE grp IN ( SELECT id @@ -11715,16 +12927,16 @@ DELETE FROM permission.grp_perm_map ; - There are two main reasons that a DELETE statement may not actually + There are two main reasons that a DELETE statement may not actually delete rows from a table, even when the rows meet the conditional clause. - 1. + 1. If the row contains a value that is the target of a relational constraint, for example, if another table has a foreign key pointing at your target table, you will be prevented from deleting a row with a value corresponding to a row in the dependent table. - 2. + 2. If the table has a rule that substitutes a different action for a DELETE statement, the deletion will not take place. In Evergreen it is common for a @@ -11746,7 +12958,7 @@ DELETE FROM permission.grp_perm_map respective new values, and (optionally) a WHERE clause identifying the row or rows that should be updated. Following is the syntax for the UPDATE statement: - UPDATE [table-name] + UPDATE [table-name] SET [column] TO [new-value] WHERE [condition] ; @@ -11759,7 +12971,7 @@ DELETE FROM permission.grp_perm_map Monthly circulation stats by collection code / libraryMonthly circulation stats by collection code / library Monthly Circulation Stats by Collection Code/Library.  
- + SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Location" FROM asset.copy ac INNER JOIN asset.copy_location acl ON ac.location = acl.id @@ -11778,7 +12990,7 @@ SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", acl.name AS "Copy Loca Monthly circulation stats by borrower stat / libraryMonthly circulation stats by borrower stat / library Monthly Circulation Stats by Borrower Stat/Library.  - + SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry AS "Borrower Stat" FROM action.circulation acirc INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id @@ -11798,7 +13010,7 @@ SELECT COUNT(acirc.id) AS "COUNT", aou.name AS "Library", asceum.stat_cat_entry Monthly intralibrary loan stats by libraryMonthly intralibrary loan stats by library Monthly Intralibrary Loan Stats by Library.  - + SELECT aou.name AS "Library", COUNT(acirc.id) FROM action.circulation acirc INNER JOIN actor.org_unit aou ON acirc.circ_lib = aou.id @@ -11818,7 +13030,7 @@ SELECT aou.name AS "Library", COUNT(acirc.id) Monthly borrowers added by profile (adult, child, etc) / libraryMonthly borrowers added by profile (adult, child, etc) / library Monthly Borrowers Added by Profile (Adult, Child, etc)/Library.  - + SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" FROM actor.usr au INNER JOIN permission.grp_tree pgt @@ -11836,7 +13048,7 @@ SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" Borrower count by profile (adult, child, etc) / libraryBorrower count by profile (adult, child, etc) / library Borrower Count by Profile (Adult, Child, etc)/Library.  - + SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" FROM actor.usr au INNER JOIN permission.grp_tree pgt @@ -11854,7 +13066,7 @@ SELECT pgt.name AS "Profile", aou.name AS "Library", COUNT(pgt.name) AS "Count" We define a “collection” as a shelving location in Evergreen. 
Monthly Items Added by Collection/Library.  - + SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) FROM actor.org_unit aou INNER JOIN asset.call_number acn ON acn.owning_lib = aou.id @@ -11877,7 +13089,7 @@ SELECT aou.name AS "Library", acl.name, COUNT(ac.barcode) bibliographic record ID and display the title / author information for those records that have more than a given threshold of holds. Hold Purchase Alert by Library.  - + -- Title holds SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_id) FROM @@ -11933,7 +13145,7 @@ SELECT all_holds.bib_id, aou.name, rmsr.title, rmsr.author, COUNT(all_holds.bib_ Then we issue an UPDATE statement to set the home library for patrons with a physical address with a city that matches the city names in our staging table. Update borrower records with a different home library.  - + CREATE SCHEMA staging; CREATE TABLE staging.city_home_ou_map (city TEXT, ou_shortname TEXT, FOREIGN KEY (ou_shortname) REFERENCES actor.org_unit (shortname)); @@ -11964,11 +13176,11 @@ WHERE ( - Chapter 24. JSON QueriesChapter 24. JSON Queries + Chapter 27. JSON QueriesChapter 27. JSON Queries Report errors in this documentation using Launchpad. - Chapter 24. JSON Queries + Chapter 27. JSON Queries Report any errors in this documentation using Launchpad. - Chapter 24. JSON QueriesChapter 24. JSON Queries + Chapter 27. JSON QueriesChapter 27. JSON Queries The json_query facility provides a way for client applications to query the database over the network. Instead of constructing its own SQL, the application encodes a query in the form of a JSON string and passes it to the json_query service. Then the json_query service parses the JSON, constructs and executes the corresponding SQL, and returns the results to @@ -11981,7 +13193,7 @@ WHERE ( Nevertheless, the need to encode a query in a JSON string adds complications, because the client needs to know how to build the right JSON. 
JSON queries are also somewhat limiting -- they can't do all of the things that you can do with raw SQL. - The IDLThe IDL + The IDLThe IDL A JSON query does not refer to tables and columns. Instead, it refers to classes and fields, which the IDL maps to the corresponding database entities. @@ -11998,27 +13210,27 @@ WHERE ( You should also be familiar with JSON. However it is worth defining a couple of terms that have other meanings in other contexts: - •An "object" is a JSON object, i.e. a comma-separated list of name:value pairs, enclosed in curly braces, like this: - { "a":"frobozz", "b":24, "c":null } - •An "array" is a JSON array, i.e. a comma-separated list of values, enclosed in square brackets, like this: - [ "Goober", 629, null, false, "glub" ] + •An "object" is a JSON object, i.e. a comma-separated list of name:value pairs, enclosed in curly braces, like this: + { "a":"frobozz", "b":24, "c":null } + •An "array" is a JSON array, i.e. a comma-separated list of values, enclosed in square brackets, like this: + [ "Goober", 629, null, false, "glub" ] - The ExamplesThe Examples + The ExamplesThe Examples The test_json_query utility generated the SQL for all of the sample queries in this tutorial. Newlines and indentation were then inserted manually for readability. All examples involve the actor.org_unit table, sometimes in combination with a few related tables. The queries themselves are designed to illustrate the syntax, not to do anything useful at the application level. For example, it's not meaningful to take the square root of an org_unit id, except to illustrate how to code a function call. The examples are like department store mannequins -- they have no brains, they're only for display. The simplest kind of query defines nothing but a FROM clause. For example: - + { "from":"aou" } In this minimal example we select from only one table. Later we will see how to join multiple tables. 
Since we don't supply a WHERE clause, json_query constructs a default WHERE clause for us, including all the available columns. The resulting SQL looks like this: - + SELECT "aou".billing_address AS "billing_address", "aou".holds_address AS "holds_address", @@ -12036,9 +13248,9 @@ FROM actor.org_unit AS "aou" ; - Default SELECT ClausesDefault SELECT Clauses + Default SELECT ClausesDefault SELECT Clauses - + The default SELECT clause includes every column that the IDL defines it as a non-virtual field for the class in question. If a column is present in the database but not defined in the IDL, json_query doesn't know about it. In the case of the example shown above, all the columns are defined in the IDL, so they all show up in the default SELECT clause. @@ -12049,28 +13261,28 @@ FROM In practice, the sequencing of columns in the SELECT clause is not significant. The result set is returned to the client program in the form of a data structure, which the client program can navigate however it chooses. - Other LessonsOther Lessons + Other LessonsOther Lessons There are other ways to get a default SELECT clause. However, default SELECT clauses are a distraction at this point, because most of the time you'll specify your own SELECT clause explicitly, as we will discuss later. Let's consider some more important aspects of this simple example -- more important because they apply to more complex queries as well. - • + • The entire JSON query is an object. In this simple case the object includes only one entry, for the FROM clause. Typically you'll also have entries for the SELECT clause and the WHERE clause, and possibly for HAVING, ORDER BY, LIMIT, or OFFSET clauses. There is no separate entry for a GROUP BY clause, which you can specify by other means. - • + • Although all the other entries are optional, you must include an entry for the FROM clause. You cannot, for example, do a SELECT USER the way you can in psql. 
- • + • Every column is qualified by an alias for the table. This alias is always the class name for the table, as defined in the IDL. - • + • Every column is aliased with the column name. There is a way to choose a different column alias (not shown here). - The SELECT ClauseThe SELECT Clause + The SELECT ClauseThe SELECT Clause The following variation also produces a default SELECT clause: - + { "from":"aou", "select": { @@ -12079,7 +13291,7 @@ FROM } ...and so does this one: - + { "select": { "aou":null @@ -12091,7 +13303,7 @@ FROM with a key of “select”. The value associated with this key is another JSON object, whose keys are class names. (These two examples also illustrate another point: unlike SQL, a JSON query doesn't care whether the FROM clause or the SELECT clause comes first.) Usually you don't want the default SELECT clause. Here's how to select only some of the columns: - + { "from":"aou", "select": { @@ -12102,7 +13314,7 @@ FROM The value associated with the class name is an array of column names. If you select columns from multiple tables (not shown here), you'll need a separate entry for each table, and a separate column list for each entry. The previous example results in the following SQL: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12110,26 +13322,26 @@ FROM actor.org_unit AS "aou" ; - Fancier SELECT ClausesFancier SELECT Clauses + Fancier SELECT ClausesFancier SELECT Clauses The previous example featured an array of column names. More generally, it featured an array of field specifications, and one kind of field specification is a column name. The other kind is a JSON object, with some combination of the following keys: - • + • “column” -- the column name (required). - • + • “alias” -- used to define a column alias, which otherwise defaults to the column name. - • + • “aggregate” -- takes a value of true or false. Don't worry about this one yet. It concerns the use of GROUP BY clauses, which we will examine later. 
- • + • “transform” -- the name of an SQL function to be called. - • + • “result_field” -- used with "transform"; specifies an output column of a function that returns multiple columns at a time. - • + • “params” -- used with "transform"; provides a list of parameters for the function. They may be strings, numbers, or nulls. This example assigns a different column alias: - + { "from":"aou", "select": { @@ -12149,7 +13361,7 @@ FROM In this case, changing the column alias doesn't accomplish much. But if we were joining to the actor.org_unit_type table, which also has a "name" column, we could use different aliases to distinguish them. The following example uses a function to raise a column to upper case: - + { "from":"aou", "select": { @@ -12167,7 +13379,7 @@ FROM actor.org_unit AS "aou" ; Here we take a substring of the name, using the params element to pass parameters: - + { "from":"aou", "select": { @@ -12191,7 +13403,7 @@ FROM which is always the first parameter. They are always passed as strings, i.e. enclosed in quotes, even if the JSON expresses them as numbers. PostgreSQL will ordinarily coerce them to the right type. However if the function name is overloaded to accept different types, PostgreSQL may invoke a function other than the one intended. Finally we call a fictitious function "frobozz" that returns multiple columns, where we want only one of them: - + { "from":"aou", "select": { @@ -12214,7 +13426,7 @@ FROM The frobozz function doesn't actually exist, but json_query doesn't know that. The query won't fail until json_query tries to execute it in the database. - Things You Can't DoThings You Can't Do + Things You Can't DoThings You Can't Do You can do some things in a SELECT clause with raw SQL (with psql, for example) that you can't do with a JSON query. Some of them matter and some of them don't. 
When you do a JOIN, you can't arrange the selected columns in any arbitrary sequence, because all of the columns from a given table must be grouped together. @@ -12224,7 +13436,7 @@ FROM to them (and it has to be the first parameter). You can't use a CASE expression. Instead, the client application can do the equivalent branching for itself. You can't select a subquery. In raw SQL you can do something like the following: - + SELECT id, name, @@ -12239,10 +13451,10 @@ FROM This contrived example is not very realistic. Normally you would use a JOIN in this case, and that's what you should do in a JSON query. Other cases may not be so easy to solve. - The WHERE ClauseThe WHERE Clause + The WHERE ClauseThe WHERE Clause Most queries need a WHERE clause, as in this simple example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12256,7 +13468,7 @@ FROM In this case, we use a special shortcut for expressing an equality condition. The column name is on the left of the colon, and the value to which we are equating it is on the right. Here's the resulting SQL: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12269,10 +13481,10 @@ WHERE If you want to compare a column to NULL, put “null” (without quotation marks) to the right of the colon instead of a literal value. The resulting SQL will include “IS NULL” instead of an equals sign. - Other Kinds of ComparisonsOther Kinds of Comparisons + Other Kinds of ComparisonsOther Kinds of Comparisons Here's the same query (which generates the same SQL) without the special shortcut: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12284,7 +13496,7 @@ WHERE We still have an entry whose key is the column name, but this time the associated value is another JSON object. It must contain exactly one entry, with the comparison operator on the left of the colon, and the value to be compared on the right. The same syntax works for other kinds of comparison operators. 
For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12294,7 +13506,7 @@ WHERE } ...turns into: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12305,7 +13517,7 @@ WHERE The condition '“=”:null' turns into IS NULL. Any other operator used with “null” turns into IS NOT NULL. You can use most of the comparison operators recognized by PostgreSQL: - + = <> != < > <= >= ~ ~* !~ !~* @@ -12314,13 +13526,13 @@ WHERE The only ones you can't use are “is distinct from” and “is not distinct from”. - Custom ComparisonsCustom Comparisons + Custom ComparisonsCustom Comparisons Here's a dirty little secret: json_query doesn't really pay much attention to the operator you supply. It merely checks to make sure that the operator doesn't contain any semicolons or white space, in order to prevent certain kinds of SQL injection. It also allows "similar to" as a special exception. As a result, you can slip an operator of your own devising into the SQL, so long as it doesn't contain any semicolons or white space, and doesn't create invalid syntax. Here's a contrived and rather silly example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12330,7 +13542,7 @@ WHERE } ...which results in the following SQL: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12341,10 +13553,10 @@ WHERE It's hard to come up with a realistic case where this hack would be useful, but it could happen. - Comparing One Column to AnotherComparing One Column to Another + Comparing One Column to AnotherComparing One Column to Another Here's how to put another column on the right hand side of a comparison: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12356,7 +13568,7 @@ WHERE This syntax is similar to the previous examples, except that instead of comparing to a literal value, we compare to an object. This object has only a single entry, whose key is a table alias preceded by a leading plus sign. The associated value is the name of the column. 
Here's the resulting SQL: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12372,10 +13584,10 @@ WHERE The leading plus sign, combined with a table alias, can be used in other situations to designate the table to which a column belongs. We shall defer a discussion of this usage to the section on joins. - Testing Boolean ColumnsTesting Boolean Columns + Testing Boolean ColumnsTesting Boolean Columns In SQL, there are several ways to test a boolean column such as actor.org_unit.opac_visible. The most obvious way is to compare it to true or false: - + SELECT id FROM @@ -12385,7 +13597,7 @@ WHERE In a JSON query this approach doesn't work. If you try it, the "= true" test will turn into IS NULL. Don't do that. Instead, use a leading plus sign, as described in the preceding section, to treat the boolean column as a stand-alone condition: - + { "from":"aou", "select": { "aou":[ "id" ] }, @@ -12395,7 +13607,7 @@ WHERE } Result: - + SELECT "aou".id AS "id" FROM @@ -12404,7 +13616,7 @@ WHERE "aou".opac_visible ; If you need to test for falsity, then write a test for truth and negate it with the "-not" operator. We will discuss the "-not" operator later, but here's a preview: - + { "from":"aou", "select": { "aou":[ "id" ] }, @@ -12423,7 +13635,7 @@ WHERE NOT ( "aou".opac_visible ); You can also compare a boolean column directly to a more complex condition: - + { "from":"aou", "select": { "aou":[ "id" ] }, @@ -12435,7 +13647,7 @@ WHERE } Here we compare a boolean column, not to a literal value, but to a boolean expression. The resulting SQL looks a little goofy, but it works: - + SELECT "aou".id AS "id" FROM @@ -12448,10 +13660,10 @@ WHERE In this case we compare the boolean column to a single simple condition. However you can include additional complications -- multiple conditions, IN lists, BETWEEN clauses, and other features as described below. 
- Multiple ConditionsMultiple Conditions + Multiple ConditionsMultiple Conditions If you need multiple conditions, just add them to the "where" object, separated by commas: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12462,7 +13674,7 @@ WHERE } The generated SQL connects the conditions with AND: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12474,10 +13686,10 @@ WHERE Later we will see how to use OR instead of AND. - Using ArraysUsing Arrays + Using ArraysUsing Arrays Here's a puzzler. Suppose you need two conditions for the same column. How do you code them in the same WHERE clause? For example, suppose you want something like this: - + SELECT id, name @@ -12488,7 +13700,7 @@ WHERE AND parent_ou <> 7; You might try a WHERE clause like this: - + "where": { "parent_ou":{ ">":3 }, "parent_ou":{ "<>":7 } @@ -12496,7 +13708,7 @@ WHERE Nope. Won't work. According to JSON rules, two entries in the same object can't have the same key. After slapping yourself in the forehead, you try something a little smarter: - + "where": { "parent_ou": { ">":3, @@ -12506,7 +13718,7 @@ WHERE Nice try, but that doesn't work either. Maybe it ought to work -- at least it's legal JSON -- but, no. Here's what works: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12517,7 +13729,7 @@ WHERE } We wrapped the two conditions into two separate JSON objects, and then wrapped those objects together into a JSON array. The resulting SQL looks like this: - + SELECT "aou".id AS "id", "aou".name AS "name @@ -12530,7 +13742,7 @@ AND That's not quite what we were hoping for, because the extra parentheses are so ugly. But they're harmless. This will do. 
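The duplicate-key pitfall described above is easy to demonstrate: most JSON parsers (and Python dict literals, used here purely as a stand-in) silently keep only the last entry for a repeated key, which is why the two conditions must be wrapped in separate objects inside an array.

```python
import json

# Duplicate keys in a JSON object are not additive; the ">" test is lost.
broken = {"parent_ou": {">": 3}, "parent_ou": {"<>": 7}}
assert broken == {"parent_ou": {"<>": 7}}

# The same thing happens when parsing literal JSON text.
parsed = json.loads('{ "parent_ou": {">": 3}, "parent_ou": {"<>": 7} }')
assert parsed == {"parent_ou": {"<>": 7}}

# Wrapping each condition in its own object inside an array keeps both tests;
# json_query ANDs them together (with the harmless extra parentheses).
working = {
    "from": "aou",
    "select": {"aou": ["id", "name"]},
    "where": [
        {"parent_ou": {">": 3}},
        {"parent_ou": {"<>": 7}},
    ],
}
assert len(working["where"]) == 2
```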
If you're in the mood, you can use arrays to as many parentheses as you like, even if there is only one condition inside: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12543,7 +13755,7 @@ AND } ...yields: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12553,10 +13765,10 @@ WHERE ( ( ( ( ( ( "aou".parent_ou > 3 ) ) ) ) ) ); - How to ORHow to OR + How to ORHow to OR By default, json_query combines conditions with AND. When you need OR, here's how to do it: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12571,7 +13783,7 @@ WHERE We use “-or” as the key, with the conditions to be ORed in an associated object. The leading minus sign is there to make sure that the operator isn't confused with a column name. Later we'll see some other operators with leading minus signs. In a couple of spots we even use plus signs. Here are the results from the above example: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12585,7 +13797,7 @@ WHERE The conditions paired with “-or” are linked by OR and enclosed in parentheses. Here's how to do the same thing using an array, except that it produces an extra layer of parentheses: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12610,17 +13822,17 @@ WHERE It's possible, though not very useful, to have only a single condition subject to the “-or” operator. In that case, the condition appears by itself, since there's nothing to OR it to. This trick is another way to add an extraneous layer of parentheses. - Another way to ANDAnother way to AND + Another way to ANDAnother way to AND You can also use the “-and” operator. It works just like “-or”, except that it combines conditions with AND instead of OR. Since AND is the default, we don't usually need a separate operator for it, but it's available. In rare cases, nothing else will do -- you can't include two conditions in the same list because of the duplicate key problem, but you can't combine them with arrays either. 
In particular, you might need to combine them within an expression that you're comparing to a boolean column (see the subsection above on Testing Boolean Columns). - Negation with NOTNegation with NOT + Negation with NOTNegation with NOT The “-not” operator negates a condition or set of conditions. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12649,11 +13861,11 @@ WHERE In most cases you can achieve the same result by other means. However the “-not” operator is the only way to represent NOT BETWEEN (to be discussed later). - EXISTS with SubqueriesEXISTS with Subqueries + EXISTS with SubqueriesEXISTS with Subqueries Two other operators carry a leading minus sign: “-exists” and its negation “-not-exists”. These operators apply to subqueries, which have the same format as a full query. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12684,7 +13896,7 @@ EXISTS This kind of subquery is of limited use, because its WHERE clause doesn't have anything to do with the main query. It just shuts down the main query altogether if it isn't satisfied. More typical is a correlated subquery, whose WHERE clause refers to a row from the main query. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12700,7 +13912,7 @@ EXISTS } Note the use of “+aou” to qualify the id column in the inner WHERE clause. - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12716,10 +13928,10 @@ WHERE This latter example illustrates the syntax, but in practice, it would probably be more natural to use an IN clause with a subquery (to be discussed later). - BETWEEN ClausesBETWEEN Clauses + BETWEEN ClausesBETWEEN Clauses Here's how to express a BETWEEN clause: - + { "from":"aou", "select": { "aou":[ "id" ] }, @@ -12733,7 +13945,7 @@ WHERE The range bounds must be either numbers or string literals. Although SQL allows them to be null, a null doesn't make sense in this context, because a null never matches anything. 
Consequently json_query doesn't allow them. The resulting SQL is just what you would expect: - + SELECT "aou".id AS "id" FROM @@ -12742,10 +13954,10 @@ WHERE parent_ou BETWEEN '3' AND '7'; - IN and NOT IN ListsIN and NOT IN Lists + IN and NOT IN ListsIN and NOT IN Lists There are two ways to code an IN list. One way is simply to include the list of values in an array: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12756,7 +13968,7 @@ WHERE As with a BETWEEN clause, the values in the array must be numbers or string literals. Nulls aren't allowed. Here's the resulting SQL, which again is just what you would expect: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12766,7 +13978,7 @@ WHERE "aou".parent_ou IN (3, 5, 7); The other way is similar to the syntax shown above for a BETWEEN clause, except that the array may include any non-zero number of values: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12779,11 +13991,11 @@ WHERE For a NOT IN list, you can use the latter format, using the “not in” operator instead of “in”. Alternatively, you can use either format together with the “-not” operator. - IN and NOT IN Clauses with SubqueriesIN and NOT IN Clauses with Subqueries + IN and NOT IN Clauses with SubqueriesIN and NOT IN Clauses with Subqueries For an IN clause with a subquery, the syntax is similar to the second of the two formats for an IN list (see the previous subsection). The "in" or "not in" operator is paired, not with an array of values, but with an object representing the subquery. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12799,7 +14011,7 @@ WHERE } The results: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12819,10 +14031,10 @@ WHERE In SQL the subquery may select multiple columns, but in a JSON query it can select only a single column. For a NOT IN clause with a subquery, use the “not in” operator instead of “in”. 
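The IN-with-subquery format above can be assembled programmatically. A minimal sketch, assuming the aou/aout classes from the document's examples; the builder function itself is hypothetical, but the nested shape (an "in" operator paired with a full query object selecting a single column) follows the text.

```python
# Illustrative builder for an IN clause whose right side is a subquery.
# The subquery has the same format as a full query, but may select only
# a single column (a json_query restriction noted in the text).
def org_units_with_type_matching(type_condition):
    subquery = {
        "from": "aout",
        "select": {"aout": ["id"]},  # exactly one selected column
        "where": type_condition,
    }
    return {
        "from": "aou",
        "select": {"aou": ["id", "name"]},
        "where": {"ou_type": {"in": subquery}},
    }

q = org_units_with_type_matching({"depth": 2})
```

For a NOT IN clause, the same structure applies with the "not in" operator as the key in place of "in".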
- Comparing to a FunctionComparing to a Function + Comparing to a FunctionComparing to a Function Here's how to compare a column to a function call: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12833,7 +14045,7 @@ WHERE A comparison operator (“>” in this case) is paired with an array. The first entry in the array must be a string giving the name of the function. The remaining parameters, if any, are the parameters. They may be strings, numbers, or nulls. The resulting SQL for this example: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12845,12 +14057,12 @@ WHERE All parameters are passed as quoted strings -- even if, as in this case, they are really numbers. This syntax is somewhat limited in that the function parameters must be constants (hence the use of a silly example). - Putting a Function Call on the LeftPutting a Function Call on the Left + Putting a Function Call on the LeftPutting a Function Call on the Left In the discussion of the SELECT clause, we saw how you could transform the value of a selected column by passing it to a function. In the WHERE clause, you can use similar syntax to transform the value of a column before comparing it to something else. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12866,7 +14078,7 @@ WHERE The "transform" entry gives the name of the function that we will use on the left side of the comparison. The "value" entry designates the value on the right side of the comparison. - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12876,7 +14088,7 @@ WHERE upper("aou".name ) = 'CARTER BRANCH' ; As in the SELECT clause, you can pass literal values or nulls to the function as additional parameters by using an array tagged as “params”: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12906,11 +14118,11 @@ WHERE complicated than it needs to be. 
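The "transform"/"value"/"params" entry described above lends itself to a small helper. This is a sketch under the document's upper() example; the function name `transformed_where` is invented for illustration and is not part of json_query.

```python
# Illustrative builder for a WHERE entry that passes the column through a
# function on the left side of the comparison. "transform" names the
# function, "value" is the right-hand side, and "params" (optional) supplies
# extra literal arguments appended after the column itself.
def transformed_where(column, func, value, params=None):
    entry = {"transform": func, "value": value}
    if params is not None:
        entry["params"] = params
    return {column: entry}

# Mirrors the document's example: upper("aou".name) = 'CARTER BRANCH'
where = transformed_where("name", "upper", "CARTER BRANCH")
```

A "result_field" entry could be added the same way when the transforming function returns multiple columns and a subcolumn must be picked out.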
- Putting Function Calls on Both SidesPutting Function Calls on Both Sides + Putting Function Calls on Both SidesPutting Function Calls on Both Sides If you want to compare one function call to another, you can use the same syntax shown in the previous subsection -- except that the “value” entry carries an array instead of a literal value. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12933,19 +14145,19 @@ WHERE The format for the right side function is similar to what we saw earlier, in the subsection Comparing to a Function. Note that there are two different formats for defining function calls: - • + • For a function call to the left of the comparison, the function name is tagged as “transform”. The first parameter is always the relevant column name; additional parameters, if any, are in an array tagged as "params". The entry for “result_field”, if present, specifies a subcolumn. - • + • For a function call to the right of the comparison, the function name is the first entry in an array, together with any parameters. There's no way to specify a subcolumn. - Comparing a Function to a ConditionComparing a Function to a Condition + Comparing a Function to a ConditionComparing a Function to a Condition So far we have seen two kinds of data for the “value” tag. A string or number translates to a literal value, and an array translates to a function call. The third possibility is a JSON object, which translates to a condition. For example: - + { "from":"aou", "select": { "aou":[ "id", "name" ] }, @@ -12961,7 +14173,7 @@ WHERE The function tagged as “transform” must return boolean, or else json_query will generate invalid SQL. The function used here, “is_prime”, is fictitious. - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -12977,7 +14189,7 @@ WHERE In the example above we compared the boolean to a simple condition. 
However the expression on the right may include multiple conditions, IN lists, subqueries, and whatever other complications are necessary. - Things You Can't DoThings You Can't Do + Things You Can't DoThings You Can't Do The WHERE clause is subject to some of the same limitations as the SELECT clause. However, in the WHERE clause these limitations are more limiting, because the client program can't compensate by doing some of the work for itself. @@ -12989,13 +14201,13 @@ WHERE As noted earlier: you can't use the comparison operators “is distinct from” or “is not distinct from”. Also as noted earlier: a subquery in an IN clause cannot select more than one column. - JOIN clausesJOIN clauses + JOIN clausesJOIN clauses Until now, our examples have selected from only one table at a time. As a result, the FROM clause has been very simple -- just a single string containing the class name of the relevant table. When the FROM clause joins multiple tables, the corresponding JSON naturally gets more complicated. SQL provides two ways to define a join. One way is to list both tables in the FROM clause, and put the join conditions in the WHERE clause: - + SELECT aou.id, aout.name @@ -13006,7 +14218,7 @@ WHERE aout.id = aou.ou_type; The other way is to use an explicit JOIN clause: - + SELECT aou.id, aout.name @@ -13016,7 +14228,7 @@ FROM ON ( aout.id = aou.ou_type ); JSON queries use only the second of these methods. The following example expresses the same query in JSON: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -13030,7 +14242,7 @@ FROM that information consists entirely of a string containing the class name of the other table. So where is the join condition? It's in the IDL. 
Upon reading the IDL, json_query knows that actor.org_unit has a foreign key pointing to actor.org_unit_type, and builds a join condition accordingly: - + SELECT "aou".id AS "id", "aout".name AS "name" @@ -13040,7 +14252,7 @@ FROM ON ( "aout".id = "aou".ou_type ) ; In this case the core table is the child table, and the joined table is the parent table. We could just as well have written it the other way around: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -13057,14 +14269,14 @@ FROM ON ( "aou".ou_type = "aout".id ) ; - Specifying The Join Columns ExplicitlySpecifying The Join Columns Explicitly + Specifying The Join Columns ExplicitlySpecifying The Join Columns Explicitly While it's convenient to let json_query pick the join columns, it doesn't always work. For example, the actor.org_unit table has four different address ids, for four different kinds of addresses. Each of them is a foreign key to the actor.org_address table. Json_query can't guess which one you want if you don't tell it. (Actually it will try to guess. It will pick the first matching link that it finds in the IDL, which may or may not be the one you want.) Here's how to define exactly which columns you want for the join: - + { "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, "from": { @@ -13081,15 +14293,15 @@ FROM class name, and the associated data is another layer of JSON object containing the attributes of the join. Later we'll encounter other kinds of join attributes. For now, the only attributes that we're looking at are the ones that identify the join columns: “fkey” and “field”. The hard part is remembering which is which: - • + • “fkey” identifies the join column from the left table; - • + • “field” identifies the join column from the right table. When there are only two tables involved, the core table is on the left, and the non-core table is on the right. In more complex queries neither table may be the core table. 
Here is the result of the preceding JSON: - + SELECT "aou".id AS "id", "aoa".street1 AS "street1" @@ -13099,7 +14311,7 @@ FROM ON ( "aoa".id = "aou".holds_address ) ; In this example the child table is on the left and the parent table is on the right. We can swap the tables if we swap the join columns as well: - + { "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, "from": { @@ -13123,11 +14335,11 @@ FROM When you specify both of the join columns, json_query assumes that you know what you're doing. It doesn't check the IDL to confirm that the join makes sense. The burden is on you to avoid absurdities. - Specifying Only One Join ColumnSpecifying Only One Join Column + Specifying Only One Join ColumnSpecifying Only One Join Column We just saw how to specify both ends of a join. It turns out that there's a shortcut -- most of the time you only need to specify one end. Consider the following variation on the previous example: - + { "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, "from": { @@ -13146,11 +14358,11 @@ FROM column of actor.org_address. As noted earlier, there are four different foreign keys from actor.org_unit to actor.org_address, and json_query would have no way to guess which one we wanted. - Joining to Multiple TablesJoining to Multiple Tables + Joining to Multiple TablesJoining to Multiple Tables So far we have joined only two tables at a time. What if we need to join one table to two different tables? Here's an example: - + { "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, "from": { @@ -13168,7 +14380,7 @@ FROM For the second join, to actor.org_address, we have to specify at least the join column in the child table, as discussed earlier. We could also have specified the join column from the parent table, but we don't have to, so we didn't. 
Here is the resulting SQL: - + SELECT "aou".id AS "id", "aout".depth AS "depth", @@ -13183,11 +14395,11 @@ FROM Since there can be only one core table, the outermost object in the FROM clause can have only one entry, whose key is the class name of the core table. The next level has one entry for every table that's joined to the core table. - Nested JoinsNested Joins + Nested JoinsNested Joins Let's look at that last query again. It joins three tables, and the core table is the one in the middle. Can we make one of the end tables the core table instead? Yes, we can: - + { "select": { "aou":[ "id" ], "aout":[ "depth" ], "aoa":[ "street1" ] }, "from": { @@ -13204,7 +14416,7 @@ FROM The “join” attribute introduces another level of join. In this case "aou" is the left table for the nested join, and the right table for the original join. Here are the results: - + SELECT "aou".id AS "id", "aout".depth AS "depth", @@ -13217,11 +14429,11 @@ FROM ON ( "aout".id = "aou".ou_type ) ; - Outer JoinsOuter Joins + Outer JoinsOuter Joins By default, json_query constructs an inner join. If you need an outer join, you can add the join type as an attribute of the join: Yes, we can: - + { "select": { "aou":[ "id" ], "aoa":[ "street1" ] }, "from": { @@ -13235,7 +14447,7 @@ FROM } Here is the resulting SQL for this example: - + SELECT "aou".id AS "id", "aoa".street1 AS "street1" @@ -13245,12 +14457,12 @@ FROM ON ( "aou".mailing_address = "aoa".id ) ; - Referring to Joined Tables in the WHERE ClauseReferring to Joined Tables in the WHERE Clause + Referring to Joined Tables in the WHERE ClauseReferring to Joined Tables in the WHERE Clause In the WHERE clause of the generated SQL, every column name is qualified by a table alias, which is always the corresponding class name. If a column belongs to the core table, this qualification happens by default. If it belongs to a joined table, the JSON must specify what class name to use for an alias. 
For example: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -13263,7 +14475,7 @@ FROM Note the peculiar operator “+aou” -- a plus sign followed by the relevant class name. This operator tells json_query to apply the specified class to the condition that follows. The result: - + SELECT "aou".id AS "id", "aout".name AS "name" @@ -13275,7 +14487,7 @@ WHERE ( "aou".parent_ou = 2 ); The plus-class operator may apply to multiple conditions: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -13306,7 +14518,7 @@ WHERE special gyrations to apply the right table alias. In a more realistic case, however, you might need to apply conditions to both tables. Just swapping the tables wouldn't solve the problem. You can also use a plus-class operator to compare columns from two different tables: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -13332,10 +14544,10 @@ WHERE Please don't expect that query to make any sense. It doesn't. But it illustrates the syntax. - Join FiltersJoin Filters + Join FiltersJoin Filters While the above approach certainly works, the special syntax needed is goofy and awkward. A somewhat cleaner solution is to include a condition in the JOIN clause: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -13359,7 +14571,7 @@ FROM By default, json_query uses AND to combine the “filter” condition with the original join condition. If you need OR, you can use the “filter_op” attribute to say so: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -13388,18 +14600,18 @@ FROM Remember, though, that it all gets combined with the the original join condition with an AND, or with an OR if you so specify. If you're not careful, the result may be a confusing mixture of AND and OR at the same level. - Joining to a SubqueryJoining to a Subquery + Joining to a SubqueryJoining to a Subquery In SQL you can put a subquery in a FROM clause, and select from it as if it were a table. 
A JSON query has no way to do that directly. The IDL, however, can define a class as a subquery instead of as a table. When you SELECT from it, json_query inserts the corresponding subquery into the FROM clause. For example: - + { "select":{ "iatc":[ "id", "dest", "copy_status" ] }, "from": "iatc" } There's nothing special-looking about this JSON, but json_query expands it as follows: - + SELECT "iatc".id AS "id", "iatc".dest AS "dest", @@ -13420,13 +14632,13 @@ FROM The “iatc” class is like a view, except that it's defined in the IDL instead of the database. In this case it provides a way to do a join that would otherwise be impossible through a JSON query, because it joins the same table in two different ways (see the next subsection). - Things You Can't DoThings You Can't Do + Things You Can't DoThings You Can't Do In a JOIN, as with other SQL constructs, there are some things that you can't do with a JSON query. In particular, you can't specify a table alias, because the table alias is always the class name. As a result: - • + • You can't join a table to itself. For example, you can't join actor.org_unit to itself in order to select the name of the parent for every org_unit. - • + • You can't join to the same table in more than one way. For example, you can't join actor.org_unit to actor.org_address through four different foreign keys, to get four kinds of addresses in a single query. @@ -13438,7 +14650,7 @@ FROM condition, or to omit any join condition in order to obtain a Cartesian product. If necessary, you can devise such unconventional joins by combining the normal join conditions with join filters. For example, here's how to get a Cartesian product: - + { "select": { "aou":[ "id" ], "aout":[ "name" ] }, "from": { @@ -13468,11 +14680,11 @@ FROM Yes, it's ugly, but at least you're not likely to do it by accident. 
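The join attributes covered above ("fkey", "field", "type", "filter", "filter_op") can be gathered into one sketch. The builder below is hypothetical and only echoes shapes shown in the text: class and column names come from the document's aou/aoa examples.

```python
# Illustrative assembly of a FROM clause with join attributes.
# "fkey" names the join column from the left table, "field" the one from
# the right table; "type" selects e.g. a left outer join; "filter" adds an
# extra condition, combined with the join condition by AND unless
# "filter_op" says otherwise.
def join_spec(fkey=None, field=None, join_type=None,
              join_filter=None, filter_op=None):
    attrs = {}
    if fkey is not None:
        attrs["fkey"] = fkey
    if field is not None:
        attrs["field"] = field
    if join_type is not None:
        attrs["type"] = join_type
    if join_filter is not None:
        attrs["filter"] = join_filter
    if filter_op is not None:
        attrs["filter_op"] = filter_op
    return attrs

# Left outer join from aou to aoa via the mailing_address foreign key,
# as in the document's outer-join example.
query = {
    "select": {"aou": ["id"], "aoa": ["street1"]},
    "from": {"aou": {"aoa": join_spec(fkey="mailing_address",
                                      join_type="left")}},
}
```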
- Selecting from FunctionsSelecting from Functions + Selecting from FunctionsSelecting from Functions In SQL, you can put a function call in the FROM clause. The function may return multiple columns and multiple rows. Within the query, the function behaves like a table. A JSON query can also select from a function: - + { "from": [ "actor.org_unit_ancestors", 5 ] } @@ -13480,7 +14692,7 @@ FROM The data associated with “from” is an array instead of a string or an object. The first element in the array specifies the name of the function. Subsequent elements, if any, supply the parameters of the function; they must be literal values or nulls. Here is the resulting query: - + SELECT * FROM actor.org_unit_ancestors( '5' ) AS "actor.org_unit_ancestors" ; @@ -13489,10 +14701,10 @@ FROM another function. If you try to supply a SELECT list or a WHERE clause, json_query will ignore it. The generated query will always select every column, via a wild card asterisk, from every row. - The ORDER BY ClauseThe ORDER BY Clause + The ORDER BY ClauseThe ORDER BY Clause In most cases you can encode an ORDER BY clause as either an array or an object. Let's take a simple example and try it both ways. First the array: - + { "select":{ "aou":[ "name" ] }, "from": "aou", @@ -13502,7 +14714,7 @@ FROM } Now the object: - + { "select":{ "aou":[ "name" ] }, "from": "aou", @@ -13512,7 +14724,7 @@ FROM } The results are identical from either version: - + SELECT "aou".name AS "name" FROM @@ -13523,17 +14735,17 @@ ORDER BY The array format is more verbose, but as we shall see, it is also more flexible. It can do anything the object format can do, plus some things that the object format can't do. - ORDER BY as an ArrayORDER BY as an Array + ORDER BY as an ArrayORDER BY as an Array In the array format, each element of the array is an object defining one of the sort fields. 
Each such object must include at least two tags: - • + • The “class” tag provides the name of the class, which must be either the core class or a joined class. - • + • The “field” tag provides the field name, corresponding to one of the columns of the class. If you want to sort by multiple fields, just include a separate object for each field. If you want to sort a field in descending order, add a “direction” tag: - + { "select":{ "aou":[ "name" ] }, "from": "aou", @@ -13555,7 +14767,7 @@ ORDER BY upper("aou".name ); If you need additional parameters for the function, you can use the “params” tag to pass them: - + { "select":{ "aou":[ "name" ] }, "from": "aou", @@ -13570,7 +14782,7 @@ ORDER BY } The additional parameters appear as elements in an array. They may be numbers, strings, or nulls. - + SELECT "aou".name AS "name" FROM @@ -13582,11 +14794,11 @@ ORDER BY If the function returns multiple columns, you can use the “result_field” tag to indicate which one you want (not shown). - ORDER BY as an ObjectORDER BY as an Object + ORDER BY as an ObjectORDER BY as an Object When you encode the ORDER BY clause as an object, the keys of the object are class names. Each class must be either the core class or a joined class. The data for each class can be either an array or another layer of object. Here's an example with one of each: - + { "select":{ "aout":"id", "aou":[ "name" ] }, "from": { "aou":"aout" }, @@ -13603,7 +14815,7 @@ ORDER BY In this example, we use the “direction"” tag to specify that the name field be sorted in descending order. This tag works the same way here as described earlier. If the associated string starts with "D" or "d", the sort will be descending; otherwise it will be ascending. 
Here is the resulting SQL: - + SELECT "aou".name AS "name" FROM @@ -13614,7 +14826,7 @@ ORDER BY "aout".id, "aou".name DESC; - + { "select":{ "aou":[ "name", "id" ] }, "from": "aou", @@ -13634,7 +14846,7 @@ ORDER BY substr("aou".name,'1','8' ); - Things You Can't DoThings You Can't Do + Things You Can't DoThings You Can't Do If you encode the ORDER BY clause as an object, you may encounter a couple of restrictions. Because the key of such an object is the class name, all the fields from a given class must be grouped together. You can't sort by a column from one table, followed by @@ -13644,7 +14856,7 @@ ORDER BY there are situations where it can be useful, provided that the column is passed to a transforming function. For example, you might want a case-insensitive sort, except that for any given letter you want lower case to sort first. For example, you want “diBona” to sort before “Dibona”. Here's a way to do that, coding the ORDER BY clause as an array: - + { "select":{ "au":[ "family_name", "id" ] }, "from": "au", @@ -13664,12 +14876,12 @@ ORDER BY Such a sort is not possible where the ORDER BY clause is coded as an object. - The GROUP BY ClauseThe GROUP BY Clause + The GROUP BY ClauseThe GROUP BY Clause A JSON query has no separate construct to define a GROUP BY clause. Instead, the necessary information is distributed across the SELECT clause. However, the way it works is a bit backwards from what you might expect, so pay attention. Here's an example: - + { "select": { "aou": [ @@ -13682,7 +14894,7 @@ ORDER BY The “transform” tag is there just to give us an excuse to do a GROUP BY. What's important to notice is the “aggregate” tag. Here's the resulting SQL: - + SELECT "aou".parent_ou AS "parent_ou", max("aou".name ) AS "name" @@ -13698,12 +14910,12 @@ GROUP BY it appears only with a value of false, then there is no GROUP BY clause. If you really want to include every field in the GROUP BY clause, don't use “aggregate”. 
Use the “distinct” tag, as described in the next section. - The DISTINCT ClauseThe DISTINCT Clause + The DISTINCT ClauseThe DISTINCT Clause JSON queries don't generate DISTINCT clauses. However, they can generate GROUP BY clauses that include every item from the SELECT clause. The effect is the same as applying DISTINCT to the entire SELECT clause. For example: - + { "select": { "aou": [ @@ -13716,7 +14928,7 @@ GROUP BY } Note the “distinct” entry at the top level of the query object, with a value of “true”. - + SELECT "aou".parent_ou AS "parent_ou", "aou".ou_type AS "ou_type" @@ -13727,12 +14939,12 @@ GROUP BY The generated GROUP BY clause references every column in the SELECT clause by number. - The HAVING ClauseThe HAVING Clause + The HAVING ClauseThe HAVING Clause For a HAVING clause, add a “having” entry at the top level of the query object. For the associated data, you can use all the same syntax that you can use for a WHERE clause. Here's a simple example: - + { "select": { "aou": [ @@ -13756,7 +14968,7 @@ GROUP BY } We use the “aggregate” tag in the SELECT clause to give us a GROUP BY to go with the HAVING. Results: - + SELECT "aou".parent_ou AS "parent_ou", count("aou".id ) AS "id_count" @@ -13770,10 +14982,10 @@ HAVING In raw SQL we could have referred to “count( 1 )”. But since JSON queries cannot encode arbitrary expressions, we applied the count function to a column that cannot be null. - The LIMIT and OFFSET ClausesThe LIMIT and OFFSET Clauses + The LIMIT and OFFSET ClausesThe LIMIT and OFFSET Clauses To add an LIMIT or OFFSET clause, add an entry to the top level of a query object. For example: - + { "select": { "aou": [ "id", "name" ] @@ -13786,7 +14998,7 @@ HAVING The data associated with “offset” and “limit” may be either a number or a string, but if it's a string, it should have a number inside. Result: - + SELECT "aou".id AS "id", "aou".name AS "name" @@ -13799,25 +15011,25 @@ OFFSET 7; - Chapter 25. SuperCatChapter 25. SuperCat + Chapter 28. 
SuperCatChapter 28. SuperCat Report errors in this documentation using Launchpad. - Chapter 25. SuperCat + Chapter 28. SuperCat Report any errors in this documentation using Launchpad. - Chapter 25. SuperCatChapter 25. SuperCat + Chapter 28. SuperCatChapter 28. SuperCat Using SuperCatUsing SuperCat> - + SuperCat allows Evergreen record and information retrieval from a web browser using a based on a number of open web standards and formats. The following record types are supported: - •isbn•metarecord•record - Return a list of ISBNs for related recordsReturn a list of ISBNs for related records + •isbn•metarecord•record + Return a list of ISBNs for related recordsReturn a list of ISBNs for related records - + Similar to the OCLC xISBN service, Evergreen can return a list of related records based on its oISBN algorithm: http://<hostname>/opac/extras/osibn/<ISBN> For example, http://dev.gapines.org/opac/extras/oisbn/0439136350 returns: - + <idlist metarecord="302670"> <isbn record="250060">0790783525</isbn> <isbn record="20717">0736691316</isbn> @@ -13834,13 +15046,13 @@ OFFSET 7; </idlist> - Return recordsReturn records + Return recordsReturn records - + SuperCat can return records and metarecords in many different formats (see the section called “Supported formats” http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<bib-ID> For example, http://dev.gapines.org/opac/extras/supercat/retrieve/mods/record/555 returns: - + <mods:modsCollection version="3.0"> <mods:mods xsi:schemaLocation="http://www.loc.gov/mods/ http://www.loc.gov/standards/mods/mods.xsd"> <titleInfo> @@ -13880,9 +15092,9 @@ OFFSET 7; </mods:modsCollection> - Return a feed of recently edited or created recordsReturn a feed of recently edited or created records + Return a feed of recently edited or created recordsReturn a feed of recently edited or created records - + SuperCat can return feeds of recently edited or created authority and bibliographic records: 
http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/[authority|biblio]/[import|edit]/<limit>/<date> Up to limit records imported or edited after the supplied date will be returned. If you do not supply a date, the most recent limit records will be returned. @@ -13890,12 +15102,12 @@ OFFSET 7; Feed-type can be one of atom, html, htmlholdings, marcxml, mods, mods3, or rss2. For example, http://dev.gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01 - Browse recordsBrowse records + Browse recordsBrowse records SuperCat can browse records in HTML and XML formats: http://<hostname>/opac/extras/supercat/browse/<format>/call_number/<org_unit>/<call_number> For example, http://dev.gapines.org/opac/extras/browse/xml/call_number/-/GV returns: - + <hold:volumes xmlns:hold='http://open-ils.org/spec/holdings/v1'> <hold:volume id="tag:open-ils.org,2008:asset-call_number/130607" lib="FRRLS-FA" label="GUTCHEON BETH"> <act:owning_lib id="tag:open-ils.org,2008:actor-org_unit/111" name="Fayette County Public Library"/> @@ -13920,11 +15132,11 @@ OFFSET 7; Supported formatsSupported formats - + SuperCat maintains a list of supported formats for records and metarecords: http://<hostname>/opac/extras/supercat/formats/<record-type> For example, http://dev.gapines.org/opac/extras/supercat/formats/record returns: - + <formats> <format> <name>opac</name> @@ -13940,17 +15152,17 @@ OFFSET 7; Adding new SuperCat FormatsAdding new SuperCat Formats - + Adding SuperCat formats requires experience editing XSL files and familiarity with XML and Perl. SuperCat web services are based on the OpenSRF service, open-ils.supercat. Developers are able to add new formats by adding the XSL stylesheet for the format. By default, the location of the stylesheets is /openils/var/xsl/. You must also add the feed to the Perl modules openils/lib/perl5/OpenILS/WWW/SuperCat/feed.pm and openils/lib/perl5/OpenILS/WWW/SuperCat.pm. 
An Evergreen restart is required for the feed to be activated. - Use an existing xsl stylesheet and Perl module entry as a template for your new format. + Use an existing xsl stylesheet and Perl module entry as a template for your new format. Customizing SuperCat FormatsCustomizing SuperCat Formats - + Editing SuperCat formats requires experience editing XSL files and familiarity with XML. It is possible to customize existing SuperCat formats using XSL stylesheets. You are able to change the content to be displayed and the design of the pages. In order to change the display of a specific format, edit the corresponding XSL file(s) for the particular format. The default location for the XSL stylesheets is @@ -13967,13 +15179,13 @@ OFFSET 7; Report errors in this documentation using Launchpad. Part VIII. Appendices Report any errors in this documentation using Launchpad. - Part VIII. AppendicesTable of Contents26. Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema query Schema reporter Schema search Schema serial Schema staging Schema stats Schema vandelay A. About this Documentation About the Documentation Interest Group (DIG) How to Participate B. Getting More Information Glossary Index + Part VIII. AppendicesTable of Contents29. Database Schema Schema acq Schema action Schema action_trigger Schema actor Schema asset Schema auditor Schema authority Schema biblio Schema booking Schema config Schema container Schema extend_reporter Schema metabib Schema money Schema offline Schema permission Schema public Schema query Schema reporter Schema search Schema serial Schema staging Schema stats Schema vandelay A. About this Documentation About the Documentation Interest Group (DIG) How to Participate B. 
Getting More Information Glossary Index - Chapter 26. Database SchemaChapter 26. Database Schema + Chapter 29. Database SchemaChapter 29. Database Schema Report errors in this documentation using Launchpad. - Chapter 26. Database Schema + Chapter 29. Database Schema Report any errors in this documentation using Launchpad. - Chapter 26. Database SchemaChapter 26. Database SchemaThis is the schema for the Evergreen database.Schema acqSchema acqacq_lineitem_historyacq_lineitem_historyFieldData TypeConstraints and Referencesaudit_idbigint + Chapter 29. Database SchemaChapter 29. Database SchemaThis is the schema for the Evergreen database.Schema acqSchema acqacq_lineitem_historyacq_lineitem_historyFieldData TypeConstraints and Referencesaudit_idbigint PRIMARY KEY @@ -14105,7 +15317,7 @@ OFFSET 7; - acq_lineitem_lifecycleacq_lineitem_lifecycleFieldData TypeConstraints and References?column?bigint + acq_lineitem_lifecycleacq_lineitem_lifecycleFieldData TypeConstraints and References?column?bigint @@ -14210,7 +15422,7 @@ OFFSET 7; - acq_purchase_order_historyacq_purchase_order_historyFieldData TypeConstraints and Referencesaudit_idbigint + acq_purchase_order_historyacq_purchase_order_historyFieldData TypeConstraints and Referencesaudit_idbigint PRIMARY KEY @@ -14328,7 +15540,7 @@ OFFSET 7; - acq_purchase_order_lifecycleacq_purchase_order_lifecycleFieldData TypeConstraints and References?column?bigint + acq_purchase_order_lifecycleacq_purchase_order_lifecycleFieldData TypeConstraints and References?column?bigint @@ -14413,7 +15625,7 @@ OFFSET 7; - all_fund_allocation_totalall_fund_allocation_totalFieldData TypeConstraints and Referencesfundinteger + all_fund_allocation_totalall_fund_allocation_totalFieldData TypeConstraints and Referencesfundinteger @@ -14428,7 +15640,7 @@ OFFSET 7; - all_fund_combined_balanceall_fund_combined_balanceFieldData TypeConstraints and Referencesfundinteger + all_fund_combined_balanceall_fund_combined_balanceFieldData TypeConstraints and 
Referencesfundinteger @@ -14443,7 +15655,7 @@ OFFSET 7; - all_fund_encumbrance_totalall_fund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger + all_fund_encumbrance_totalall_fund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger @@ -14458,7 +15670,7 @@ OFFSET 7; - all_fund_spent_balanceall_fund_spent_balanceFieldData TypeConstraints and Referencesfundinteger + all_fund_spent_balanceall_fund_spent_balanceFieldData TypeConstraints and Referencesfundinteger @@ -14473,7 +15685,7 @@ OFFSET 7; - all_fund_spent_totalall_fund_spent_totalFieldData TypeConstraints and Referencesfundinteger + all_fund_spent_totalall_fund_spent_totalFieldData TypeConstraints and Referencesfundinteger @@ -14488,7 +15700,7 @@ OFFSET 7; - cancel_reasoncancel_reasonFieldData TypeConstraints and Referencesidserial + cancel_reasoncancel_reasonFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -14559,12 +15771,12 @@ OFFSET 7; Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem•acq.lineitem_detail•acq.purchase_order•acq.user_request + •acq.lineitem•acq.lineitem_detail•acq.purchase_order•acq.user_request - claimclaimFieldData TypeConstraints and Referencesidserial + claimclaimFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -14610,12 +15822,12 @@ OFFSET 7; Tables referencing acq.claim_event via Foreign Key Constraints - •acq.claim_event + •acq.claim_event - claim_eventclaim_eventFieldData TypeConstraints and Referencesidbigserial + claim_eventclaim_eventFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -14688,7 +15900,7 @@ OFFSET 7; - claim_event_typeclaim_event_typeFieldData TypeConstraints and Referencesidserial + claim_event_typeclaim_event_typeFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -14759,12 +15971,12 @@ OFFSET 7; Tables referencing acq.claim_event via Foreign Key Constraints - •acq.claim_event•acq.claim_policy_action•acq.serial_claim_event + 
•acq.claim_event•acq.claim_policy_action•acq.serial_claim_event - claim_policyclaim_policyFieldData TypeConstraints and Referencesidserial + claim_policyclaim_policyFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -14826,12 +16038,12 @@ OFFSET 7; Tables referencing acq.claim_policy_action via Foreign Key Constraints - •acq.claim_policy_action•acq.lineitem•acq.provider + •acq.claim_policy_action•acq.lineitem•acq.provider - claim_policy_actionclaim_policy_actionFieldData TypeConstraints and Referencesidserial + claim_policy_actionclaim_policy_actionFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -14899,7 +16111,7 @@ OFFSET 7; - claim_typeclaim_typeFieldData TypeConstraints and Referencesidserial + claim_typeclaim_typeFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -14961,12 +16173,12 @@ OFFSET 7; Tables referencing acq.claim via Foreign Key Constraints - •acq.claim•acq.serial_claim + •acq.claim•acq.serial_claim - currency_typecurrency_typeFieldData TypeConstraints and Referencescodetext + currency_typecurrency_typeFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -14989,12 +16201,12 @@ OFFSET 7; Tables referencing acq.exchange_rate via Foreign Key Constraints - •acq.exchange_rate•acq.fund•acq.fund_debit•acq.funding_source•acq.provider + •acq.exchange_rate•acq.fund•acq.fund_debit•acq.funding_source•acq.provider - debit_attributiondebit_attributionFieldData TypeConstraints and Referencesidinteger + debit_attributiondebit_attributionFieldData TypeConstraints and Referencesidinteger PRIMARY KEY @@ -15049,7 +16261,7 @@ OFFSET 7; - distribution_formuladistribution_formulaFieldData TypeConstraints and Referencesidserial + distribution_formuladistribution_formulaFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -15111,12 +16323,12 @@ OFFSET 7; Tables referencing acq.distribution_formula_application via Foreign Key Constraints - •acq.distribution_formula_application•acq.distribution_formula_entry + 
•acq.distribution_formula_application•acq.distribution_formula_entry - distribution_formula_applicationdistribution_formula_applicationFieldData TypeConstraints and Referencesidbigserial + distribution_formula_applicationdistribution_formula_applicationFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -15184,7 +16396,7 @@ OFFSET 7; - distribution_formula_entrydistribution_formula_entryFieldData TypeConstraints and Referencesidserial + distribution_formula_entrydistribution_formula_entryFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -15267,13 +16479,13 @@ OFFSET 7; - Constraints on distribution_formula_entryacqdfe_must_be_somewhereCHECK (((owning_lib IS NOT NULL) OR (location IS NOT NULL))) + Constraints on distribution_formula_entryacqdfe_must_be_somewhereCHECK (((owning_lib IS NOT NULL) OR (location IS NOT NULL))) - edi_accountedi_accountFieldData TypeConstraints and Referencesidinteger + edi_accountedi_accountFieldData TypeConstraints and Referencesidinteger PRIMARY KEY @@ -15368,12 +16580,12 @@ OFFSET 7; Tables referencing acq.edi_message via Foreign Key Constraints - •acq.edi_message•acq.provider + •acq.edi_message•acq.provider - edi_messageedi_messageFieldData TypeConstraints and Referencesidserial + edi_messageedi_messageFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -15472,13 +16684,13 @@ OFFSET 7; - Constraints on edi_messagestatus_valueCHECK ((status = ANY (ARRAY['new'::text, 'translated'::text, 'trans_error'::text, 'processed'::text, 'proc_error'::text, 'delete_error'::text, 'retry'::text, 'complete'::text])))valid_message_typeCHECK ((message_type = ANY (ARRAY['ORDERS'::text, 'ORDRSP'::text, 'INVOIC'::text, 'OSTENQ'::text, 'OSTRPT'::text]))) + Constraints on edi_messagestatus_valueCHECK ((status = ANY (ARRAY['new'::text, 'translated'::text, 'trans_error'::text, 'processed'::text, 'proc_error'::text, 'delete_error'::text, 'retry'::text, 'complete'::text])))valid_message_typeCHECK ((message_type = ANY 
(ARRAY['ORDERS'::text, 'ORDRSP'::text, 'INVOIC'::text, 'OSTENQ'::text, 'OSTRPT'::text]))) - exchange_rateexchange_rateFieldData TypeConstraints and Referencesidserial + exchange_rateexchange_rateFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -15546,7 +16758,7 @@ OFFSET 7; - fiscal_calendarfiscal_calendarFieldData TypeConstraints and Referencesidserial + fiscal_calendarfiscal_calendarFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -15571,12 +16783,12 @@ OFFSET 7; Tables referencing acq.fiscal_year via Foreign Key Constraints - •acq.fiscal_year•actor.org_unit + •acq.fiscal_year•actor.org_unit - fiscal_yearfiscal_yearFieldData TypeConstraints and Referencesidserial + fiscal_yearfiscal_yearFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -15660,7 +16872,7 @@ OFFSET 7; - fundfundFieldData TypeConstraints and Referencesidserial + fundfundFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -15809,19 +17021,19 @@ OFFSET 7; - Constraints on fundacq_fund_rollover_ implies_propagateCHECK ((propagate OR (NOT rollover))) + Constraints on fundacq_fund_rollover_ implies_propagateCHECK ((propagate OR (NOT rollover))) Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.fund_debit•acq.fund_tag_map•acq.fund_transfer•acq.invoice_item•acq.lineitem_detail•acq.po_item + •acq.fund_allocation•acq.fund_debit•acq.fund_tag_map•acq.fund_transfer•acq.invoice_item•acq.lineitem_detail•acq.po_item - fund_allocationfund_allocationFieldData TypeConstraints and Referencesidserial + fund_allocationfund_allocationFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -15901,7 +17113,7 @@ OFFSET 7; - fund_allocation_percentfund_allocation_percentFieldData TypeConstraints and Referencesidserial + fund_allocation_percentfund_allocation_percentFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16008,13 +17220,13 @@ OFFSET 7; - Constraints on 
fund_allocation_percentpercentage_rangeCHECK (((percent >= (0)::numeric) AND (percent <= (100)::numeric))) + Constraints on fund_allocation_percentpercentage_rangeCHECK (((percent >= (0)::numeric) AND (percent <= (100)::numeric))) - fund_allocation_totalfund_allocation_totalFieldData TypeConstraints and Referencesfundinteger + fund_allocation_totalfund_allocation_totalFieldData TypeConstraints and Referencesfundinteger @@ -16029,7 +17241,7 @@ OFFSET 7; - fund_combined_balancefund_combined_balanceFieldData TypeConstraints and Referencesfundinteger + fund_combined_balancefund_combined_balanceFieldData TypeConstraints and Referencesfundinteger @@ -16044,7 +17256,7 @@ OFFSET 7; - fund_debitfund_debitFieldData TypeConstraints and Referencesidserial + fund_debitfund_debitFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16141,12 +17353,12 @@ OFFSET 7; Tables referencing acq.debit_attribution via Foreign Key Constraints - •acq.debit_attribution•acq.invoice_item•acq.lineitem_detail•acq.po_item + •acq.debit_attribution•acq.invoice_item•acq.lineitem_detail•acq.po_item - fund_debit_totalfund_debit_totalFieldData TypeConstraints and Referencesfundinteger + fund_debit_totalfund_debit_totalFieldData TypeConstraints and Referencesfundinteger @@ -16166,7 +17378,7 @@ OFFSET 7; - fund_encumbrance_totalfund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger + fund_encumbrance_totalfund_encumbrance_totalFieldData TypeConstraints and Referencesfundinteger @@ -16181,7 +17393,7 @@ OFFSET 7; - fund_spent_balancefund_spent_balanceFieldData TypeConstraints and Referencesfundinteger + fund_spent_balancefund_spent_balanceFieldData TypeConstraints and Referencesfundinteger @@ -16196,7 +17408,7 @@ OFFSET 7; - fund_spent_totalfund_spent_totalFieldData TypeConstraints and Referencesfundinteger + fund_spent_totalfund_spent_totalFieldData TypeConstraints and Referencesfundinteger @@ -16211,7 +17423,7 @@ OFFSET 7; - fund_tagfund_tagFieldData TypeConstraints and 
Referencesidserial + fund_tagfund_tagFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16266,12 +17478,12 @@ OFFSET 7; Tables referencing acq.fund_tag_map via Foreign Key Constraints - •acq.fund_tag_map + •acq.fund_tag_map - fund_tag_mapfund_tag_mapFieldData TypeConstraints and Referencesidserial + fund_tag_mapfund_tag_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16330,7 +17542,7 @@ OFFSET 7; - fund_transferfund_transferFieldData TypeConstraints and Referencesidserial + fund_transferfund_transferFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16427,7 +17639,7 @@ OFFSET 7; - funding_sourcefunding_sourceFieldData TypeConstraints and Referencesidserial + funding_sourcefunding_sourceFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16508,12 +17720,12 @@ OFFSET 7; Tables referencing acq.fund_allocation via Foreign Key Constraints - •acq.fund_allocation•acq.fund_allocation_percent•acq.funding_source_credit + •acq.fund_allocation•acq.fund_allocation_percent•acq.funding_source_credit - funding_source_allocation_totalfunding_source_allocation_totalFieldData TypeConstraints and Referencesfunding_sourceinteger + funding_source_allocation_totalfunding_source_allocation_totalFieldData TypeConstraints and Referencesfunding_sourceinteger @@ -16528,7 +17740,7 @@ OFFSET 7; - funding_source_balancefunding_source_balanceFieldData TypeConstraints and Referencesfunding_sourceinteger + funding_source_balancefunding_source_balanceFieldData TypeConstraints and Referencesfunding_sourceinteger @@ -16543,7 +17755,7 @@ OFFSET 7; - funding_source_creditfunding_source_creditFieldData TypeConstraints and Referencesidserial + funding_source_creditfunding_source_creditFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16601,12 +17813,12 @@ OFFSET 7; Tables referencing acq.debit_attribution via Foreign Key Constraints - •acq.debit_attribution•acq.fund_transfer + •acq.debit_attribution•acq.fund_transfer - 
funding_source_credit_totalfunding_source_credit_totalFieldData TypeConstraints and Referencesfunding_sourceinteger + funding_source_credit_totalfunding_source_credit_totalFieldData TypeConstraints and Referencesfunding_sourceinteger @@ -16621,7 +17833,7 @@ OFFSET 7; - invoiceinvoiceFieldData TypeConstraints and Referencesidserial + invoiceinvoiceFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16765,12 +17977,12 @@ OFFSET 7; Tables referencing acq.invoice_entry via Foreign Key Constraints - •acq.invoice_entry•acq.invoice_item + •acq.invoice_entry•acq.invoice_item - invoice_entryinvoice_entryFieldData TypeConstraints and Referencesidserial + invoice_entryinvoice_entryFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16862,7 +18074,7 @@ OFFSET 7; - invoice_iteminvoice_itemFieldData TypeConstraints and Referencesidserial + invoice_iteminvoice_itemFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -16990,7 +18202,7 @@ OFFSET 7; - invoice_item_typeinvoice_item_typeFieldData TypeConstraints and Referencescodetext + invoice_item_typeinvoice_item_typeFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -17024,12 +18236,12 @@ OFFSET 7; Tables referencing acq.invoice_item via Foreign Key Constraints - •acq.invoice_item•acq.po_item + •acq.invoice_item•acq.po_item - invoice_methodinvoice_methodFieldData TypeConstraints and Referencescodetext + invoice_methodinvoice_methodFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -17054,12 +18266,12 @@ OFFSET 7; Tables referencing acq.invoice via Foreign Key Constraints - •acq.invoice + •acq.invoice - invoice_payment_methodinvoice_payment_methodFieldData TypeConstraints and Referencescodetext + invoice_payment_methodinvoice_payment_methodFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -17084,12 +18296,12 @@ OFFSET 7; Tables referencing acq.invoice via Foreign Key Constraints - •acq.invoice + •acq.invoice - lineitemlineitemFieldData TypeConstraints and 
Referencesidbigserial + lineitemlineitemFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -17267,19 +18479,19 @@ OFFSET 7; - Constraints on lineitempicklist_or_poCHECK (((picklist IS NOT NULL) OR (purchase_order IS NOT NULL))) + Constraints on lineitempicklist_or_poCHECK (((picklist IS NOT NULL) OR (purchase_order IS NOT NULL))) Tables referencing acq.distribution_formula_application via Foreign Key Constraints - •acq.distribution_formula_application•acq.invoice_entry•acq.lineitem_attr•acq.lineitem_detail•acq.lineitem_note•acq.user_request + •acq.distribution_formula_application•acq.invoice_entry•acq.lineitem_attr•acq.lineitem_detail•acq.lineitem_note•acq.user_request - lineitem_alert_textlineitem_alert_textFieldData TypeConstraints and Referencesidserial + lineitem_alert_textlineitem_alert_textFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -17339,12 +18551,12 @@ OFFSET 7; Tables referencing acq.lineitem_note via Foreign Key Constraints - •acq.lineitem_note + •acq.lineitem_note - lineitem_attrlineitem_attrFieldData TypeConstraints and Referencesidbigserial + lineitem_attrlineitem_attrFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -17403,7 +18615,7 @@ OFFSET 7; - lineitem_attr_definitionlineitem_attr_definitionFieldData TypeConstraints and Referencesidbigserial + lineitem_attr_definitionlineitem_attr_definitionFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -17452,7 +18664,7 @@ OFFSET 7; - lineitem_detaillineitem_detailFieldData TypeConstraints and Referencesidbigserial + lineitem_detaillineitem_detailFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -17586,12 +18798,12 @@ OFFSET 7; Tables referencing acq.claim via Foreign Key Constraints - •acq.claim + •acq.claim - lineitem_generated_attr_definitionlineitem_generated_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_generated_attr_definitionlineitem_generated_attr_definitionFieldData TypeConstraints 
and Referencesidbigint PRIMARY KEY @@ -17649,7 +18861,7 @@ OFFSET 7; - lineitem_local_attr_definitionlineitem_local_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_local_attr_definitionlineitem_local_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -17700,7 +18912,7 @@ OFFSET 7; - lineitem_marc_attr_definitionlineitem_marc_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_marc_attr_definitionlineitem_marc_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -17758,7 +18970,7 @@ OFFSET 7; - lineitem_notelineitem_noteFieldData TypeConstraints and Referencesidserial + lineitem_notelineitem_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -17863,7 +19075,7 @@ OFFSET 7; - lineitem_provider_attr_definitionlineitem_provider_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_provider_attr_definitionlineitem_provider_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -17935,7 +19147,7 @@ OFFSET 7; - lineitem_usr_attr_definitionlineitem_usr_attr_definitionFieldData TypeConstraints and Referencesidbigint + lineitem_usr_attr_definitionlineitem_usr_attr_definitionFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -18000,7 +19212,7 @@ OFFSET 7; - ordered_funding_source_creditordered_funding_source_creditFieldData TypeConstraints and Referencessort_priorityinteger + ordered_funding_source_creditordered_funding_source_creditFieldData TypeConstraints and Referencessort_priorityinteger @@ -18035,7 +19247,7 @@ OFFSET 7; - picklistpicklistFieldData TypeConstraints and Referencesidserial + picklistpicklistFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -18150,12 +19362,12 @@ OFFSET 7; Tables referencing acq.lineitem via Foreign Key Constraints - •acq.lineitem + •acq.lineitem - po_itempo_itemFieldData TypeConstraints and Referencesidserial + po_itempo_itemFieldData TypeConstraints 
and Referencesidserial PRIMARY KEY @@ -18248,12 +19460,12 @@ OFFSET 7; Tables referencing acq.invoice_item via Foreign Key Constraints - •acq.invoice_item + •acq.invoice_item - po_notepo_noteFieldData TypeConstraints and Referencesidserial + po_notepo_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -18346,7 +19558,7 @@ OFFSET 7; - providerproviderFieldData TypeConstraints and Referencesidserial + providerproviderFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -18510,12 +19722,12 @@ OFFSET 7; Tables referencing acq.edi_account via Foreign Key Constraints - •acq.edi_account•acq.invoice•acq.lineitem•acq.lineitem_provider_attr_definition•acq.provider_address•acq.provider_contact•acq.provider_holding_subfield_map•acq.provider_note•acq.purchase_order + •acq.edi_account•acq.invoice•acq.lineitem•acq.lineitem_provider_attr_definition•acq.provider_address•acq.provider_contact•acq.provider_holding_subfield_map•acq.provider_note•acq.purchase_order - provider_addressprovider_addressFieldData TypeConstraints and Referencesidserial + provider_addressprovider_addressFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -18610,7 +19822,7 @@ OFFSET 7; - provider_contactprovider_contactFieldData TypeConstraints and Referencesidserial + provider_contactprovider_contactFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -18664,12 +19876,12 @@ OFFSET 7; Tables referencing acq.provider_contact_address via Foreign Key Constraints - •acq.provider_contact_address + •acq.provider_contact_address - provider_contact_addressprovider_contact_addressFieldData TypeConstraints and Referencesidserial + provider_contact_addressprovider_contact_addressFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -18764,7 +19976,7 @@ OFFSET 7; - provider_holding_subfield_mapprovider_holding_subfield_mapFieldData TypeConstraints and Referencesidserial + provider_holding_subfield_mapprovider_holding_subfield_mapFieldData TypeConstraints and 
Referencesidserial PRIMARY KEY @@ -18825,7 +20037,7 @@ OFFSET 7; - provider_noteprovider_noteFieldData TypeConstraints and Referencesidserial + provider_noteprovider_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -18909,7 +20121,7 @@ OFFSET 7; - purchase_orderpurchase_orderFieldData TypeConstraints and Referencesidserial + purchase_orderpurchase_orderFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19057,12 +20269,12 @@ OFFSET 7; Tables referencing acq.edi_message via Foreign Key Constraints - •acq.edi_message•acq.invoice_entry•acq.invoice_item•acq.lineitem•acq.po_item•acq.po_note + •acq.edi_message•acq.invoice_entry•acq.invoice_item•acq.lineitem•acq.po_item•acq.po_note - serial_claimserial_claimFieldData TypeConstraints and Referencesidserial + serial_claimserial_claimFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19108,12 +20320,12 @@ OFFSET 7; Tables referencing acq.serial_claim_event via Foreign Key Constraints - •acq.serial_claim_event + •acq.serial_claim_event - serial_claim_eventserial_claim_eventFieldData TypeConstraints and Referencesidbigserial + serial_claim_eventserial_claim_eventFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -19186,7 +20398,7 @@ OFFSET 7; - user_requestuser_requestFieldData TypeConstraints and Referencesidserial + user_requestuser_requestFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19383,7 +20595,7 @@ OFFSET 7; - user_request_typeuser_request_typeFieldData TypeConstraints and Referencesidserial + user_request_typeuser_request_typeFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -19415,12 +20627,12 @@ OFFSET 7; Tables referencing acq.user_request via Foreign Key Constraints - •acq.user_request + •acq.user_request - Schema actionSchema actionaged_circulationaged_circulationFieldData TypeConstraints and Referencesusr_post_codetext + Schema actionSchema actionaged_circulationaged_circulationFieldData TypeConstraints and 
Referencesusr_post_codetext @@ -19662,7 +20874,7 @@ OFFSET 7; - all_circulationall_circulationFieldData TypeConstraints and Referencesidbigint + all_circulationall_circulationFieldData TypeConstraints and Referencesidbigint @@ -19852,7 +21064,7 @@ OFFSET 7; - billable_circulationsbillable_circulationsFieldData TypeConstraints and Referencesidbigint + billable_circulationsbillable_circulationsFieldData TypeConstraints and Referencesidbigint @@ -20007,7 +21219,7 @@ OFFSET 7; - circulationcirculationFieldData TypeConstraints and Referencesidbigint + circulationcirculationFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -20244,19 +21456,19 @@ OFFSET 7; - Constraints on circulationcirculation_stop_fines_checkCHECK ((stop_fines = ANY (ARRAY['CHECKIN'::text, 'CLAIMSRETURNED'::text, 'LOST'::text, 'MAXFINES'::text, 'RENEW'::text, 'LONGOVERDUE'::text, 'CLAIMSNEVERCHECKEDOUT'::text]))) + Constraints on circulationcirculation_stop_fines_checkCHECK ((stop_fines = ANY (ARRAY['CHECKIN'::text, 'CLAIMSRETURNED'::text, 'LOST'::text, 'MAXFINES'::text, 'RENEW'::text, 'LONGOVERDUE'::text, 'CLAIMSNEVERCHECKEDOUT'::text]))) Tables referencing action.circulation via Foreign Key Constraints - •action.circulation + •action.circulation - fieldsetfieldsetFieldData TypeConstraints and Referencesidserial + fieldsetfieldsetFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20372,19 +21584,19 @@ OFFSET 7; - Constraints on fieldsetfieldset_one_or_the_otherCHECK ((((stored_query IS NOT NULL) AND (pkey_value IS NULL)) OR ((pkey_value IS NOT NULL) AND (stored_query IS NULL))))valid_statusCHECK ((status = ANY (ARRAY['PENDING'::text, 'APPLIED'::text, 'ERROR'::text]))) + Constraints on fieldsetfieldset_one_or_the_otherCHECK ((((stored_query IS NOT NULL) AND (pkey_value IS NULL)) OR ((pkey_value IS NOT NULL) AND (stored_query IS NULL))))valid_statusCHECK ((status = ANY (ARRAY['PENDING'::text, 'APPLIED'::text, 'ERROR'::text]))) Tables referencing action.fieldset_col_val via 
Foreign Key Constraints - •action.fieldset_col_val + •action.fieldset_col_val - fieldset_col_valfieldset_col_valFieldData TypeConstraints and Referencesidserial + fieldset_col_valfieldset_col_valFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20443,7 +21655,7 @@ OFFSET 7; - hold_copy_maphold_copy_mapFieldData TypeConstraints and Referencesidbigserial + hold_copy_maphold_copy_mapFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -20497,7 +21709,7 @@ OFFSET 7; - hold_notificationhold_notificationFieldData TypeConstraints and Referencesidserial + hold_notificationhold_notificationFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20561,7 +21773,7 @@ OFFSET 7; - hold_requesthold_requestFieldData TypeConstraints and Referencesidserial + hold_requesthold_requestFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20810,12 +22022,12 @@ OFFSET 7; Tables referencing action.hold_copy_map via Foreign Key Constraints - •action.hold_copy_map•action.hold_notification•action.hold_request_note•action.hold_transit_copy + •action.hold_copy_map•action.hold_notification•action.hold_request_note•action.hold_transit_copy - hold_request_cancel_causehold_request_cancel_causeFieldData TypeConstraints and Referencesidserial + hold_request_cancel_causehold_request_cancel_causeFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -20845,12 +22057,12 @@ OFFSET 7; Tables referencing action.hold_request via Foreign Key Constraints - •action.hold_request + •action.hold_request - hold_request_notehold_request_noteFieldData TypeConstraints and Referencesidbigserial + hold_request_notehold_request_noteFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -20922,7 +22134,7 @@ OFFSET 7; - hold_transit_copyhold_transit_copyFieldData TypeConstraints and Referencesidinteger + hold_transit_copyhold_transit_copyFieldData TypeConstraints and Referencesidinteger PRIMARY KEY @@ -21010,7 +22222,7 @@ OFFSET 7; - 
@@ (regenerated schema-reference tables, likely 1.6/pdf/temp.fo) @@
[Several thousand lines of regenerated XSL-FO output omitted. The hunks in this span
re-render the generated "Field / Data Type / Constraints and References" tables of the
Evergreen database schema appendix — covering the action, action_trigger, actor, asset,
auditor, authority, biblio, booking, and config schemas (e.g. action.in_house_use,
action.survey, actor.org_unit, actor.usr, asset.call_number, asset.copy,
booking.reservation, config.circ_matrix_matchpoint) together with their PRIMARY KEY,
CHECK, and foreign-key cross-reference lists. In every hunk the removed and added lines
carry identical visible text; only markup and page offsets shift, apart from
line-wrapping changes such as the asset.call_number_class DEFAULT
'050ab,055ab,060ab,...' value being re-broken across lines in the new output.]
•config.circ_matrix_matchpoint•config.hard_due_date_values - hard_due_date_valueshard_due_date_valuesFieldData TypeConstraints and Referencesidserial + hard_due_date_valueshard_due_date_valuesFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -30649,7 +31860,7 @@ OFFSET 7; - hold_matrix_matchpointhold_matrix_matchpointFieldData TypeConstraints and Referencesidserial + hold_matrix_matchpointhold_matrix_matchpointFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -30988,7 +32199,7 @@ OFFSET 7; - i18n_corei18n_coreFieldData TypeConstraints and Referencesidbigserial + i18n_corei18n_coreFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -31040,7 +32251,7 @@ OFFSET 7; - i18n_localei18n_localeFieldData TypeConstraints and Referencescodetext + i18n_localei18n_localeFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -31091,12 +32302,12 @@ OFFSET 7; Tables referencing config.i18n_core via Foreign Key Constraints - •config.i18n_core + •config.i18n_core - identification_typeidentification_typeFieldData TypeConstraints and Referencesidserial + identification_typeidentification_typeFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31128,12 +32339,12 @@ OFFSET 7; Tables referencing actor.usr via Foreign Key Constraints - •actor.usr + •actor.usr - idl_field_docidl_field_docFieldData TypeConstraints and Referencesidbigserial + idl_field_docidl_field_docFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -31185,7 +32396,7 @@ OFFSET 7; - index_normalizerindex_normalizerFieldData TypeConstraints and Referencesidserial + index_normalizerindex_normalizerFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31236,12 +32447,12 @@ OFFSET 7; Tables referencing config.metabib_field_index_norm_map via Foreign Key Constraints - •config.metabib_field_index_norm_map + •config.metabib_field_index_norm_map - internal_flaginternal_flagFieldData TypeConstraints and Referencesnametext + 
internal_flaginternal_flagFieldData TypeConstraints and Referencesnametext PRIMARY KEY @@ -31272,7 +32483,7 @@ OFFSET 7; - item_form_mapitem_form_mapFieldData TypeConstraints and Referencescodetext + item_form_mapitem_form_mapFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -31297,12 +32508,12 @@ OFFSET 7; Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints - •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint + •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint - item_type_mapitem_type_mapFieldData TypeConstraints and Referencescodetext + item_type_mapitem_type_mapFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -31327,12 +32538,12 @@ OFFSET 7; Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints - •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint + •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint - language_maplanguage_mapFieldData TypeConstraints and Referencescodetext + language_maplanguage_mapFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -31357,12 +32568,12 @@ OFFSET 7; Tables referencing config.i18n_locale via Foreign Key Constraints - •config.i18n_locale + •config.i18n_locale - lit_form_maplit_form_mapFieldData TypeConstraints and Referencescodetext + lit_form_maplit_form_mapFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -31391,7 +32602,7 @@ OFFSET 7; - marc21_ff_pos_mapmarc21_ff_pos_mapFieldData TypeConstraints and Referencesidserial + marc21_ff_pos_mapmarc21_ff_pos_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31452,7 +32663,7 @@ OFFSET 7; - marc21_physical_characteristic_subfield_mapmarc21_physical_characteristic_subfield_mapFieldData TypeConstraints and Referencesidserial + marc21_physical_characteristic_subfield_mapmarc21_physical_characteristic_subfield_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31512,12 +32723,12 @@ OFFSET 7; Tables referencing 
config.marc21_physical_characteristic_value_map via Foreign Key Constraints - •config.marc21_physical_characteristic_value_map + •config.marc21_physical_characteristic_value_map - marc21_physical_characteristic_type_mapmarc21_physical_characteristic_type_mapFieldData TypeConstraints and Referencesptype_keytext + marc21_physical_characteristic_type_mapmarc21_physical_characteristic_type_mapFieldData TypeConstraints and Referencesptype_keytext PRIMARY KEY @@ -31542,12 +32753,12 @@ OFFSET 7; Tables referencing config.marc21_physical_characteristic_subfield_map via Foreign Key Constraints - •config.marc21_physical_characteristic_subfield_map + •config.marc21_physical_characteristic_subfield_map - marc21_physical_characteristic_value_mapmarc21_physical_characteristic_value_mapFieldData TypeConstraints and Referencesidserial + marc21_physical_characteristic_value_mapmarc21_physical_characteristic_value_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31592,7 +32803,7 @@ OFFSET 7; - marc21_rec_type_mapmarc21_rec_type_mapFieldData TypeConstraints and Referencescodetext + marc21_rec_type_mapmarc21_rec_type_mapFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -31623,7 +32834,7 @@ OFFSET 7; - metabib_classmetabib_classFieldData TypeConstraints and Referencesnametext + metabib_classmetabib_classFieldData TypeConstraints and Referencesnametext PRIMARY KEY @@ -31655,12 +32866,12 @@ OFFSET 7; Tables referencing config.metabib_field via Foreign Key Constraints - •config.metabib_field•config.metabib_search_alias + •config.metabib_field•config.metabib_search_alias - metabib_fieldmetabib_fieldFieldData TypeConstraints and Referencesidserial + metabib_fieldmetabib_fieldFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31761,12 +32972,12 @@ OFFSET 7; Tables referencing config.metabib_field_index_norm_map via Foreign Key Constraints - 
•config.metabib_field_index_norm_map•config.metabib_search_alias•metabib.author_field_entry•metabib.identifier_field_entry•metabib.keyword_field_entry•metabib.series_field_entry•metabib.subject_field_entry•metabib.title_field_entry•search.relevance_adjustment + •config.metabib_field_index_norm_map•config.metabib_search_alias•metabib.author_field_entry•metabib.identifier_field_entry•metabib.keyword_field_entry•metabib.series_field_entry•metabib.subject_field_entry•metabib.title_field_entry•search.relevance_adjustment - metabib_field_index_norm_mapmetabib_field_index_norm_mapFieldData TypeConstraints and Referencesidserial + metabib_field_index_norm_mapmetabib_field_index_norm_mapFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31823,7 +33034,7 @@ OFFSET 7; - metabib_search_aliasmetabib_search_aliasFieldData TypeConstraints and Referencesaliastext + metabib_search_aliasmetabib_search_aliasFieldData TypeConstraints and Referencesaliastext PRIMARY KEY @@ -31866,7 +33077,7 @@ OFFSET 7; - net_access_levelnet_access_levelFieldData TypeConstraints and Referencesidserial + net_access_levelnet_access_levelFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31898,12 +33109,12 @@ OFFSET 7; Tables referencing actor.usr via Foreign Key Constraints - •actor.usr + •actor.usr - non_cataloged_typenon_cataloged_typeFieldData TypeConstraints and Referencesidserial + non_cataloged_typenon_cataloged_typeFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -31969,12 +33180,12 @@ OFFSET 7; Tables referencing action.non_cat_in_house_use via Foreign Key Constraints - •action.non_cat_in_house_use•action.non_cataloged_circulation + •action.non_cat_in_house_use•action.non_cataloged_circulation - org_unit_setting_typeorg_unit_setting_typeFieldData TypeConstraints and Referencesnametext + org_unit_setting_typeorg_unit_setting_typeFieldData TypeConstraints and Referencesnametext PRIMARY KEY @@ -32058,19 +33269,19 @@ OFFSET 7; - Constraints on 
org_unit_setting_typecoust_no_empty_linkCHECK ((((datatype = 'link'::text) AND (fm_class IS NOT NULL)) OR ((datatype <> 'link'::text) AND (fm_class IS NULL))))coust_valid_datatypeCHECK ((datatype = ANY (ARRAY['bool'::text, 'integer'::text, 'float'::text, 'currency'::text, 'interval'::text, 'date'::text, 'string'::text, 'object'::text, 'array'::text, 'link'::text]))) + Constraints on org_unit_setting_typecoust_no_empty_linkCHECK ((((datatype = 'link'::text) AND (fm_class IS NOT NULL)) OR ((datatype <> 'link'::text) AND (fm_class IS NULL))))coust_valid_datatypeCHECK ((datatype = ANY (ARRAY['bool'::text, 'integer'::text, 'float'::text, 'currency'::text, 'interval'::text, 'date'::text, 'string'::text, 'object'::text, 'array'::text, 'link'::text]))) Tables referencing actor.org_unit_setting via Foreign Key Constraints - •actor.org_unit_setting + •actor.org_unit_setting - remote_accountremote_accountFieldData TypeConstraints and Referencesidserial + remote_accountremote_accountFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32140,7 +33351,7 @@ OFFSET 7; - rule_age_hold_protectrule_age_hold_protectFieldData TypeConstraints and Referencesidserial + rule_age_hold_protectrule_age_hold_protectFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32183,19 +33394,19 @@ OFFSET 7; - Constraints on rule_age_hold_protectrule_age_hold_ protect_name_checkCHECK ((name ~ '^\\w+$'::text)) + Constraints on rule_age_hold_protectrule_age_hold_ protect_name_checkCHECK ((name ~ '^\\w+$'::text)) Tables referencing config.hold_matrix_matchpoint via Foreign Key Constraints - •config.hold_matrix_matchpoint + •config.hold_matrix_matchpoint - rule_circ_durationrule_circ_durationFieldData TypeConstraints and Referencesidserial + rule_circ_durationrule_circ_durationFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32252,19 +33463,19 @@ OFFSET 7; - Constraints on rule_circ_durationrule_circ_duration_name_checkCHECK ((name ~ '^\\w+$'::text)) + Constraints on 
rule_circ_durationrule_circ_duration_name_checkCHECK ((name ~ '^\\w+$'::text)) Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints - •config.circ_matrix_matchpoint + •config.circ_matrix_matchpoint - rule_max_finerule_max_fineFieldData TypeConstraints and Referencesidserial + rule_max_finerule_max_fineFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32309,19 +33520,19 @@ OFFSET 7; - Constraints on rule_max_finerule_max_fine_name_checkCHECK ((name ~ '^\\w+$'::text)) + Constraints on rule_max_finerule_max_fine_name_checkCHECK ((name ~ '^\\w+$'::text)) Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints - •config.circ_matrix_matchpoint + •config.circ_matrix_matchpoint - rule_recurring_finerule_recurring_fineFieldData TypeConstraints and Referencesidserial + rule_recurring_finerule_recurring_fineFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32380,19 +33591,19 @@ OFFSET 7; - Constraints on rule_recurring_finerule_recurring_fine_name_checkCHECK ((name ~ '^\\w+$'::text)) + Constraints on rule_recurring_finerule_recurring_fine_name_checkCHECK ((name ~ '^\\w+$'::text)) Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints - •config.circ_matrix_matchpoint + •config.circ_matrix_matchpoint - settings_groupsettings_groupFieldData TypeConstraints and Referencesnametext + settings_groupsettings_groupFieldData TypeConstraints and Referencesnametext PRIMARY KEY @@ -32424,12 +33635,12 @@ OFFSET 7; Tables referencing config.org_unit_setting_type via Foreign Key Constraints - •config.org_unit_setting_type•config.usr_setting_type + •config.org_unit_setting_type•config.usr_setting_type - standingstandingFieldData TypeConstraints and Referencesidserial + standingstandingFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32461,12 +33672,12 @@ OFFSET 7; Tables referencing actor.usr via Foreign Key Constraints - •actor.usr + •actor.usr - 
standing_penaltystanding_penaltyFieldData TypeConstraints and Referencesidserial + standing_penaltystanding_penaltyFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32515,12 +33726,12 @@ OFFSET 7; Tables referencing actor.usr_standing_penalty via Foreign Key Constraints - •actor.usr_standing_penalty•permission.grp_penalty_threshold + •actor.usr_standing_penalty•permission.grp_penalty_threshold - upgrade_logupgrade_logFieldData TypeConstraints and Referencesversiontext + upgrade_logupgrade_logFieldData TypeConstraints and Referencesversiontext PRIMARY KEY @@ -32546,7 +33757,7 @@ OFFSET 7; - usr_setting_typeusr_setting_typeFieldData TypeConstraints and Referencesnametext + usr_setting_typeusr_setting_typeFieldData TypeConstraints and Referencesnametext PRIMARY KEY @@ -32615,19 +33826,19 @@ OFFSET 7; - Constraints on usr_setting_typecoust_no_empty_linkCHECK ((((datatype = 'link'::text) AND (fm_class IS NOT NULL)) OR ((datatype <> 'link'::text) AND (fm_class IS NULL))))coust_valid_datatypeCHECK ((datatype = ANY (ARRAY['bool'::text, 'integer'::text, 'float'::text, 'currency'::text, 'interval'::text, 'date'::text, 'string'::text, 'object'::text, 'array'::text, 'link'::text]))) + Constraints on usr_setting_typecoust_no_empty_linkCHECK ((((datatype = 'link'::text) AND (fm_class IS NOT NULL)) OR ((datatype <> 'link'::text) AND (fm_class IS NULL))))coust_valid_datatypeCHECK ((datatype = ANY (ARRAY['bool'::text, 'integer'::text, 'float'::text, 'currency'::text, 'interval'::text, 'date'::text, 'string'::text, 'object'::text, 'array'::text, 'link'::text]))) Tables referencing action_trigger.event_definition via Foreign Key Constraints - •action_trigger.event_definition•actor.usr_setting + •action_trigger.event_definition•actor.usr_setting - videorecording_format_mapvideorecording_format_mapFieldData TypeConstraints and Referencescodetext + videorecording_format_mapvideorecording_format_mapFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -32652,12 
+33863,12 @@ OFFSET 7; Tables referencing config.circ_matrix_matchpoint via Foreign Key Constraints - •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint + •config.circ_matrix_matchpoint•config.hold_matrix_matchpoint - xml_transformxml_transformFieldData TypeConstraints and Referencesnametext + xml_transformxml_transformFieldData TypeConstraints and Referencesnametext PRIMARY KEY @@ -32696,12 +33907,12 @@ OFFSET 7; Tables referencing config.metabib_field via Foreign Key Constraints - •config.metabib_field + •config.metabib_field - z3950_attrz3950_attrFieldData TypeConstraints and Referencesidserial + z3950_attrz3950_attrFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32791,7 +34002,7 @@ OFFSET 7; - z3950_sourcez3950_sourceFieldData TypeConstraints and Referencesnametext + z3950_sourcez3950_sourceFieldData TypeConstraints and Referencesnametext PRIMARY KEY @@ -32871,12 +34082,12 @@ OFFSET 7; Tables referencing config.z3950_attr via Foreign Key Constraints - •config.z3950_attr + •config.z3950_attr - Schema containerSchema containerbiblio_record_entry_bucketbiblio_record_entry_bucketFieldData TypeConstraints and Referencesidserial + Schema containerSchema containerbiblio_record_entry_bucketbiblio_record_entry_bucketFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -32973,12 +34184,12 @@ OFFSET 7; Tables referencing container.biblio_record_entry_bucket_item via Foreign Key Constraints - •container.biblio_record_entry_bucket_item•container.biblio_record_entry_bucket_note + •container.biblio_record_entry_bucket_item•container.biblio_record_entry_bucket_note - biblio_record_entry_bucket_itembiblio_record_entry_bucket_itemFieldData TypeConstraints and Referencesidserial + biblio_record_entry_bucket_itembiblio_record_entry_bucket_itemFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33038,12 +34249,12 @@ OFFSET 7; Tables referencing container.biblio_record_entry_bucket_item_note via Foreign Key Constraints - 
•container.biblio_record_entry_bucket_item_note + •container.biblio_record_entry_bucket_item_note - biblio_record_entry_bucket_item_notebiblio_record_entry_bucket_item_noteFieldData TypeConstraints and Referencesidserial + biblio_record_entry_bucket_item_notebiblio_record_entry_bucket_item_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33081,7 +34292,7 @@ OFFSET 7; - biblio_record_entry_bucket_notebiblio_record_entry_bucket_noteFieldData TypeConstraints and Referencesidserial + biblio_record_entry_bucket_notebiblio_record_entry_bucket_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33119,7 +34330,7 @@ OFFSET 7; - biblio_record_entry_bucket_typebiblio_record_entry_bucket_typeFieldData TypeConstraints and Referencescodetext + biblio_record_entry_bucket_typebiblio_record_entry_bucket_typeFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -33151,12 +34362,12 @@ OFFSET 7; Tables referencing container.biblio_record_entry_bucket via Foreign Key Constraints - •container.biblio_record_entry_bucket + •container.biblio_record_entry_bucket - call_number_bucketcall_number_bucketFieldData TypeConstraints and Referencesidserial + call_number_bucketcall_number_bucketFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33253,12 +34464,12 @@ OFFSET 7; Tables referencing container.call_number_bucket_item via Foreign Key Constraints - •container.call_number_bucket_item•container.call_number_bucket_note + •container.call_number_bucket_item•container.call_number_bucket_note - call_number_bucket_itemcall_number_bucket_itemFieldData TypeConstraints and Referencesidserial + call_number_bucket_itemcall_number_bucket_itemFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33318,12 +34529,12 @@ OFFSET 7; Tables referencing container.call_number_bucket_item_note via Foreign Key Constraints - •container.call_number_bucket_item_note + •container.call_number_bucket_item_note - 
call_number_bucket_item_notecall_number_bucket_item_noteFieldData TypeConstraints and Referencesidserial + call_number_bucket_item_notecall_number_bucket_item_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33361,7 +34572,7 @@ OFFSET 7; - call_number_bucket_notecall_number_bucket_noteFieldData TypeConstraints and Referencesidserial + call_number_bucket_notecall_number_bucket_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33399,7 +34610,7 @@ OFFSET 7; - call_number_bucket_typecall_number_bucket_typeFieldData TypeConstraints and Referencescodetext + call_number_bucket_typecall_number_bucket_typeFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -33431,12 +34642,12 @@ OFFSET 7; Tables referencing container.call_number_bucket via Foreign Key Constraints - •container.call_number_bucket + •container.call_number_bucket - copy_bucketcopy_bucketFieldData TypeConstraints and Referencesidserial + copy_bucketcopy_bucketFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33533,12 +34744,12 @@ OFFSET 7; Tables referencing container.copy_bucket_item via Foreign Key Constraints - •container.copy_bucket_item•container.copy_bucket_note + •container.copy_bucket_item•container.copy_bucket_note - copy_bucket_itemcopy_bucket_itemFieldData TypeConstraints and Referencesidserial + copy_bucket_itemcopy_bucket_itemFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33598,12 +34809,12 @@ OFFSET 7; Tables referencing container.copy_bucket_item_note via Foreign Key Constraints - •container.copy_bucket_item_note + •container.copy_bucket_item_note - copy_bucket_item_notecopy_bucket_item_noteFieldData TypeConstraints and Referencesidserial + copy_bucket_item_notecopy_bucket_item_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33641,7 +34852,7 @@ OFFSET 7; - copy_bucket_notecopy_bucket_noteFieldData TypeConstraints and Referencesidserial + copy_bucket_notecopy_bucket_noteFieldData TypeConstraints 
and Referencesidserial PRIMARY KEY @@ -33679,7 +34890,7 @@ OFFSET 7; - copy_bucket_typecopy_bucket_typeFieldData TypeConstraints and Referencescodetext + copy_bucket_typecopy_bucket_typeFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -33711,12 +34922,12 @@ OFFSET 7; Tables referencing container.copy_bucket via Foreign Key Constraints - •container.copy_bucket + •container.copy_bucket - user_bucketuser_bucketFieldData TypeConstraints and Referencesidserial + user_bucketuser_bucketFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33813,12 +35024,12 @@ OFFSET 7; Tables referencing container.user_bucket_item via Foreign Key Constraints - •container.user_bucket_item•container.user_bucket_note + •container.user_bucket_item•container.user_bucket_note - user_bucket_itemuser_bucket_itemFieldData TypeConstraints and Referencesidserial + user_bucket_itemuser_bucket_itemFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33878,12 +35089,12 @@ OFFSET 7; Tables referencing container.user_bucket_item_note via Foreign Key Constraints - •container.user_bucket_item_note + •container.user_bucket_item_note - user_bucket_item_noteuser_bucket_item_noteFieldData TypeConstraints and Referencesidserial + user_bucket_item_noteuser_bucket_item_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33921,7 +35132,7 @@ OFFSET 7; - user_bucket_noteuser_bucket_noteFieldData TypeConstraints and Referencesidserial + user_bucket_noteuser_bucket_noteFieldData TypeConstraints and Referencesidserial PRIMARY KEY @@ -33959,7 +35170,7 @@ OFFSET 7; - user_bucket_typeuser_bucket_typeFieldData TypeConstraints and Referencescodetext + user_bucket_typeuser_bucket_typeFieldData TypeConstraints and Referencescodetext PRIMARY KEY @@ -33991,12 +35202,12 @@ OFFSET 7; Tables referencing container.user_bucket via Foreign Key Constraints - •container.user_bucket + •container.user_bucket - Schema extend_reporterSchema 
extend_reporterfull_circ_countfull_circ_countFieldData TypeConstraints and Referencesidbigint + Schema extend_reporterSchema extend_reporterfull_circ_countfull_circ_countFieldData TypeConstraints and Referencesidbigint @@ -34011,7 +35222,7 @@ OFFSET 7; - global_bibs_by_holding_updateglobal_bibs_by_holding_updateFieldData TypeConstraints and Referencesidbigint + global_bibs_by_holding_updateglobal_bibs_by_holding_updateFieldData TypeConstraints and Referencesidbigint @@ -34031,7 +35242,7 @@ OFFSET 7; - legacy_circ_countlegacy_circ_countFieldData TypeConstraints and Referencesidbigint + legacy_circ_countlegacy_circ_countFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -34055,7 +35266,7 @@ OFFSET 7; - Schema metabibSchema metabibauthor_field_entryauthor_field_entryFieldData TypeConstraints and Referencesidbigserial + Schema metabibSchema metabibauthor_field_entryauthor_field_entryFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34114,7 +35325,7 @@ OFFSET 7; - facet_entryfacet_entryFieldData TypeConstraints and Referencesidbigserial + facet_entryfacet_entryFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34152,7 +35363,7 @@ OFFSET 7; - full_recfull_recFieldData TypeConstraints and Referencesidbigint + full_recfull_recFieldData TypeConstraints and Referencesidbigint @@ -34197,7 +35408,7 @@ OFFSET 7; - identifier_field_entryidentifier_field_entryFieldData TypeConstraints and Referencesidbigserial + identifier_field_entryidentifier_field_entryFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34256,7 +35467,7 @@ OFFSET 7; - keyword_field_entrykeyword_field_entryFieldData TypeConstraints and Referencesidbigserial + keyword_field_entrykeyword_field_entryFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34315,7 +35526,7 @@ OFFSET 7; - metarecordmetarecordFieldData TypeConstraints and Referencesidbigserial + metarecordmetarecordFieldData TypeConstraints and Referencesidbigserial PRIMARY 
KEY @@ -34357,12 +35568,12 @@ OFFSET 7; Tables referencing metabib.metarecord_source_map via Foreign Key Constraints - •metabib.metarecord_source_map + •metabib.metarecord_source_map - metarecord_source_mapmetarecord_source_mapFieldData TypeConstraints and Referencesidbigserial + metarecord_source_mapmetarecord_source_mapFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34407,7 +35618,7 @@ OFFSET 7; - real_full_recreal_full_recFieldData TypeConstraints and Referencesidbigint + real_full_recreal_full_recFieldData TypeConstraints and Referencesidbigint PRIMARY KEY @@ -34476,7 +35687,7 @@ OFFSET 7; - rec_descriptorrec_descriptorFieldData TypeConstraints and Referencesidbigserial + rec_descriptorrec_descriptorFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34580,7 +35791,7 @@ OFFSET 7; - series_field_entryseries_field_entryFieldData TypeConstraints and Referencesidbigserial + series_field_entryseries_field_entryFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34639,7 +35850,7 @@ OFFSET 7; - subject_field_entrysubject_field_entryFieldData TypeConstraints and Referencesidbigserial + subject_field_entrysubject_field_entryFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34698,7 +35909,7 @@ OFFSET 7; - title_field_entrytitle_field_entryFieldData TypeConstraints and Referencesidbigserial + title_field_entrytitle_field_entryFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34757,7 +35968,7 @@ OFFSET 7; - Schema moneySchema moneybillable_xactbillable_xactFieldData TypeConstraints and Referencesidbigserial + Schema moneySchema moneybillable_xactbillable_xactFieldData TypeConstraints and Referencesidbigserial PRIMARY KEY @@ -34807,7 +36018,7 @@ OFFSET 7; - billable_xact_summarybillable_xact_summaryFieldData TypeConstraints and Referencesidbigint + billable_xact_summarybillable_xact_summaryFieldData TypeConstraints and Referencesidbigint @@ -34882,7 +36093,7 @@ OFFSET 7; - 
[Regenerated output elided: 1.6/pdf/temp.fo is auto-generated XSL-FO. These hunks rebuild the schema reference tables (field names, data types, CHECK constraints, and foreign-key references) for the billing and payment views and for the offline, permission, public, query, reporter, search, serial, staging, stats, and vandelay schemas.]

 The Evergreen DIG was established in May 2009 at the first Evergreen International Conference, where members of the Evergreen community committed to developing single-source, standards-based documentation for Evergreen. Since then, the DIG has been actively working toward that goal.
Evergreen DIG ParticipantsNameOrganizationJeremy BuhlerSITKAPaula Burton King County Library SystemMatt CarlsonKing County Library System Sarah ChildsHussey-Mayfield Memorial Public LibraryAnton ChuppinNova Scotia Provincial LibraryMarlene ColemanBeaufort Country LibraryKaren CollierKent County Public Library Shannon DineenSITKAGeorge DuimovichNRCan LibraryJennifer DurhamStatesboro Regional Library SystemJennifer FinneyFlorence County LibraryLynn FloydAnderson County LibrarySally FortinEquinox SoftwareTina JiSITKACatherine LemmerIndiana State LibraryRoma MatottPioneer Library SystemAndrea NeimanKent County Public LibraryKevin PischkeWilliam Jessup University Tara RobertsonN/ARod SchiffmanAlpha-G ConsultingSteve SheppardOpenBen ShumBibliomationRobert SoulliereMohawk CollegeLindsay StrattonPioneer Library SystemJenny TurnerPALSRepke de VriesInternational Institute for Social HistoryD. Ceabron WilliamsFlint River Regional Library System Tigran ZargaryanFundamental Scientific Library of the National Academy of Sciences - Table A.2. Past DIG ParticipantsNameOrganizationPaul Weiss Consultant/Sage Library SystemKaren SchneiderEquinox Software + Table A.1. 
Evergreen DIG ParticipantsNameOrganizationJeremy BuhlerSITKAPaula Burton King County Library SystemMatt CarlsonKing County Library System Sarah ChildsHussey-Mayfield Memorial Public LibraryAnton ChuppinNova Scotia Provincial LibraryMarlene ColemanBeaufort County LibraryKaren CollierKent County Public Library Shannon DineenSITKAGeorge DuimovichNRCan LibraryJennifer DurhamStatesboro Regional Library SystemJennifer FinneyFlorence County LibraryLynn FloydAnderson County LibrarySally FortinEquinox SoftwareTina JiSITKACatherine LemmerIndiana State LibraryRoma MatottPioneer Library SystemAndrea NeimanKent County Public LibraryKevin PischkeWilliam Jessup University Tara RobertsonN/ARod SchiffmanAlpha-G ConsultingSteve SheppardOpenBen ShumBibliomationRobert SoulliereMohawk CollegeLindsay StrattonPioneer Library SystemJenny TurnerPALSRepke de VriesInternational Institute for Social HistoryD. Ceabron WilliamsFlint River Regional Library System Tigran ZargaryanFundamental Scientific Library of the National Academy of Sciences
 + Table A.2. 
Past DIG ParticipantsNameOrganizationPaul Weiss Consultant/Sage Library SystemKaren SchneiderEquinox Software
 Special thanks go to:
 - •Jeremy Buhler and SITKA for providing DocBook style sheets, style guides and large portions of content for this documentation.•Dan Scott from Laurentian University for providing large portions of content and many helpful tips.•Mike Rylander, Grace Dunbar, Galen Charleton, Jason Etheridge, Bill Erickson, Joe Atzberger, Scott McKellar and all the other folks at Equinox Software for
 + •Jeremy Buhler and SITKA for providing DocBook style sheets, style guides and large portions of content for this documentation.•Dan Scott from Laurentian University for providing large portions of content and many helpful tips.•Mike Rylander, Grace Dunbar, Galen Charlton, Jason Etheridge, Bill Erickson, Joe Atzberger, Scott McKellar and all the other folks at Equinox Software for
 contributing large portions of content on the wiki. There have been many others who have contributed their time to the Book of Evergreen project. Without their contributions to
 this community-driven project, this documentation would not be possible.
 How to ParticipateHow to Participate
 - 
 + 
 Contributing to documentation is an excellent way to support Evergreen, even if you are new to documentation. In fact, beginners often have a distinct advantage over the experts, more easily spotting the places where documentation is lacking or where it is unclear. We welcome your contribution with planning, writing, editing, testing, translating to DocBook, and other tasks. Whatever your background or experience, we are keen to have your help!
 What you can do:
 •Join the Evergreen documentation listserv: list.georgialibraries.org/mailman/listinfo/open-ils-documentation . This is the primary way we communicate with each other. 
Please send an email introducing yourself to the list.•Add yourself to the participant list 
 if you have an Evergreen DokuWiki account, or send a request to <docs@evergreen-ils.org>.•Check out the documentation outline to see which areas need work, 
 and let the DIG list know in which areas you would like to work.•Review the documentation and report any errors or make suggestions using Launchpad. 
 Volunteer RolesVolunteer Roles
 
 We are now looking for people to help produce the documentation. If you are interested in participating, email the DIG facilitators at <docs@evergreen-ils.org> or post on the documentation mailing list.
 We're looking for volunteers to work on the following:
 - •Writing – Produce the documentation (“from scratch,” and/or revised from existing materials). We're open to receiving content in any formats, such as
 - Word or Open Office, but of course, would be most delighted with DocBook xml format.•Testing – Compare the documents with the functions they describe and ensuring that the procedures accomplish the desired results. 
Even if you are not
 - officially in the DIG, we would appreciate any suggestions you may have for Evergreen documentation.•XML conversion – Convert existing documentation to DocBook format.•Editorial review – Ensuring the documentation is clear and follows Evergreen DIG style
 - guide conventions.•Style and Design – Edit the DocBook style sheets or post style tips and suggestions on the DIG list.
 + •Writing – Produce the documentation (“from scratch,” and/or revised from existing materials). We're open to receiving content in any format, such as
 + Word or Open Office, but of course, would be most delighted with DocBook XML format.•Testing – Compare the documents with the functions they describe and ensure that the procedures accomplish the desired results. Even if you are not
 + officially in the DIG, we would appreciate any suggestions you may have for Evergreen documentation.•XML conversion – Convert existing documentation to DocBook format.•Editorial review – Ensuring the documentation is clear and follows Evergreen DIG style
 + guide conventions.•Style and Design – Edit the DocBook style sheets or post style tips and suggestions on the DIG list.
 @@ -44180,11 +45383,11 @@
 
 
 
 This documentation is just one way to learn about Evergreen and find solutions to Evergreen challenges. Below is a list of many other resources to help you find answers to almost any question you might have.
 - Evergreen Wiki - Loads of information and the main portal to the Evergreen community.
 + Evergreen Wiki - Loads of information and the main portal to the Evergreen community.
 Evergreen mailing lists - These are excellent for initiating questions. There are several lists including:
 - •General list - General inquiries regarding Evergreen. If unsure about
 - which list to use, this is a good stating point.•Developer list - Technical questions should be asked here including
 - questions regarding installation. 
As well, patches can be submitted using this list and developer communication also takes place here. •DIG list - This list is used for questions and
 + •General list - General inquiries regarding Evergreen. If unsure about
 + which list to use, this is a good starting point.•Developer list - Technical questions should be asked here including
 + questions regarding installation. As well, patches can be submitted using this list and developer communication also takes place here. •DIG list - This list is used for questions and
 feedback regarding this documentation, the Documentation Interest Group and other documentation-related ideas and issues.
 Evergreen Blog - Great for getting general news and updates about Evergreen. It is also an interesting historical read
 @@ -44192,10 +45395,10 @@
 
 Evergreen IRC channel - Allows live chat. Many developers hang out here and will try to field technical questions. This is often the quickest way to get a solution to a specific problem. Just remember that while the channel is open 24/7, there are times when no one is available in the channel. The most active times for the IRC channel seem to be weekday afternoons (Eastern Standard Time). There is also an archive of logs from the chat sessions available on the
 - IRC page.
 + IRC page.
 Evergreen-related community blogs - Evergreen-related blog entries from the community.
 Resource Sharing Cooperative of Evergreen Libraries (RSCEL) - Provides some technical documents and a means for the
 - Evergreen community to collaborate with other libraries.
 + Evergreen community to collaborate with other libraries.
 List of current Evergreen libraries - Locate other libraries who are using Evergreen.
 
 
 
 @@ -44206,36 +45409,36 @@
 
 GlossaryGlossary
 In this section we expand acronyms, define terms, and generally try to explain concepts used by Evergreen software. 
- AApacheOpen-source web server software used to serve both static
 + AApacheOpen-source web server software used to serve both static
 content and dynamic web pages in a secure and reliable way. More information is available at
 - http://apache.org.BBookbagsBookbags are lists of items that can be used for any number of
 + http://apache.org.BBookbagsBookbags are lists of items that can be used for any number of
 purposes. For example, to keep track of what books you have read, books you would like to read, to maintain a class reading list, to maintain a reading list for a book club, to keep a list of books you would like for your birthday. There are an unlimited number of
 - uses.CCentOSA popular open-source operating system based on Red Hat
 + uses.CCentOSA popular open-source operating system based on Red Hat
 Enterprise Linux (also known as "RHEL") and often used in web servers. More information is available at
 - http://www.centos.org.Closure CompilerA suite of open-source tools used to build web applications with
 + http://www.centos.org.Closure CompilerA suite of open-source tools used to build web applications with
 Javascript; originally developed by Google. It is used to create special builds of the Evergreen Staff Client. More information is available at
 - http://code.google.com/closure/compiler/.CPANAn open-source archive of software modules written in
 + http://code.google.com/closure/compiler/.CPANAn open-source archive of software modules written in
 Perl. More information is available at
 - http://www.cpan.org.See Also Perl.DDebianOne of the most popular open-source operating system using the
 + http://www.cpan.org.See Also Perl.DDebianOne of the most popular open-source operating systems using the
 Linux kernel that provides over 25,000 useful precompiled software packages. Also known as Debian GNU/Linux. 
More information is available at - http://www.debian.org.Domain nameA unique set of case-insensitive, alphanumeric strings + http://www.debian.org.Domain nameA unique set of case-insensitive, alphanumeric strings separated by periods that are used to name organizations, web sites and addresses on the Internet (e.g.: www.esilibrary.com). Domain names can be reserved via third-party registration services, and can be associated with a - unique IP address or suite of IP addresses.See Also IP Address.EejabberdAn open-source Jabber/XMPP instant messaging server that is + unique IP address or suite of IP addresses.See Also IP Address.EejabberdAn open-source Jabber/XMPP instant messaging server that is used for client-server message passing within Evergreen. It runs under popular operating systems (e.g., Mac OSX, @@ -44245,108 +45448,108 @@ OFFSET 7; services for a Jabber domain across an extendable cluster of cheap, easily-replaced machine nodes. More information is available at - http://www.ejabberd.im.See Also Jabber, XMPP.FGGentooA popular open-source operating system built on the + http://www.ejabberd.im.See Also Jabber, XMPP.FGGentooA popular open-source operating system built on the Linux kernel. More information is available at - http://www.gentoo.org.HIIP Address(Internet Protocol address) A numerical label consisting of + http://www.gentoo.org.HIIP Address(Internet Protocol address) A numerical label consisting of four numbers separated by periods (e.g., "192.168.1.15") assigned to individual members of networked computing systems. It uniquely identifies each system on the network and allows controlled communication between such systems. 
The numerical label scheme must adhere to a strictly defined naming convention that is currently defined and overseen by the Internet Corporation for Assigned Names - and Numbers ("ICANN").Item/copy BucketsVirtual “containers” to use in batch processing + and Numbers ("ICANN").Item/copy BucketsVirtual “containers” to use in batch processing of item or copy records. They can be used to perform various - cataloging/holdings maintenance tasks in batch.JJabberThe communications protocol used for client-server message + cataloging/holdings maintenance tasks in batch.JJabberThe communications protocol used for client-server message passing within Evergreen. Now known as XMPP (eXtensible Messaging and - Presence Protocol), it was originally named "Jabber".See Also XMPP, ejabberd.KLMMARCThe MARC formats are standards for the representation and + Presence Protocol), it was originally named "Jabber".See Also XMPP, ejabberd.KLMMARCThe MARC formats are standards for the representation and communication of bibliographic and related information in - machine-readable form.MARCXMLFramework for working with MARC data in a XML environment.McCoyAn open-source application that allows add-on authors to + machine-readable form.MARCXMLFramework for working with MARC data in a XML environment.McCoyAn open-source application that allows add-on authors to provide secure updates to their users. It is used to create special builds of the Evergreen Staff Client. More information is available at - http://developer.mozilla.org/en/McCoy.memcachedA general-purpose distributed memory caching system, usually + http://developer.mozilla.org/en/McCoy.memcachedA general-purpose distributed memory caching system, usually with a client-server architecture spread over multiple computing systems. 
It reduces the number of times a data source (e.g., a database) must be directly accessed by temporarily caching data in memory, therefore dramatically speeding up database-driven web - applications.NNetwork addressAlso known as an IP address (Internet Protocol address).See Also IP Address.nsisAn open-source software tool used to create Windows + applications.NNetwork addressAlso known as an IP address (Internet Protocol address).See Also IP Address.nsisAn open-source software tool used to create Windows installers. It is used to create special builds of the Evergreen Staff Client. More information is available at - http://nsis.sourceforge.net.OOPACThe "Online Public Access Catalog"; an online database of a + http://nsis.sourceforge.net.OOPACThe "Online Public Access Catalog"; an online database of a library's holdings; used to find resources in their collections; possibly searchable by keyword, title, author, subject or call - number.OpenSRFThe "Open Scalable Request Framework" (pronounced 'open surf') + number.OpenSRFThe "Open Scalable Request Framework" (pronounced 'open surf') is a stateful, decentralized service architecture that allows developers to create applications for Evergreen with a minimum of - knowledge of its structure.PPerlThe high-level scripting language in which most of the business logic of Evergreen is written.See Also CPAN.PKIPublic Key Infrastructure (PKI) describes the schemes needed - to generate and maintain digital SSL Certificates.See Also ???TITLE???.PostgreSQLA popular open-source object-relational database management - system that underpins Evergreen software.PuTTYA popular open-source telnet/ssh client for the Windows and + knowledge of its structure.PPerlThe high-level scripting language in which most of the business logic of Evergreen is written.See Also CPAN.PKIPublic Key Infrastructure (PKI) describes the schemes needed + to generate and maintain digital SSL Certificates.See Also SSL Certificate.PostgreSQLA popular open-source 
object-relational database management
 + system that underpins Evergreen software.PuTTYA popular open-source telnet/ssh client for the Windows and
 Unix platforms. As used in Evergreen, a handy utility used to create an
 SSH Tunnel for connecting Staff Clients to Evergreen servers over insecure networks. More information is available at
 - http://www.chiark.greenend.org.uk/~sgtatham/putty/.See Also SSH tunnel.QRResource HackerAn open-source utility used to view, modify, rename, add,
 + http://www.chiark.greenend.org.uk/~sgtatham/putty/.See Also SSH tunnel.QRResource HackerAn open-source utility used to view, modify, rename, add,
 delete and extract resources in 32-bit Windows executables. It is used to create special builds of
 the Evergreen Staff Client. More information is available at
 - Resource HackerRHELAlso known as "Red Hat Enterprises
 + Resource HackerRHELAlso known as "Red Hat Enterprise
 Linux". An official Linux distribution that is targeted at the commercial market. It is the basis of other popular Linux distributions, 
 e.g., CentOS. 
 More information is available at
 - http://www.redhat.com.SSIPSIP (Standard Interchange Protocol) is a communications
 + http://www.redhat.com.SSIPSIP (Standard Interchange Protocol) is a communications
 protocol used within Evergreen for transferring data to and from other third-party devices, such as RFID and barcode scanners that handle patron and library material information. Version 2.0 (also known as "SIP2") is the current standard. It was originally
 - developed by the 3M Corporation.srfshA command language interpreter (shell) that executes commands
 + developed by the 3M Corporation.srfshA command language interpreter (shell) that executes commands
 read from the standard input. It is used to test the Open Scalable
 - Request Framework (OpenSRF).SRUSRU (Search & Retrieve URL Service) is a search protocol
 + Request Framework (OpenSRF).SRUSRU (Search & Retrieve URL Service) is a search protocol
 used in web search and retrieval. 
It expresses queries in Contextual Query Language (CQL) and transmits them as a URL, returning XML data - as if it were a web page.See Also SRW.SRWSRW (Search & Retrieve Web Service), also known as "SRU + as if it were a web page.See Also SRW.SRWSRW (Search & Retrieve Web Service), also known as "SRU via HTTP SOAP", is a search protocol used in web search and retrieval. It uses a SOAP interface and expresses both the query and - result as XML data streams.See Also SRU.SSHAn encrypted network protocol using public-key cryptography + result as XML data streams.See Also SRU.SSHAn encrypted network protocol using public-key cryptography that allows secure communications between systems on an insecure network. Typically used to access shell accounts but also supports tunneling, forwarding TCP ports and X11 connections, and - transferring files.SSH proxyAs used in Evergreen, a method of allowing one or more Staff + transferring files.SSH proxyAs used in Evergreen, a method of allowing one or more Staff Clients to communicate with one or more Evergreen servers over an insecure network by sending data through a secure SSH tunnel. It also buffers and caches all data travelling to and from Staff - Clients to speed up access to resources on Evergreen servers.See Also SSH, tunneling, SSH tunnel.SSH tunnelAn encrypted data channel existing over an SSH network + Clients to speed up access to resources on Evergreen servers.See Also SSH, tunneling, SSH tunnel.SSH tunnelAn encrypted data channel existing over an SSH network connection. Used to securely transfer unencrypted data streams over - insecure networks.See Also SSH, tunneling.SSL CertificateAs used in Evergreen, it is a method of ensuring that Staff + insecure networks.See Also SSH, tunneling.SSL CertificateAs used in Evergreen, it is a method of ensuring that Staff Clients are able to connect to legitimate Evergreen servers.In general, it is a special electronic document used to guarantee authenticity of a digital message. 
Also known as a "public
 key", or "identity" or "digital" certificate. It combines an identity (of a person or an organization) and a unique public key to form a so-called digital signature, and is used to verify that the public key does, in fact, belong with that particular
 - identity.See Also PKI.TtunnelingAs used in Evergreen, it is a method of allowing Staff Clients
 + identity.See Also PKI.TtunnelingAs used in Evergreen, it is a method of allowing Staff Clients
 to securely connect to legitimate Evergreen servers.In general, it is a method of encapsulating data provided in one network protocol (the "delivery" protocol) within data in a different network protocol (the "tunneling" protocol). Used to provide a secure path and secure communications through an insecure or incompatible network. Can be used to bypass firewalls by communicating via a protocol the firewall normally blocks, but
 - "wrapped" inside a protocol that the firewall does not block.See Also SSH tunnel.UUbuntuA popular open-source operating system using the
 + "wrapped" inside a protocol that the firewall does not block.See Also SSH tunnel.UUbuntuA popular open-source operating system using the
 Linux kernel that was originally based on the Debian GNU/Linux operating system. More information is available at
 - http://www.ubuntu.com.See Also Debian.VVirtual PCA popular commercial package of virtualization software that
 + http://www.ubuntu.com.See Also Debian.VVirtual PCA popular commercial package of virtualization software that
 emulates the x86 microprocessor architecture. It is installed on a
 Windows "host" operating system and allows other
 "guest" (typically including Linux and Windows)
 operating systems
 - to be loaded and executed.See Also Virtualization.VirtualBoxA popular commercial package of virtualization software that
 + to be loaded and executed.See Also Virtualization.VirtualBoxA popular commercial package of virtualization software that
 emulates the x86 microprocessor architecture. 
It can be installed on Linux, Mac OS X, @@ -44355,7 +45558,7 @@ OFFSET 7; systems and allows other "guest" (typically including Linux and Windows) operating systems - to be loaded and executed.See Also Virtualization.VirtualizationA method of executing software in a special environment that + to be loaded and executed.See Also Virtualization.VirtualizationA method of executing software in a special environment that is partitioned or separated from the real underlying hardware and software resources. In typical usage, it allows a host operating system to encapsulate or emulate @@ -44364,7 +45567,7 @@ OFFSET 7; hosting environment. As used in Evergreen, it enables a copy of the Linux operating system running Evergreen software to execute within a - Windows environment.See Also VirtualBox, Virtual PC, VMware.VMwareA popular commercial package of virtualization software that + Windows environment.See Also VirtualBox, Virtual PC, VMware.VMwareA popular commercial package of virtualization software that emulates the x86 microprocessor architecture. It can be installed on Linux, Mac OS X, @@ -44373,54 +45576,54 @@ OFFSET 7; and allows other "guest" (typically including Linux and Windows) operating systems - to be loaded and executed.See Also Virtualization.Volume BucketsVirtual “containers” to use in batch processing + to be loaded and executed.See Also Virtualization.Volume BucketsVirtual “containers” to use in batch processing of multiple volumes. They can be used to perform various - cataloging/holdings maintenance tasks in batch.WWineA popular open-source application that allows + cataloging/holdings maintenance tasks in batch.WWineA popular open-source application that allows Linux and Unix systems to run Windows executables. 
More information is available at
 - http://www.winehq.org/.XXMLThe eXtensible Markup Language, a subset of SGML; a set of
 + http://www.winehq.org/.XXMLThe eXtensible Markup Language, a subset of SGML; a set of
 rules for encoding information in a way that is both human- and machine-readable. It is primarily used to define documents but can also be used to define arbitrary data structures. It was originally
 - defined by the World Wide Web Consortium (W3C).XMPPThe open-standard communications protocol (based on XML) used
 + defined by the World Wide Web Consortium (W3C).XMPPThe open-standard communications protocol (based on XML) used
 for client-server message passing within Evergreen. It supports the concept of a consistent domain of message types that flow between software applications, possibly on different operating systems and architectures. More information is available
 - at http://xmpp.org.See Also Jabber, ejabberd.xpathThe XML Path Language, a query language based on a tree
 + at http://xmpp.org.See Also Jabber, ejabberd.xpathThe XML Path Language, a query language based on a tree
 representation of an XML document. It is used to programmatically select nodes from an XML document and to do minor computation involving strings, numbers and Boolean values. It allows you to identify parts of the XML document tree, to navigate around the tree, and to uniquely select nodes. The current version is "XPath 2.0". It was originally defined by the World Wide Web Consortium
 - (W3C).XULThe XML User Interface Language, a specialized interface
 + (W3C).XULThe XML User Interface Language, a specialized interface
 language that allows building cross-platform applications that drive Mozilla-based browsers such as Firefox. 
More information is available at - https://developer.mozilla.org/en/XUL.xulrunnerA specialized run-time application environment that provides + https://developer.mozilla.org/en/XUL.xulrunnerA specialized run-time application environment that provides support for installing, upgrading and uninstalling XUL applications. It operates with Mozilla-based applications such as the Firefox browser. More information is available at - https://developer.mozilla.org/en/XULRunner.See Also XUL.YYAZA programmers’ toolkit supporting the development of - Z39.50 / SRW / SRU clients and servers.See Also SRU, SRW, Z39.50.yaz-clientA Z39.50/SRU client for connecting to YAZ servers. + https://developer.mozilla.org/en/XULRunner.See Also XUL.YYAZA programmers’ toolkit supporting the development of + Z39.50 / SRW / SRU clients and servers.See Also SRU, SRW, Z39.50.yaz-clientA Z39.50/SRU client for connecting to YAZ servers. More information is available at - http://www.indexdata.com/yaz/doc/yaz-client.htmlSee Also SRU.ZZ39.50An international standard client–server protocol for + http://www.indexdata.com/yaz/doc/yaz-client.htmlSee Also SRU.ZZ39.50An international standard client–server protocol for communication between computer systems, primarily library and information related systems.See Also SRU. IndexIndex Report errors in this documentation using Launchpad. Index Report any errors in this documentation using Launchpad. 
- IndexIndex + IndexIndex -Aaction triggers, creating, event definitions, hooks, processing, reactors, runner, validators, added contentSyndetics, ANSI, Apache, , logs, Apache modules, autogen, , , , Automated Circulation System, Automated Material Handling (AMH), , BBibTemplate, bookbags, booking reservationcancelling, capturing items, creating, creating a pull list, making a cataloged item bookable, non-bibliographic items, picking up, returning, setting booking permissions, BRE JSON, Ccomma delimited files, configuration filesApache, , , eg.conf, eg_vhost.conf, fm_IDL.xml, oils_sip.xml, , , oils_web.xml, opensrf.xml, , , opensrf_core.xml, , , , startup.pl, copy buckets, (see also item buckets)CQL, CSS3, , cstore, Ddatabases, backing up, PostgreSQL, , , , , , , , , , , directoriesbacking up, DocBook, Documentation Interest Group (DIG), , Dojo toolkit, , domain name, Dublin Core, , Eejabberd, , Ejabberd, environment variablePATH, PERL5LIB, Evergreen Interface Definition Language (IDL), action element, class element, context_field element, field element, fields element, permacrud element, Evergreen software dependencies, FFGDC, Fieldmapper, firewall, GGNU General Public License, HHTTPtranslator, IIP Address, IRC chat, item buckets, (see also copy buckets)JJabber, jabber, (see also XMPP)Java, JavaScript, , , , JSON, , SELECT clauses, LLinuxCentOS, , commands, Debian, , , , , Fedora, , Gentoo, Red Hat, RHEL, Ubuntu, , , , Wine, logsApache, gateway.log, Log Rotate, logging levels, managing, osrfsys.log, router, srfsh.log, Mmagnetic media, mailing lists, , MARC, , MARCXML, , , memcached, , , migratingadding metarecords, batch updating patrons, bibliograohic records, exporting bibliographic recordsMARC, holdings, import into PostgreSQL, importing authority records, importing bibliographic records, patrons, MODS, Nnetwork address, (see also ip address)OOPAC, added content, Google Books, customizingchanging the color scheme, details page, text and labels, testing, 
OpenSRF, , , , Communication Flows, configure, download, installation, services, , , Ppcrud, Perl, CPAN, , , permissions, PKI, proximity map, Python, , RRAID, receipt template editor, reportsstarting, Resource Sharing Cooperative of Evergreen Libraries (RSCEL), rsync, SSAN, search resultscaching, security, SelfCheck, , Simple2ZOOM, SIP, , , , slot formatter, slot marker, sql, , tables, srfsh, , , , srfsh (command-line OpenSRF client), SRU, , SRW, SSH, proxy, Putty, tunneling, SSL, certificates, staff clientcustomizing, labels and messages, testing, staging table, SuperCat, formats, adding, customizing, ISBNs, recent records, records, syslog, , syslog-NG, Ttelnet, tunneling, (see also SSH tunneling)UunAPI, Unicode, VVersion Control Systemgit, Subversion, , virtualization, virtualization softwareVirtual PC, VirtualBox, VMware, volume buckets, Wweb serverApache, , , , , wiki, XXML, XMPP, , , (see also jabber)xpath, xUL, XULRunner, Yyaz, , , ZZ39.50, , +Aaction triggers, creating, event definitions, hooks, processing, reactors, runner, validators, added contentSyndetics, ANSI, Apache, , logs, Apache modules, autogen, , , , Automated Circulation System, Automated Material Handling (AMH), , BBibTemplate, bookbags, booking reservationcancelling, capturing items, creating, creating a pull list, making a cataloged item bookable, non-bibliographic items, picking up, returning, setting booking permissions, BRE JSON, Ccash reports, closed dates editor, due dates, fines, comma delimited files, configuration filesApache, , , eg.conf, eg_vhost.conf, fm_IDL.xml, oils_sip.xml, , , oils_web.xml, opensrf.xml, , , opensrf_core.xml, , , , startup.pl, copy buckets, (see also item buckets)copy locations editor, copy stat cats, CQL, CSS3, , cstore, Ddatabases, backing up, PostgreSQL, , , , , , , , , , , directoriesbacking up, DocBook, Documentation Interest Group (DIG), , Dojo toolkit, , domain name, Dublin Core, , Eejabberd, , Ejabberd, environment variablePATH, PERL5LIB, Evergreen 
Interface Definition Language (IDL), action element, class element, context_field element, field element, fields element, permacrud element, Evergreen software dependencies, FFGDC, field documentationadministering field documentation, patron field documentation, Fieldmapper, firewall, GGNU General Public License, group penalty thresholds, creating local penalty thresholds, HHTTPtranslator, IIP Address, IRC chat, item buckets, (see also copy buckets)JJabber, jabber, (see also XMPP)Java, JavaScript, , , , JSON, , SELECT clauses, Llanguagesdisabling a localization, enabling a localization, library settings editor, LinuxCentOS, , commands, Debian, , , , , Fedora, , Gentoo, Red Hat, RHEL, Ubuntu, , , , Wine, logsApache, gateway.log, Log Rotate, logging levels, managing, osrfsys.log, router, srfsh.log, Mmagnetic media, mailing lists, , MARC, , MARCXML, , , memcached, , , migratingadding metarecords, batch updating patrons, bibliograohic records, exporting bibliographic recordsMARC, holdings, import into PostgreSQL, importing authority records, importing bibliographic records, patrons, MODS, Nnetwork address, (see also ip address)non-catalogued type editor, OOPAC, added content, Google Books, customizingchanging the color scheme, details page, text and labels, testing, OpenSRF, , , , Communication Flows, configure, download, installation, services, , , Ppatron stat cats, pcrud, Perl, CPAN, , , permissions, PKI, proximity map, Python, , RRAID, receipt template editor, , reportsstarting, Resource Sharing Cooperative of Evergreen Libraries (RSCEL), rsync, SSAN, search resultscaching, security, SelfCheck, , Simple2ZOOM, SIP, , , , slot formatter, slot marker, sql, , tables, srfsh, , , , srfsh (command-line OpenSRF client), SRU, , SRW, SSH, proxy, Putty, tunneling, SSL, certificates, staff clientcustomizing, labels and messages, fonts, printer settings, sounds, testing, staging table, statistical categories editor, SuperCat, formats, adding, customizing, ISBNs, recent records, 
records, surveys, syslog, , syslog-NG, Ttelnet, tunneling, (see also SSH tunneling)UunAPI, Unicode, VVersion Control Systemgit, Subversion, , virtualization, virtualization softwareVirtual PC, VirtualBox, VMware, volume buckets, Wweb serverApache, , , , , wiki, XXML, XMPP, , , (see also jabber)xpath, xUL, XULRunner, Yyaz, , , ZZ39.50, , diff --git a/2.0/stafftasks/acquisitions_module-GPLS.xml b/2.0/stafftasks/acquisitions_module-GPLS.xml index 20a9da6f99..5ef3e1a3c1 100644 --- a/2.0/stafftasks/acquisitions_module-GPLS.xml +++ b/2.0/stafftasks/acquisitions_module-GPLS.xml @@ -22,25 +22,25 @@ add brief records to new, pending or on-order purchase orders. Add brief records to a selection list - - Click Acquisitions → New Brief Record. You can also add brief records to + + Click Acquisitions → New Brief Record. You can also add brief records to an existing selection list by clicking the Actions menu on the selection list - and choosing Add Brief Record. - Choose a selection list from the drop down menu, or enter the name of a new selection list. - Enter bibliographic information in the desired fields. - Click Save Record. - + and choosing Add Brief Record. + Choose a selection list from the drop down menu, or enter the name of a new selection list. + Enter bibliographic information in the desired fields. + Click Save Record. + Add brief records to purchase orders You can add brief records to new or existing purchase orders. - - Open or create a purchase order. - Click Add Brief Record. - Enter bibliographic information in the desired fields. Notice that the - record is added to the purchase order that you just created. - Click Save Record. - + + Open or create a purchase order. + Click Add Brief Record. + Enter bibliographic information in the desired fields. Notice that the + record is added to the purchase order that you just created. + Click Save Record. +
@@ -54,30 +54,30 @@ Cancel/suspend copies You can cancel or suspend line items that are in a state of on order or pending order. - - Select the Copies link. - Click the Cancel link adjacent to the copy that you wish to cancel. - Select a cancel reason from the drop down menu that appears, and click Cancel copy. - + + Select the Copies link. + Click the Cancel link adjacent to the copy that you wish to cancel. + Select a cancel reason from the drop down menu that appears, and click Cancel copy. + Cancel/suspend line items You can cancel or suspend line items that are in a state of on order or pending order. - - Check the boxes of the line items that you wish to cancel. - Click Actions →Cancel Selected Lineitems. - Select a cancel reason from the drop down menu. Choose the cancel reason, - and click Cancel Line Items. The status of the line item is now cancelled. - + + Check the boxes of the line items that you wish to cancel. + Click Actions →Cancel Selected Lineitems. + Select a cancel reason from the drop down menu. Choose the cancel reason, + and click Cancel Line Items. The status of the line item is now cancelled. + Cancel/suspend purchase orders - - Notice the Cancel column in the top half of the purchase order. - Click the drop down arrow adjacent to Cancel order, and select a reason for - cancelling the order. - Click Cancel order. The state of the purchase order is cancelled. - + + Notice the Cancel column in the top half of the purchase order. + Click the drop down arrow adjacent to Cancel order, and select a reason for + cancelling the order. + Click Cancel order. The state of the purchase order is cancelled. +
@@ -94,45 +94,45 @@ the purchase order. The default claim policy for a provider is established when the provider is created and will be used for claiming if no claim policy has been applied. - - Open a selection list or purchase order. - Click the Actions drop down menu on the line item. - Click Apply Claim Policy. - A drop down menu of claim policies will appear. Choose a claim policy to + + Open a selection list or purchase order. + Click the Actions drop down menu on the line item. + Click Apply Claim Policy. + A drop down menu of claim policies will appear. Choose a claim policy to apply to the line item. The claim policy will be applied to all items that have - not been received or cancelled. - Click Save. - + not been received or cancelled. + Click Save. + Change a claim policy You can manually change a claim policy that has been applied to a line item. - - Open a selection list or purchase order. - Click the Actions drop down menu on the line item. - Click Change Claim Policy. - A drop down menu of claim policies will appear. Choose a claim policy to apply to the line - item. - Click Save. - + + Open a selection list or purchase order. + Click the Actions drop down menu on the line item. + Click Change Claim Policy. + A drop down menu of claim policies will appear. Choose a claim policy to apply to the line + item. + Click Save. + Claim an item You can manually claim items at any time after the item has been ordered. - - Open a purchase order. - Click the Actions drop down menu on the line item. - Click Claims. The number of existing claims appears in parentheses. - A drop down menu of items to be claimed and possible claim actions appears. + + Open a purchase order. + Click the Actions drop down menu on the line item. + Click Claims. The number of existing claims appears in parentheses. + A drop down menu of items to be claimed and possible claim actions appears. 
Check the boxes adjacent to the item that you want to claim and the action that - you will take. You can claim items that have not been received or cancelled. - Click Claim Selected. - Select a claim type from the drop down menu. Entering a note is optional. - Click Claim. - The number of existing claims on the line item updates, and a claim voucher + you will take. You can claim items that have not been received or cancelled. + Click Claim Selected. + Select a claim type from the drop down menu. Entering a note is optional. + Click Claim. + The number of existing claims on the line item updates, and a claim voucher appears. The voucher can be printed and mailed to the vendor to initiate the - claim. - + claim. + Produce a list of claim-ready items @@ -141,32 +141,32 @@ claiming. Evergreen can produce a list of items, by ordering branch, that are ready to be claimed. You can use this list to manually claim items from your provider. - - Click Acquisitions →Claim-Ready Items. - Choose a branch from the drop down menu to claim items that were ordered by this - branch. - Any items that meet the conditions for claiming will appear. - Check the box adjacent to the line items that you wish to claim. Click Claim selected items. - Select a claim type from the drop down menu. Entering a note is optional. - Click Claim. - + + Click Acquisitions →Claim-Ready Items. + Choose a branch from the drop down menu to claim items that were ordered by this + branch. + Any items that meet the conditions for claiming will appear. + Check the box adjacent to the line items that you wish to claim. Click Claim selected items. + Select a claim type from the drop down menu. Entering a note is optional. + Click Claim. +
Export Single Attribute List You can export ISBNs, ISSNs, or UPCs as a file from the list of line item(s). A list of ISBNs, for example, could be uploaded to vendor websites when placing orders. - - From a selection list or purchase order, check the boxes of the line items with attributes - that you wish to export. - Click Actions → Export Single Attribute List. - Choose the line item attribute that you would like to export from the drop down list of - attributes. - Click Export List. - Save the file to your computer. - Open the file. Choose a program to open the file. The following is an example of an ISBN in - a spreadsheet. - + + From a selection list or purchase order, check the boxes of the line items with attributes + that you wish to export. + Click Actions → Export Single Attribute List. + Choose the line item attribute that you would like to export from the drop down list of + attributes. + Click Export List. + Save the file to your computer. + Open the file. Choose a program to open the file. The following is an example of an ISBN in + a spreadsheet. +
Funds @@ -179,27 +179,27 @@ their attendant copies. Apply funds to individual copies - - Click the Copies link on the line item. - To apply a fund to an individual item, click the drop down arrow in the Fund field. + + Click the Copies link on the line item. + To apply a fund to an individual item, click the drop down arrow in the Fund field. A yellow fund name indicates that the balance in the fund has dropped to the warning percent that was entered in the admin module. A red fund name indicates that the balance in the fund has dropped to the stop percent that was entered in the admin module. Funds that have been closed out will no longer appear on the drop down list. - - + + Apply funds to copies via batch updates to line items You can apply funds to all copies on a line item(s) from the Actions menu on the selection list or the purchase order. - - Check the boxes of the line items with copies to which you would like to apply funds. - Click Actions →Apply Funds to Selected Items. - Select the fund that you wish to apply to the copies. - Click Submit. - + + Check the boxes of the line items with copies to which you would like to apply funds. + Click Actions →Apply Funds to Selected Items. + Select the fund that you wish to apply to the copies. + Click Submit. +
@@ -212,16 +212,16 @@ Create a blanket invoice You can create a blanket invoice for purchases that are not attached to a purchase order. - - Click Acquisitions → Create invoice. - Enter the invoice information in the top half of the screen. - Select a charge type from the drop down menu. - Select a fund from the drop down menu. - Enter a Title/Description of the resource. - Enter the amount that you were billed. - Enter the amount that you paid. - Save the invoice. - + + Click Acquisitions → Create invoice. + Enter the invoice information in the top half of the screen. + Select a charge type from the drop down menu. + Select a fund from the drop down menu. + Enter a Title/Description of the resource. + Enter the amount that you were billed. + Enter the amount that you paid. + Save the invoice. + Create an invoice for a purchase order @@ -230,41 +230,41 @@ the number of items invoiced, billed, and paid for each line item. With the exception of fields with drop down menus, no limitations on the data that you enter exist. - - Open a purchase order. - Click Create Invoice. - Enter a Vendor Invoice ID. This number may be listed on the paper invoice sent from your - vendor. - Choose a Receive Method from the drop down menu. + + Open a purchase order. + Click Create Invoice. + Enter a Vendor Invoice ID. This number may be listed on the paper invoice sent from your + vendor. + Choose a Receive Method from the drop down menu. Only paper invoicing is available in the 2.0 release. Electronic invoicing may be available in future releases. - - The Provider is generated from the purchase order and is entered by default. - Enter a note. - Select a payment method from the drop down menu. - The Invoice Date is entered by default as the date that you create the invoice. You can - change the date by clicking in the field. A calendar drops down. - Enter an Invoice Type. - The Shipper defaults to the provider that was entered in the purchase order. - Enter a Payment Authorization. 
- The Receiver defaults to the branch at which your workstation is registered. You can + + The Provider is generated from the purchase order and is entered by default. + Enter a note. + Select a payment method from the drop down menu. + The Invoice Date is entered by default as the date that you create the invoice. You can + change the date by clicking in the field. A calendar drops down. + Enter an Invoice Type. + The Shipper defaults to the provider that was entered in the purchase order. + Enter a Payment Authorization. + The Receiver defaults to the branch at which your workstation is registered. You can change the receiver by selecting an org unit from the drop down menu. The bibliographic line items are listed in the next section of the invoice. Along with the title and author of the line items is a summary of copies ordered, received, invoiced, claimed, and cancelled. You can also view the amounts estimated, encumbered, and paid for each line item. Finally, each line item has a line item ID and links to the selection list (if used) and the purchase - order. - Enter the number of items that were invoiced, the amount that the organization was billed, - and the amount that the organization paid. - You have the option to add charge types if applicable. Charge types are additional charges + order. + Enter the number of items that were invoiced, the amount that the organization was billed, + and the amount that the organization paid. + You have the option to add charge types if applicable. Charge types are additional charges that can be selected from the drop down menu. Common charge types include taxes and - handling fees. - You have three options for saving an invoice. You can click Save, which saves the changes + handling fees. + You have three options for saving an invoice. You can click Save, which saves the changes that you have made, but keeps the invoice open. 
You can click Save and Prorate, which enables you to save the invoice and prorate any additional charges, such as taxes, across funds, if multiple funds have been used to pay the invoice. You also can click Save and - Close. Choose this option when you have completed the invoice. - + Close. Choose this option when you have completed the invoice. + You can re-open a closed invoice by clicking the link, Re-open invoice. This link appears at the bottom of a closed invoice. @@ -276,12 +276,12 @@ order #1 is retrieved, and the invoice is created. To receive the items on purchase order #2, simply link the invoice to the purchase order. You do not need to recreate it. - - Open a purchase order. - Click Link Invoice. - Enter the Invoice # and the Provider of the invoice to which you wish to link. - Click Link. - + + Open a purchase order. + Click Link Invoice. + Enter the Invoice # and the Provider of the invoice to which you wish to link. + Click Link. + View an invoice @@ -308,35 +308,35 @@ Add alerts to a line item Alerts are pop up messages that appear when an item is received. Alerts can be printed on the line item worksheet. - - Click the Notes link on the line item. - Click the New Alert drop down button. - Choose an alert code from the drop down menu. - Add additional comments if desired. - Click Create. The alert will display on the screen. - Click Return to return to the line item. When you return to the line item, - a flag will appear to indicate that an alert is on the line item. - + + Click the Notes link on the line item. + Click the New Alert drop down button. + Choose an alert code from the drop down menu. + Add additional comments if desired. + Click Create. The alert will display on the screen. + Click Return to return to the line item. When you return to the line item, + a flag will appear to indicate that an alert is on the line item. + Add copies to a line item Use the Copies link to add copy information to a line item. 
You can add copies to line items on a selection list or a purchase order. - - Click the Copies link on a line item. - Enter the number of items that you want to order in Item Count, and click Go. The number - of items that you want to order will display below. - If desired, apply a Distribution Formula from the drop down list. Distribution formulas tell - the ILS how many copies should be distributed to each location. - The owning branch and shelving location populate with entries from the distribution - formula. Click Apply. - Look back at the top gray row of text boxes above the distribution formula. Each text box in + + Click the Copies link on a line item. + Enter the number of items that you want to order in Item Count, and click Go. The number + of items that you want to order will display below. + If desired, apply a Distribution Formula from the drop down list. Distribution formulas tell + the ILS how many copies should be distributed to each location. + The owning branch and shelving location populate with entries from the distribution + formula. Click Apply. + Look back at the top gray row of text boxes above the distribution formula. Each text box in this row corresponds to the columns below. Changes made here will be applied to all - copies below. Click Batch Update. - Click Save Changes. - Click Return to return to the selection list or purchase order. - Add the item’s price to the line item in the Estimated Price field. - + copies below. Click Batch Update. + Click Save Changes. + Click Return to return to the selection list or purchase order. + Add the item’s price to the line item in the Estimated Price field. + Add notes to a line item @@ -344,17 +344,17 @@ to the line item. Notes can be internal or can be made available to providers. Notes appear in a pop up box when an item is received. Notes can be printed on line item worksheets, which can be printed and placed in books for processing. - - Click the Notes link on the line item. 
- Click the New Note drop down button. - Enter a note. - You have the option to make this note available to your provider. Click the - check box adjacent to Note is vendor-public. - Click Create. The note will appear on the screen. - Click Return to return to the line item. When you return to the line item, + + Click the Notes link on the line item. + Click the New Note drop down button. + Enter a note. + You have the option to make this note available to your provider. Click the + check box adjacent to Note is vendor-public. + Click Create. The note will appear on the screen. + Click Return to return to the line item. When you return to the line item, a number in parentheses adjacent to notes indicates how many notes are attached - to the item. - + to the item. + Holdings maintenance @@ -365,12 +365,12 @@ Link to invoice Use the Link to invoice menu item to link the line item to an invoice that already exists in the ILS. - - Click Actions → Link to Invoice. - A pop up box appears. Enter an invoice number. - Enter a provider. The field will auto-complete. - Click Link. - + + Click Actions → Link to Invoice. + A pop up box appears. Enter an invoice number. + Enter a provider. The field will auto-complete. + Click Link. + Update barcodes @@ -392,11 +392,11 @@ details about the line item, including alerts and notes, and distribution of the copies. This worksheet could be placed in a book that is sent to cataloging or processing. - - From a selection list or purchase order, click the worksheet link on the line item. - The line item worksheet appears. - To print the worksheet, click the Print Page link in the top right corner. - + + From a selection list or purchase order, click the worksheet link on the line item. + The line item worksheet appears. + To print the worksheet, click the Print Page link in the top right corner.
+ Link line items to the catalog @@ -407,14 +407,14 @@ enables you to search for a matching record and link to it from the selection list or purchase order. When you import the record from the purchase order, the record will overlay the linked record in the catalog. - - From the line item, click Link to catalog. - In the text box that pops up, search terms, such as ISBN and title, are entered by default. - Click Search. - Result(s) appear. Click the link to View MARC, or Select the record to link it to the record on - the selection list or purchase order. - The screen will reload, and the line item displays with a catalog link. The records are linked. - + + From the line item, click Link to catalog. + In the text box that pops up, search terms, such as ISBN and title, are entered by default. + Click Search. + Result(s) appear. Click the link to View MARC, or Select the record to link it to the record on + the selection list or purchase order. + The screen will reload, and the line item displays with a catalog link. The records are linked. +
@@ -445,22 +445,22 @@ extra copy of six titles. Your copy/hold ratio report includes the record ID of each title. You can save the record IDs into a CSV file, upload the file into the ILS, and create a purchase order for the items. - - Create a CSV file with the record ID of each catalog record in the first + + Create a CSV file with the record ID of each catalog record in the first column of the spreadsheet. You can create this CSV file from a spreadsheet generated by a report, as suggested in the aforementioned example. You can also copy and paste record IDs from the catalog record into the CSV file. Record IDs are auto-generated digits associated with each record. They are found in the Record Summary that appears at the top of each record. - - Save the CSV file to your computer. - Click Acquisitions → Load Catalog Record IDs. - Click Load More Terms. - The screen will display the number of terms (record IDs) that have been loaded. - Click Retrieve Records. The records will appear as line items to which you can add copies, + + Save the CSV file to your computer. + Click Acquisitions → Load Catalog Record IDs. + Click Load More Terms. + The screen will display the number of terms (record IDs) that have been loaded. + Click Retrieve Records. The records will appear as line items to which you can add copies, notes, and pricing information. Use the Actions menu to save these items to a selection list or - purchase order. - + purchase order. + Load MARC Order Records @@ -468,46 +468,46 @@ saved on your computer into the ILS. You can add the records to a selection list and/or to a purchase order. You can both create and activate purchase orders in one step from this interface. Also, from this interface, you can load bibs and items into the catalog. - - Click Acquisitions → Load MARC Order Records - If you want to upload the MARC records to a new purchase order, then click the check box - adjacent to Create Purchase Order. 
- If you want to activate the purchase order at the time of creation, then click the check box - adjacent to Activate Purchase Order. - If you want to load bibs and items into the catalog, then click the check box adjacent to - Load Bibs and Items into the ILS. - Enter the name of the Provider. The text will auto-complete. - Select an org unit from the drop down menu. The context org unit is the org unit that + + Click Acquisitions → Load MARC Order Records + If you want to upload the MARC records to a new purchase order, then click the check box + adjacent to Create Purchase Order. + If you want to activate the purchase order at the time of creation, then click the check box + adjacent to Activate Purchase Order. + If you want to load bibs and items into the catalog, then click the check box adjacent to + Load Bibs and Items into the ILS. + Enter the name of the Provider. The text will auto-complete. + Select an org unit from the drop down menu. The context org unit is the org unit that "owns" the bib record. You should select a physical location rather than a political or administrative org unit as the context org unit. For example, the Smith County Library System is funding purchase of a copy of Gone with the Wind. The system owns the bib record, but it cannot receive the physical item. The acquisitions librarian will choose a physical branch of that system, a processing center or an individual branch, to receive the - item. - If you want to upload the records to a selection list, you can select a list from the drop down - menu, or type in the name of the selection list that you want to create. - Click Browse to search for the file of bibliographic records. - Click Upload. - A summary of the items that have been processed will appear. - Click the links that appear to view the purchase order or the selection list. - + item. 
+ If you want to upload the records to a selection list, you can select a list from the drop down + menu, or type in the name of the selection list that you want to create. + Click Browse to search for the file of bibliographic records. + Click Upload. + A summary of the items that have been processed will appear. + Click the links that appear to view the purchase order or the selection list. + MARC Federated Search The MARC Federated Search enables you to import bibliographic records into a selection list or purchase order from a Z39.50 source. - - Click Acquisitions → MARC Federated Search. - Check the boxes of Z39.50 services that you want to search. Your local Evergreen Catalog is - checked by default. Click Submit. - A list of results will appear. Click the Copies link to add copy information to the line item. - Click the Notes link to add notes or line item alerts to the line item. - Enter a price in the Estimated Price field. - You can save the line item(s) to a selection list by checking the box on the line item and + + Click Acquisitions → MARC Federated Search. + Check the boxes of Z39.50 services that you want to search. Your local Evergreen Catalog is + checked by default. Click Submit. + A list of results will appear. Click the Copies link to add copy information to the line item. + Click the Notes link to add notes or line item alerts to the line item. + Enter a price in the Estimated Price field. + You can save the line item(s) to a selection list by checking the box on the line item and clicking Actions →Save Items to Selection List. You can also create a purchase order from the line item(s) by checking the box on the line item and clicking Actions → Create - Purchase Order. - + Purchase Order. +
@@ -525,13 +525,13 @@ Activate a purchase order Before you can activate a purchase order, the following criteria must be met: - - The field, Activate Order?, is located in the top half of the purchase order. The answer - adjacent to this field must be Yes. - Each line item must contain an estimated price. If the Activate Order? field in the top half + + The field, Activate Order?, is located in the top half of the purchase order. The answer + adjacent to this field must be Yes. + Each line item must contain an estimated price. If the Activate Order? field in the top half of the purchase order reads, No: The lineitem has no price (ACQ_LINEITEM_NO_PRICE), - then simply enter a price in the estimated price field, tab out of the field, and click Reload. - When the above criteria have been met, proceed with the following: + then simply enter a price in the estimated price field, tab out of the field, and click Reload. + When the above criteria have been met, proceed with the following: Look at the Activate Order? field in the top half of the purchase order. Click the hyperlinked Activate Order. When you activate the order, the bibliographic records and copies will be imported into the catalog, and the funds associated with the purchases will be @@ -543,43 +543,43 @@ You can add charges, taxes, fees, or discounts to a purchase order. These additional charges will be reflected in the amounts that are estimated and encumbered on the purchase order. - - Open or create a purchase order. - Click New charge. - Select a charge type from the drop down menu. - Select a fund from the drop down menu. - Enter a Title/Description, Author, and Note if applicable. - Enter an estimated cost. - Add another new charge, or click Save New Charges. - + + Open or create a purchase order. + Click New charge. + Select a charge type from the drop down menu. + Select a fund from the drop down menu. + Enter a Title/Description, Author, and Note if applicable. + Enter an estimated cost. 
+ Add another new charge, or click Save New Charges. + Discounts are not consistently supported in the 2.0 release. Add notes to a purchase order You can add notes to each purchase order. These can be viewed by staff and/or by the provider. By default, notes are only visible to staff. - - Open a purchase order. - In the top half of the purchase order, you see a Notes field. The number of notes that are - attached to the purchase order is hyperlinked in parentheses next to the Notes field. - Click the hyperlinked number. - Click New Note. - Enter the note. If you wish to make it available to the provider, click the check box adjacent - to Note is vendor-public. - Click Create. - + + Open a purchase order. + In the top half of the purchase order, you see a Notes field. The number of notes that are + attached to the purchase order is hyperlinked in parentheses next to the Notes field. + Click the hyperlinked number. + Click New Note. + Enter the note. If you wish to make it available to the provider, click the check box adjacent + to Note is vendor-public. + Click Create. + Create a purchase order - - Click Acquisitions → Create Purchase Order. - A pop-up box appears. Select an owning library from the drop down menu. - Enter a provider in the box. The text will auto complete. - Check the box adjacent to Prepayment Required. - Click Save. - The purchase order has been created. You can now create a new charge type or add a brief - record. - + + Click Acquisitions → Create Purchase Order. + A pop-up box appears. Select an owning library from the drop down menu. + Enter a provider in the box. The text will auto complete. + Check the box adjacent to Prepayment Required. + Click Save. + The purchase order has been created. You can now create a new charge type or add a brief + record. + The Total Estimated is the sum of the prices. The Total Encumbered is the total estimated that is encumbered when the purchase order is activated. 
The Total Spent column automatically updates when the items are invoiced.
@@ -588,30 +588,30 @@
Mark ready for order
After an item has been added to a selection list or purchase order, you can mark it ready for
order. This step is optional but may be useful to individual workflows.
-
- If you want to mark part of a selection list ready for order, then you can check the box(es)
+
+ If you want to mark part of a selection list ready for order, then you can check the box(es)
of the line item(s) that you wish to mark ready for order. If you want to mark the entire
- list ready for order, then skip to step 2.
- Click Actions → Mark Ready for Order.
- A pop up box will appear. Choose to mark the selected line items or all line items.
- Click Go.
- The screen will refresh. The line item will be highlighted gray, and the status will change to
- order-ready.
-
+ list ready for order, then skip to step 2.
+ Click Actions → Mark Ready for Order.
+ A pop up box will appear. Choose to mark the selected line items or all line items.
+ Click Go.
+ The screen will refresh. The line item will be highlighted gray, and the status will change to
+ order-ready.
+
Name a purchase order
A new purchase order is given the purchase order ID as a default name. However, you can
change that name to any grouping of letters or numbers. You can reuse purchase order names
as long as a name is never used twice in the same year.
-
- Open or create a purchase order.
- The Name of the purchase order is in the top left column of the purchase order. The
- hyperlinked number is an internal ID number that Evergreen has assigned.
+ To change this number, click on the hyperlinked ID. + Enter a new purchase order number in the pop up box. + Click OK. + Print purchase orders @@ -619,22 +619,22 @@ note to a line item, the note will only appear in the Notes column on the printed purchase order if you make the note vendor-public. Currently, no notes appear in the Notes to the Vendor section of the printed purchase order. - - Open a purchase order. - Click Actions → Print Purchase Order. - + + Open a purchase order. + Click Actions → Print Purchase Order. + Split order by line items You can create a purchase order with multiple line items, and then split the purchase order so that each line item is on separate purchase orders.When a purchase order is in the status of pending, a link to split order by Lineitems appears in the bottom left corner of the top half of the screen. - - Click Split Order by Lineitems. - A pop up box will confirm that you want to split the purchase order. Click OK to continue. - The items will display by default as a virtual combined purchase order. Future - enhancements will allow you to activate the purchase order for each item from this screen. - + + Click Split Order by Lineitems. + A pop up box will confirm that you want to split the purchase order. Click OK to continue. + The items will display by default as a virtual combined purchase order. Future + enhancements will allow you to activate the purchase order for each item from this screen. + View On-Order Purchase Orders @@ -647,13 +647,13 @@ View EDI messages on a purchase order You can view electronic messages from your vendor about a specific purchase order. - - Open a purchase order. - In the top half of the purchase order, you see an EDI Messages field. The number of + + Open a purchase order. + In the top half of the purchase order, you see an EDI Messages field. The number of messages that are attached to the purchase order is hyperlinked in parentheses next to the - EDI Messages field. 
- Click the hyperlinked number to view the messages.
-
+ EDI Messages field.
+ Click the hyperlinked number to view the messages.
+
View Purchase Order History
@@ -699,17 +699,17 @@
Users may wish to begin their acquisitions process by searching line items and catalog
records. This ensures that they do not purchase an item that the library already owns or is on
another selection list or purchase order.
-
- Choose the object that you would like to search from the drop down menu.
- Next, refine your search by choosing the specific fields that you would like to search. Click
+
+ Choose the object that you would like to search from the drop down menu.
+ Next, refine your search by choosing the specific fields that you would like to search. Click
Add Search Term to add more fields. Search terms are ANDed together. Click the red X at
the end of each row to delete search terms. Some search terms will be disabled depending
- on your choice of items to search.
- After you have added search term(s), click Search or press the Enter key. A list of results
- appears.
- If you want to edit your search, click the Reveal Search button in the top right corner of the
- results screen to display your search.
-
+ on your choice of items to search.
+ After you have added search term(s), click Search or press the Enter key. A list of results
+ appears.
+ If you want to edit your search, click the Reveal Search button in the top right corner of the
+ results screen to display your search.
+
Selection Lists @@ -724,12 +724,12 @@ Federated Search. In each of these interfaces, you will find the Add to Selection List field. Enter the name of the selection list that you want to create in that field. Selection lists can also be created through the My Selection Lists interface: - - Click Acquisitions → My Selection Lists. - Click the New Selection List drop down arrow. - Enter the name of the selection list in the box that appears. - Click Create. - + + Click Acquisitions → My Selection Lists. + Click the New Selection List drop down arrow. + Enter the name of the selection list in the box that appears. + Click Create. + Add items to a selection list @@ -742,73 +742,73 @@ Cloning selection lists enables you to copy one selection list into a new selection list. You can maintain both copies of the list, or you can delete the previous list. - - Click Acquisitions → My Selection Lists. - Check the box adjacent to the list that you want to clone. - Click Clone Selected. - Enter a name into the box that appears, and click Clone. - + + Click Acquisitions → My Selection Lists. + Check the box adjacent to the list that you want to clone. + Click Clone Selected. + Enter a name into the box that appears, and click Clone. + Merge selection lists You can merge two or more selection lists into one selection list. - - Click Acquisitions → My Selection Lists. - Check the boxes adjacent to the selection lists that you want to merge, and click Merge - Selected. - Choose the Lead Selection List from the drop down menu. This is the list to which the items - on the other list(s) will be transferred. - Click Merge. - + + Click Acquisitions → My Selection Lists. + Check the boxes adjacent to the selection lists that you want to merge, and click Merge + Selected. + Choose the Lead Selection List from the drop down menu. This is the list to which the items + on the other list(s) will be transferred. + Click Merge. 
+ Delete selection lists You can delete selection lists that you do not want to save. You will not be able to retrieve these items through the General Search after you have deleted the list. You must delete all line items from a selection list before you can delete the list. - - Click Acquisitions → My Selection Lists. - Check the box adjacent to the selection list(s) that you want to delete. - Click Delete Selected. - + + Click Acquisitions → My Selection Lists. + Check the box adjacent to the selection list(s) that you want to delete. + Click Delete Selected. + Mark Ready for Selector After an item has been added to a selection list or purchase order, you can mark it ready for selector. This step is optional but may be useful to individual workflows. - - If you want to mark part of a selection list ready for selector, then you can check the box(es) + + If you want to mark part of a selection list ready for selector, then you can check the box(es) of the line item(s) that you wish to mark ready for selector. If you want to mark the entire - list ready for selector, then skip to step 2. - Click Actions → Mark Ready for Selector. - A pop up box will appear. Choose to mark the selected line items or all line items. - Click Go. - The screen will refresh. The marked line item(s) will be highlighted pink, and the status - changes to selector-ready. - + list ready for selector, then skip to step 2. + Click Actions → Mark Ready for Selector. + A pop up box will appear. Choose to mark the selected line items or all line items. + Click Go. + The screen will refresh. The marked line item(s) will be highlighted pink, and the status + changes to selector-ready. + Convert selection list to purchase order Use the Actions menu to convert a selection list to a purchase order. - - From a selection list, click Actions → Create Purchase Order. - A pop up box will appear. - Select the ordering agency from the drop down menu. - Enter the provider. 
- Check the box adjacent to prepayment required if prepayment is required. - Choose if you will add All Lineitems or Selected Lineitems to your purchase order. - Check the box if you want to Import Bibs and Create Copies in the catalog. - Click Submit. - + + From a selection list, click Actions → Create Purchase Order. + A pop up box will appear. + Select the ordering agency from the drop down menu. + Enter the provider. + Check the box adjacent to prepayment required if prepayment is required. + Choose if you will add All Lineitems or Selected Lineitems to your purchase order. + Check the box if you want to Import Bibs and Create Copies in the catalog. + Click Submit. +
View/Place Orders - - Open a bib record. - Click Actions for this Record → View/Place Orders. - Click Add to Selection List, or click Create Purchase Order. - + + Open a bib record. + Click Actions for this Record → View/Place Orders. + Click Add to Selection List, or click Create Purchase Order. +
diff --git a/2.0/stafftasks/kcls_acquisitions.xml b/2.0/stafftasks/kcls_acquisitions.xml index c1f48c0d6d..a64fe0d37a 100644 --- a/2.0/stafftasks/kcls_acquisitions.xml +++ b/2.0/stafftasks/kcls_acquisitions.xml @@ -7,368 +7,371 @@ Find or Create the Record For adds: - - Search for title in the catalog. - Click on the title link. - Right-click on the Bib Call # at the top of screen and copy the call number. - Go to Marc Edit on Actions for this Record menu. (You can set Marc Edit to be your default if you choose). - Confirm correct ISBN/UPC is in top position. If not, move it to top. This can be done in the Flat Text Editor. Then copy/paste the fields where - you need them to go. - + + Search for title in the catalog. + Click on the title link. + Right-click on the Bib Call # at the top of screen and copy the call number. + Go to Marc Edit on Actions for this Record menu. (You can set Marc Edit to be your default if you choose). + Confirm correct ISBN/UPC is in top position. If not, move it to top. This can be done in the Flat Text Editor. Then copy/paste the fields where + you need them to go. + For new orders: - - For print orders, search for title in OCLC. If the record is in OCLC: - Update holdings in OCLC. - Confirm correct ISBN/UPC is in top position. If not, move it to top. - Export it into Evergreen using the ACQMASTERMACRO OCLC macro (do not overlay). - Search for title in the catalog. Click on the title link. - For non-print orders OR if the record is not in OCLC, create a brief record: - - Select Create New Marc Record on the Cataloging menu. - Click the Load button. This will bring up a blank Marc record. - Enter your short record information. Use tab or mouse to move from one field to the next. Click on the Help button to see shortcut keys. - Enter the date in TWO places. Enter the date in the 260ǂc AND enter in the Date1 box at top of record. - Add a row at the end of the record (put cursor in last row and type Ctrl+Enter). Type in 998 and 2 spaces. 
Type a “d” (the “d” should be blue). + + For print orders, search for title in OCLC. If the record is in OCLC: + Update holdings in OCLC. + Confirm correct ISBN/UPC is in top position. If not, move it to top. + Export it into Evergreen using the ACQMASTERMACRO OCLC macro (do not overlay). + Search for title in the catalog. Click on the title link. + For non-print orders OR if the record is not in OCLC, create a brief record: + + Select Create New Marc Record on the Cataloging menu. + Click the Load button. This will bring up a blank Marc record. + Enter your short record information. Use tab or mouse to move from one field to the next. Click on the Help button to see shortcut keys. + Enter the date in TWO places. Enter the date in the 260ǂc AND enter in the Date1 box at top of record. + Add a row at the end of the record (put cursor in last row and type Ctrl+Enter). Type in 998 and 2 spaces. Type a “d” (the “d” should be blue). Then type the letter code - that corresponds to the material type in lowercase (for example, book=a). - Click the Create Record button. Reload if needed. - - - + that corresponds to the material type in lowercase (for example, book=a). + Click the Create Record button. Reload if needed. + + + if you need to go back and edit the short record after clicking Create Record, remember that the 901 field must be the last field in the record. All fields following the 901 will be deleted when you save the record. Create the Order - - From the catalog record, click View/Place Orders on the Actions for this Record menu. - Click on the Create Purchase Order button. - Enter the following as shown below: - - Ordering Agency = PR - Enter Provider code (type slowly) - Uncheck “Prepayment Required” check box (unless valid). - Confirm “All Line Items” button is selected. - - - Click Submit. - If you get a dialog box about prepayment being required even though you unchecked the box, click on OK to proceed anyway. - If you get this error, click OK and Reload. 
- Click on Copies. - On the Copies screen: - - Enter item count and click Go. - Enter shelving location in 2nd drop down in Batch Update row. - Enter fund in 4th drop down in Batch Update row. - Enter Circ Modifier in 5th drop down in Batch Update row. - For NEW orders, enter ON ORDER call number (format specific) in last box in Batch Update row. - For ADDS, paste in the call number from the bib record. - Click Batch Update. - Enter Distribution Formula and click Apply. - Click Save Changes. - Click Return. - - - Click Notes. Add a note for format (for example, paperback, library binding, etc.). Check the box in the note to make it vendor public. It will print on + + From the catalog record, click View/Place Orders on the Actions for this Record menu. + Click on the Create Purchase Order button. + Enter the following as shown below: + + Ordering Agency = PR + Enter Provider code (type slowly) + Uncheck “Prepayment Required” check box (unless valid). + Confirm “All Line Items” button is selected. + + + Click Submit. + If you get a dialog box about prepayment being required even though you unchecked the box, click on OK to proceed anyway. + If you get this error, click OK and Reload. + Click on Copies. + On the Copies screen: + + Enter item count and click Go. + Enter shelving location in 2nd drop down in Batch Update row. + Enter fund in 4th drop down in Batch Update row. + Enter Circ Modifier in 5th drop down in Batch Update row. + For NEW orders, enter ON ORDER call number (format specific) in last box in Batch Update row. + For ADDS, paste in the call number from the bib record. + Click Batch Update. + Enter Distribution Formula and click Apply. + Click Save Changes. + Click Return. + + + Click Notes. Add a note for format (for example, paperback, library binding, etc.). Check the box in the note to make it vendor public. It will print on PO or be transmitted to vendor electronically. 
Enter another note for cataloging instructions (for example, CAT A) but do not check the vendor public box. Enter other notes as needed. - - Click Return. - Enter the item price in the Estimated Price box. - Click Reload. - Click on Activate Order link. - Select Print Purchase Order (if not an EDI account) from the P.O. Actions drop down menu. - If order has copies for suppressed libraries or Reference items, click the Catalog link next to the line item number (or go back to Bib Record tab). + + Click Return. + Enter the item price in the Estimated Price box. + Click Reload. + Click on Activate Order link. + Select Print Purchase Order (if not an EDI account) from the P.O. Actions drop down menu. + If order has copies for suppressed libraries or Reference items, click the Catalog link next to the line item number (or go back to Bib Record tab). Select Holdings Maintenance from the Actions for this Record menu. Edit the items/apply templates. Example: Suppressed library Example: Adult Reference - - If order has a hold(s), click the Catalog link next to the line item number (or go back to Bib Record tab) and place hold(s). - + + If order has a hold(s), click the Catalog link next to the line item number (or go back to Bib Record tab) and place hold(s). +
Receiving Print Materials - - + + From the Cataloging menu Select Search the Catalog. - - + + Select your title. (If you cannot find the record linked to the order, use the Acquisitions General Search to search by line number – see #1 under Alternative Workflow section.) - - + + Click on Actions for this Record Select View/Place Orders. - - + + Verify the Line number and Purchase Order number matches numbers on packing slip/invoice. Click on Purchase Order Number link. - - + + Purchase Order will display. (Purchase Order Status & Line Item Status = on-order) - - + + Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see Partial Receipts section. - - + + Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.” - - + + Click on worksheet link. Print worksheet. Click Go Back. - - + + Switch tabs back to Bib Record tab. Catalog record and if it is a new title, update call number when possible. - - + + Switch tabs back to Related Lineitems tab. Click on the Line Item Actions drop down menu Select Update Barcodes. - - + + Or go to Holdings Maintenance to replace barcodes. - - + + Apply call number to all copies and replace barcodes. - - + + Click Edit then Re-barcode – the Copy Editor box will appear. Apply templates as needed, confirm suppressed libraries and Reference items are correctly flagged, and click Modify Copies. - - + + For new orders, go to Holdings Maintenance and delete ON ORDER call numbers. (You can go to Holdings Maintenance by switching tabs back to Bib Record tab or by selecting Holdings Maintenance on the Line Item Actions drop down menu.) 
- - + + Receiving Print Materials - Alternative Workflow You can also receive in Acquisitions on one tab and then search the catalog by title on another tab. - - From the Acquisitions menu Select General Search + + From the Acquisitions menu Select General Search - To search by ISBN:Search for “line items” matching “all” of the following terms: “LIA – ISBN” is “[enter/scan ISBN].” + To search by ISBN:Search for “line items” matching “all” of the following terms: “LIA – ISBN” is + “[enter/scan ISBN].” To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is “[enter your line number from packing slip/invoice].” - - Click Search. - Click Purchase Order number link. - Purchase Order will display. (Purchase Order Status & Line Item Status = on-order) - Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format + + Click Search. + Click Purchase Order number link. + Purchase Order will display. (Purchase Order Status & Line Item Status = on-order) + Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see - Partial Receipts section. - Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.” - Click on worksheet link. Print worksheet. Click Go Back. - Open a new tab and search the catalog by title. Select title. - Catalog record and if it is a new title, update call number when possible. - Switch tabs back to Related Lineitems tab. Click on the Line Item Actions drop down menu Select Update Barcodes. - Or go to Holdings Maintenance to replace barcodes. - Apply call number to all copies and replace barcodes. 
- Click Edit then Re-barcode – the Copy Editor box will appear. Apply templates as needed, confirm suppressed libraries and Reference items are correctly - flagged, and click Modify Copies. - For new orders, go to Holdings Maintenance and delete ON ORDER call numbers. (You can go to Holdings Maintenance by switching tabs back to Bib Record tab or - by selecting Holdings Maintenance on the Line Item Actions drop down menu.) - + Partial Receipts section. + Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.” + Click on worksheet link. Print worksheet. Click Go Back. + Open a new tab and search the catalog by title. Select title. + Catalog record and if it is a new title, update call number when possible. + Switch tabs back to Related Lineitems tab. Click on the Line Item Actions drop down menu Select Update Barcodes. + Or go to Holdings Maintenance to replace barcodes. + Apply call number to all copies and replace barcodes. + Click Edit then Re-barcode – the Copy Editor box will appear. Apply templates as needed, confirm suppressed libraries and Reference items are correctly + flagged, and click Modify Copies. + For new orders, go to Holdings Maintenance and delete ON ORDER call numbers. (You can go to Holdings Maintenance by switching tabs back to Bib Record tab or + by selecting Holdings Maintenance on the Line Item Actions drop down menu.) + Receiving Print Materials - Partial Receipts First Shipment: - - From the Purchase Order screen, click Notes link. Check for any earlier partial receipts. - If first shipment, then click Copies link. - Click “Mark Received” for number of copies in hand (start with top copy). Click Return. - If you have most of the copies in hand, you can also select Mark Received (on Actions drop down menu) for entire order and then “un-receive” the - missing copy/copies (starting with top copy). Click Return. 
- Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL). - Catalog as usual. - To replace barcodes and apply down call number, you must use Holdings Maintenance. Currently it is not possible to replace barcodes using Update Barcodes - in Line Item Actions drop down menu. - Remember to flag title on invoice and change no. of copies on worksheet. - + + From the Purchase Order screen, click Notes link. Check for any earlier partial receipts. + If first shipment, then click Copies link. + Click “Mark Received” for number of copies in hand (start with top copy). Click Return. + If you have most of the copies in hand, you can also select Mark Received (on Actions drop down menu) for entire order and then “un-receive” the + missing copy/copies (starting with top copy). Click Return. + Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL). + Catalog as usual. + To replace barcodes and apply down call number, you must use Holdings Maintenance. Currently it is not possible to replace barcodes using Update Barcodes + in Line Item Actions drop down menu. + Remember to flag title on invoice and change no. of copies on worksheet. + Next Shipment: - - Check Notes to see how many items were previously received. - If shipment completes the order, click on the Actions drop down menu and select Mark Received. - If shipment does not complete order, click Copies and mark individual copies as received (as shown above). - Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL). - If the shipment completes the order, click on the Line Item Actions drop down menu Select Update Barcodes. Apply down call number and replace barcodes. - Or replace barcodes in Holdings Maintenance. - Remember to flag title on invoice and change no. of copies on worksheet. Also highlight locations to receive second shipment on worksheet or cross off - locations already received. 
- + + Check Notes to see how many items were previously received. + If shipment completes the order, click on the Actions drop down menu and select Mark Received. + If shipment does not complete order, click Copies and mark individual copies as received (as shown above). + Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL). + If the shipment completes the order, click on the Line Item Actions drop down menu Select Update Barcodes. Apply down call number and replace barcodes. + Or replace barcodes in Holdings Maintenance. + Remember to flag title on invoice and change no. of copies on worksheet. Also highlight locations to receive second shipment on worksheet or cross off + locations already received. + Unreceiving Print Materials - - To un-receive an order, go to the Purchase Order screen. + + To un-receive an order, go to the Purchase Order screen. - - To un-receive the complete order, click on Actions drop down menu Select Un-Receive. + + To un-receive the complete order, click on Actions drop down menu Select Un-Receive. - - To un-receive a partial order, click on Copies link and click Un-Receive for individual copies. + + To un-receive a partial order, click on Copies link and click Un-Receive for individual copies. - - If the barcodes have already been replaced, go to Holdings Maintenance and replace real barcode numbers with temporary barcode numbers. To create a temporary barcode use your initials and a number (example: cme1). Start with 1 and then auto-generate as needed. Keep track of last number used to start with the next time so you don’t create duplicate barcodes. + + If the barcodes have already been replaced, go to Holdings Maintenance and replace real barcode numbers with temporary barcode numbers. To create a temporary barcode use your initials and a number (example: cme1). Start with 1 and then auto-generate as needed. 
Keep track of last number used to start with the next time so you don’t create duplicate barcodes. - - + +
Receiving Non-print Materials - - + + From the Cataloging menu Search the Catalog. - - + + Select your title. - - + + Click on Actions for this Record Select View/Place Orders. Verify that the line number and purchase order number matches numbers on packing slip/invoice. If purchase order number is not printed on packing slip/invoice, write the purchase order number on packing slip/invoice. - - + + Click on purchase order number link. - - + + Purchase Order will display. (Purchase Order status & Line Item status = on-order) - - + + Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see Partial Receipts section. - - + + Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.” - - + + Click on worksheet link. Print worksheet. Click Go Back. - - + + If new title, overlay short record with OCLC record if available. - - + + Switch tabs back to Bib Record tab. Copy TCN. - - + + Search OCLC for record. If found, export using overlay macro. - - + + Reload record to confirm overlay. - - - - + + + + Receiving Non-print Materials - Alternative Workflow - - From the Acquisitions menu Select General Search. + + From the Acquisitions menu Select General Search. 
- To search by UPC or ISBN: Search for “line items” matching “all” of the following terms: “LIA – UPC” is “[enter/scan UPC] or “LIA - ISBN” is + To search by UPC or ISBN: Search for “line items” matching “all” of the following terms: “LIA – UPC” is “[enter/scan UPC] + or “LIA - ISBN” is “[enter/scan ISBN].” - To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is “[enter your line number from packing + To search by line number: Search for “line items” matching “all” of the following terms: “LI – Lineitem ID” is + “[enter your line number from packing slip/invoice].” - - Click Search. - Click Purchase Order number link. - Purchase Order will display. (Purchase Order status & Line Item status = on-order) - Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format + + Click Search. + Click Purchase Order number link. + Purchase Order will display. (Purchase Order status & Line Item status = on-order) + Check item(s) received against the Line Item details and verify the following: Title/Author, ISBN, Publisher, Pub. Date, Edition, Format (Format is in Notes), Est. Price, - Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see Partial Receipts section. - Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.” - Click on worksheet link. Print worksheet. Click Go Back. - If new title, overlay short record with OCLC record if available. - - Switch to second tab and search for title in catalog. Copy TCN. - Search OCLC for record. If found, export using overlay macro. - Reload record to confirm overlay. - - - + Status, Provider, and Copies. If everything checks out with the order, you are ready to receive. If partial receipt, see Partial Receipts section. 
+ Click on Line Item Actions drop down menu Select Mark Received. Purchase Order & Line Item status will change to “received.” + Click on worksheet link. Print worksheet. Click Go Back. + If new title, overlay short record with OCLC record if available. + + Switch to second tab and search for title in catalog. Copy TCN. + Search OCLC for record. If found, export using overlay macro. + Reload record to confirm overlay. + + + Receiving Non-print Materials - Partial Receipts First Shipment: - - From the Purchase Order screen, check Notes for any earlier partial receipts. - If first shipment, then click Copies link. - Click “Mark Received” for number of copies in hand (start with top copy). Click Return. - Or if you have the majority of the copies in hand, you can select Mark Received (on Actions drop down menu) for entire order and then “un-receive” + + From the Purchase Order screen, check Notes for any earlier partial receipts. + If first shipment, then click Copies link. + Click “Mark Received” for number of copies in hand (start with top copy). Click Return. + Or if you have the majority of the copies in hand, you can select Mark Received (on Actions drop down menu) for entire order and then “un-receive” the missing copy/copies (starting - with top copy). Click Return. - Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL). - Remember to flag title on invoice and change no. of copies on worksheet. - + with top copy). Click Return. + Add a note stating how many copies were received (example: receiving 4 copies on 1/1/11 JL). + Remember to flag title on invoice and change no. of copies on worksheet. + Next Shipment: - - Check Notes to see how many items were previously received. - If shipment completes the order, click on the Actions drop down menu and select Mark Received. - If shipment does not complete order, click Copies and mark individual copies as received (as shown above). 
- Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL). - Remember to flag title on invoice and change no. of copies on worksheet. Also highlight locations to receive second shipment on worksheet or cross off - locations already received. - + + Check Notes to see how many items were previously received. + If shipment completes the order, click on the Actions drop down menu and select Mark Received. + If shipment does not complete order, click Copies and mark individual copies as received (as shown above). + Add a note stating how many copies were received (example: receiving 2 copies on 1/20/11 JL). + Remember to flag title on invoice and change no. of copies on worksheet. Also highlight locations to receive second shipment on worksheet or cross off + locations already received. + Unreceiving Non-print Materials - - To un-receive an order, go to the Purchase Order screen. - To un-receive the complete order, click on Actions drop down menu Select Un-Receive. - To un-receive a partial order, click on Copies link and click Un-Receive for individual copies. - + + To un-receive an order, go to the Purchase Order screen. + To un-receive the complete order, click on the Actions drop down menu and select Un-Receive. + To un-receive a partial order, click on the Copies link and click Un-Receive for individual copies. +
diff --git a/2.0/stafftasks/serials-gpls.xml b/2.0/stafftasks/serials-gpls.xml index 39afe0a291..1688b90ad7 100644 --- a/2.0/stafftasks/serials-gpls.xml +++ b/2.0/stafftasks/serials-gpls.xml @@ -29,143 +29,143 @@ Serial Control View or the Serial Control View. Create a copy template - - + + To create a copy template, click Admin → Local Administration → Copy Template Editor. - - + + Enter a Name for the template. - - + + Select an owning library from the Owning lib drop down menu. This organization owns the copy template. A staff member with permissions at that organization can modify the copy template. The menu is populated from the organizations that you created in Admin → Server Administration → Organizational Units. - - + + Check the box adjacent to Circulate? if you want the item to circulate. - - + + Check the box adjacent to Holdable? if patrons can place holds on the item. - - + + Check the box adjacent to OPAC Visible? if you want patrons to be able to see the item in the OPAC after you receive it. - - + + Select a loan duration rule from the drop down menu. - - + + Select a fine level for the item from the drop down menu. - - + + Select a copy Location from the drop down menu. The menu is populated from the copy locations that you created in Admin → Local Administration → Copy Locations. - - + + Select a circ modifier from the drop down box. The menu is populated from the modifiers that you created in Admin → Server Administration → Circulation Modifiers. - - + + Check the box adjacent to Floating? if the item is part of a floating collection. - - + + Check the box adjacent to Deposit? if patrons must place a deposit on the copy before they can use it. - - + + Check the box adjacent to Reference? if the item is a reference item. - - + + If the item is in mint condition, then check the box adjacent to Mint Condition? - - + + Enter age protection rules in the Age Protect field.
Age protection allows you to control the extent to which an item can circulate after it has been received. For example, you may want to protect new copies of a serial so that only patrons who check out the item at your branch can use it. - - + + Enter a message in the Alert Message field. This message will appear every time the item is checked out to a patron. - - + + Enter a code from the MARC fixed fields if you want to control the circulation based on the item type in the Circ as Type field. - - + + Enter a deposit amount if patrons must place a deposit on the copy before they can use it. - - + + Enter the price of the item. - - + + Enter the ID of the copy status in the Status field. A list of copy statuses and their IDs can be found in Admin → Server Administration → Copy Status. - - + + Click Save. - - + + Fine level and loan duration are required fields in the Copy Template Editor. Edit a copy template You can make changes to an existing copy template. Changes that you make to a copy template will apply to any items that you receive after you edited the template. - - + + To edit a copy template, click your cursor in the row that you want to edit. The row will turn blue. - - + + Double-click. The copy template will appear, and you can edit the fields. - - + + After making changes, click Save. - - + + From the copy template interface, you can delete copy templates that have never been used. @@ -181,13 +181,13 @@ Add new subscriptions to a serials record that exists in the catalog. Create a subscription - - + + Click New Subscription. - - + + Select an owning library. The owning library indicates the organizational unit(s) whose staff can use this subscription. This menu is populated with the shortnames that you created for your libraries in the organizational units tree in Admin → @@ -195,39 +195,39 @@ The rule of parental inheritance applies to this list. 
For example, if a system is made the owner of a subscription, then users, with appropriate permissions, at the branches within the system could also use this subscription. - - + + Enter the date that the subscription begins in the start date. Recommended practice is that you select the date from the drop down calendar although you can manually enter a date. Owning library and start date are required fields in the new subscription pop up box. - - + + Enter the date that the subscription ends in the end date. Recommended practice is to select a date from the drop down calendar, but you can manually enter a date, also. - - + + Enter the difference between the nominal publishing date of an issue and the date that you expect to receive your copy in the Expected Date Offset. For example, if an issue is published the first day of each month, but you receive the copy two days prior to the publication date, then enter -2 days into this field. - - + + Click Save. - - + + After you save the subscription, it will appear in a list with a hyperlinked ID number. Use the drop down menu at the top of the screen to view subscriptions at other organizations. - - + + Manage a subscription @@ -245,47 +245,47 @@ the number of copies that should be sent to each branch. Create a distribution - - + + Click the Distributions tab. - - + + Click New Distribution. - - + + Enter a name for the distribution in the Label field. It may be useful to identify the branch to which you are distributing these issues in this field. This field is not publicly visible and only appears when an item is received. There are no limits on the number of characters that can be entered in this field. - - + + Select a holding library from the drop down menu. The holding library is the branch that will receive the copies. - - + + Select a copy template from the Receive Unit Template drop down menu. This menu is populated with the copy templates that you created in Copy Template Editor. 
Label, Holding Library, and Receive Unit Template are required fields in the new distribution pop up box. - - + + Ignore the fields, Unit Label Prefix and Unit Label Suffix. These fields are not functional in Alternate Serial Control View. - - + + Click Save. The distribution will appear in a list in the Distributions tab in the Subscription Details. - - + + Edit a distribution @@ -308,29 +308,29 @@ displays adjacent to the title. You can choose one of two ways to create a stream: New Stream or Create Many Streams. The New Stream button allows you to create one new stream and assign it a routing label. - - Click New Stream - Enter a routing label so that the copy could be read by specific users or departments before the copy is shelved. The routing label appears during receiving and could be added to routing lists; it is not viewable by the public. Routing lists do not print from in 2.0. This field is optional. - Click Save. - + + Click New Stream + Enter a routing label so that the copy could be read by specific users or departments before the copy is shelved. The routing label appears during receiving and could be added to routing lists; it is not viewable by the public. Routing lists do not print in 2.0. This field is optional. + Click Save. + The Create Many Streams button allows you to create multiple streams at once, but it does not allow you to add a routing label when you create the stream. - - + + Click Create Many Streams. - - + + Enter the number of streams that you want to create in the How many? field. - - + + Click Create. - - + + Edit a stream @@ -344,213 +344,213 @@ In 2.0, it is not possible to create a caption and pattern and apply it to multiple subscriptions. However, you can re-use patterns if you copy and paste to and from the pattern code field in the Captions and Patterns tab. Create a Caption and Pattern - - + + Open the Subscription Details. - - + + Click the Captions and Patterns tab. - - + + Click Add Caption and Pattern.
- - + + In the Type drop down box, select the MARC tag to which you would like to add data. - - + + In the Pattern Code drop down box, you can enter a JSON representation of the 85X tag by hand, or you can click the Wizard to enter the information in a user-friendly format. - - + + The Caption and Pattern that you create is Active by default, but you can deactivate a caption and pattern at a later time by unchecking the box. - - + + A subscription may have multiple captions and patterns listed in the subscription details, but only one Caption and Pattern can be active at any time. If you want to add multiple patterns, e.g. for Basic and Supplement, click Add Caption and Pattern. Use the Pattern Code Wizard The Pattern Code Wizard enables you to create the caption of the item and add its publication information. The Wizard is composed of five pages of questions. You can use the Next and Previous navigation buttons in the top corners to flip between pages. - - + + To add a pattern code, click Wizard. - - + + Page 1: Enumerations - - + + To add an enumeration, check the box adjacent to Use enumerations?. The enumerations conform to $a-$h of the 853, 854, and 855 MARC tags. - - + + A field for the First level will appear. Enter the enumeration for the first level. A common first level enumeration is volume, or “v.” - - + + Click Add Enumeration. - - + + A field for the Second level will appear. Enter the enumeration for the second level. A common second level enumeration is number, or “no.” - - + + Enter the number of bibliographic units per next higher level. This conforms to $u in the 853, 854, and 855 MARC tags. - - + + Choose the enumeration scheme from the drop down menu. This conforms to $v in the 853, 854, and 855 MARC tags. You can add up to six levels of enumeration. - - + + Add Alternate Enumeration if desired. - - + + When you have completed the enumerations, click Next. - - -
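The enumeration levels chosen in the wizard ultimately land in the pattern code field, which (as noted above) holds a JSON representation of the 85X tag. A rough sketch of what such a code might look like and how it maps subfields to values — the specific subfield values here are illustrative assumptions, not copied from a live Evergreen record:

```python
import json

# Hypothetical pattern code: a flat JSON array of alternating MARC 853
# subfield codes and values. Mirroring the wizard's Page 1 questions:
#   a = first-level enumeration caption ("v.")
#   b = second-level enumeration caption ("no.")
#   u = bibliographic units per next higher level (12 issues per volume)
#   v = numbering scheme ("r" = restarts when the higher level changes)
pattern_code = '["a", "v.", "b", "no.", "u", "12", "v", "r"]'

subfields = json.loads(pattern_code)
# Pair each subfield code with its value for readability
pairs = dict(zip(subfields[::2], subfields[1::2]))
print(pairs)  # {'a': 'v.', 'b': 'no.', 'u': '12', 'v': 'r'}
```

Because the pattern code is plain JSON, this is also what you would copy and paste between the Pattern Code fields of different subscriptions to re-use a pattern, per the note above.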
- + + + + Page 2: Calendar - - + + To use months, seasons, or dates in your caption, check the box adjacent to Use calendar changes? - - + + Identify the point in the year at which the highest level enumeration caption changes. - - + + In the Type drop down menu, select the points during the year at which you want the calendar to restart. - - + + In the Point drop down menu, select the specific time at which you would like to change the calendar. - - + + To add another calendar change, click Add Calendar Change. There are no limits on the number of calendar changes that you can add. - - + + When you have finished the calendar changes, click Next. - - - - + + + + Page 3: Chronology - - + + To add chronological units to the captions, check the box adjacent to Use chronology captions? - - + + Choose a chronology for the first level. If you want to display the terms, “year” and “month” next to the chronology caption in the catalog, then check the box beneath Display in holding field? - - + + To include additional levels of chronology, click Add Chronology Caption. Each level that you add must be smaller than the previous level. - - + + After you have completed the chronology caption, click Next. - - - - + + + + Page 4: Compress and Expand Captions - - + + Select the appropriate option for compressing or expanding your captions in the catalog from the compressibility and expandability drop down menu. The entries in the drop down menu correspond to the indicator codes and the subfield $w in the 853 tag. Compressibility and expandability correspond to the first indicator in the 853 tag. - - + + Choose the appropriate caption evaluation from the drop down menu. - - + + Choose the frequency of your publication from the drop down menu. For irregular frequencies, you may wish to select use number of issues per year, and enter the total number of issues that you receive each year. However, in the 2.0 release, recommended practice is that you use only regular frequencies.
Planned development will create an additional step to aid in the creation of irregular frequencies. - - + + Click Next. - - - - + + + + Page 5: Finish Captions and Patterns - - + + To complete the wizard, click Create Pattern Code. - - + + Return to Subscription Details. - - + + Confirm that the box adjacent to Active is checked. Click Save Changes. The row is now highlighted gray instead of orange. - - - - + + + + @@ -559,115 +559,115 @@ Confirm that the box adjacent to Active is checked. Click Save Changes. The ro The Issuances tab enables you to manually create an issue in the ILS. The ILS will use the initial issue that you manually create to predict future issues. Create an issuance - - + + Click the Issuances tab in the Subscription Details. - - + + Click New Issuance. - - + + The Subscription, Creator, and Editor fields contain subscription and user IDs, respectively. These fields are disabled because Evergreen automatically fills in these fields. - - + + Enter a name for this issuance in the Label field. There are no limits on the number of characters that can be entered in this field. You may want to enter the month and year of the publication in hand. - - + + Enter the Date Published of the issuance that you are editing. Recommended practice is that you select the date from the drop down calendar although you can manually enter a date. If you are creating one manual issue before automatically predicting more issues, then this date should be the date of the most current issue before the prediction starts. - - + + Select a Caption/Pattern from the drop down menu. The numbers in the drop down menu correspond to the IDs of the caption/patterns that you created. - - + + The Holding Type appears by default and corresponds to the Type that you selected when you created the Caption/Pattern. - - + + In the holding code area of the New Issuance dialog, click Wizard. The Wizard enables you to add holdings information. 
- - + + Enter the volume of the item in hand in the v. field. - - + + Enter the number of the item in hand in the no. field. - - + + Enter the year of publication in the Year field. - - + + Enter the month of publication in the Month field if applicable. You must enter the calendar number of the month rather than the name of the month. For example, enter 12 if the item in hand was published in December. - - + + Enter the day of publication in the day field if applicable. - - + + Click Compile to generate the holdings code. - - + + Click Save. The newly generated issuance will appear in a list in the Issuances tab of the Subscription Details. - - + + Generate item predictions After you manually create the first issue, Evergreen will predict future issuances. Use the Generate Predictions functionality to predict future issues. - - + + Click Subscription Details → Issuances → Generate Predictions. - - + + Choose the length of time for which you want to predict issues. If you select the radio button to predict until end of subscription, then Evergreen will predict issues until the end date that you created when you created the subscription. See simplesect . 1 for more information. If you do not have an end date, select the radio button to predict a certain number of issuances, and enter a number in the field. - - + + Click Generate. - - + + Evergreen will predict a run of issuances and copies. The prediction will appear in a list. - - + + You can delete the first, manual issuance by clicking the check box adjacent to the issuance and clicking Delete Selected. - - + + Receiving @@ -676,123 +676,123 @@ You can delete the first, manual issuance by clicking the check box adjacent to Simple Batch Receiving Follow these steps to receive items in batch in a simple interface. - - + + The Batch Receive interface displays issues that have not yet been received. The earliest expected issue appears at the top of the list.
- - + + In the right lower corner, you see a check box to Create Units for Received Items. If you do not check this box, then you will receive items in simple mode. - - + + Click Next. - - + + In simple mode, the distributions that you created are displayed. They are marked received by default. If you hover over the branch name, you can view the name of the distribution and its stream. - - + + You can receive and add a note to each item individually, or you can perform these actions on all of the distributions and streams at once. To do so, look above the line, and enter the note that you want to apply to all copies and confirm that the box to Receive? is checked. - - + + Click Apply. The note should appear in the note field in each distribution. In 2.0, the note field is only displayed in the current screen. - - + + Then click Receive Selected Items. - - + + The received items are cleared from the screen. - - + + Advanced Batch Receiving Follow these steps to receive items in batch in the advanced interface. - - + + The Batch Receive interface displays issues that have not yet been received. The earliest expected issue appears at the top of the list. - - + + If you want to barcode each copy, display it in the catalog, and make it holdable, then check the box adjacent to Create Units for Received Items in the lower right side of the screen. - - + + This will allow you to utilize the copy templates and input additional information about the copy: - - + + Barcode – You can scan printed barcodes into the barcode field for each copy, or you can allow the system to auto-generate barcodes. To auto-generate barcodes, check the box adjacent to Auto-generate?, and enter the first barcode into the barcode field in the first row of the table. Then press the Tab key. The remaining barcode fields will automatically populate with the next barcodes in sequence, including check digits.
- - + + Circ Modifiers - The circ modifiers drop down menu is populated with the circulation modifiers that you created in Admin → Server Administration → Circulation Modifiers. If you entered a circ modifier in the copy template that you created for this subscription, then it will appear by default in the distributions. - - + + Call Number – Enter a call number. Any item with a barcode must also have a call number. - - + + Note – Add a note. There are no limits on the number of characters that can be entered in this field. The note only displays in this screen. - - + + Copy Location – The copy location drop down menu is populated with the copy locations that you created in Admin → Local Administration → Copy Location Editor. If you entered a copy location in the copy template that you created for this subscription, then it will appear by default in the distributions. - - + + Price – If you entered a price in the copy template that you created for this subscription, then it will appear by default in the distributions. You can also manually enter a price if you did not include one in the copy template. - - + + Receive? – The boxes in the Receive? column are checked by default. Uncheck the box if you do not want to receive the item. Evergreen will retain the unreceived copies and will allow you to receive them at a later time. - - - - + + + + When you are ready to receive the items, click Receive Selected Items. - - + + The items that have been received are cleared from the Batch Receive interface. The remaining disabled item is an unreceived item. - - + + If the items that you received have a barcode, a copy template that was set to OPAC Visible, and are assigned a shelving location that is OPAC Visible, then you can view the received items in the catalog. Notice that the Holdings Summary has been updated to reflect the most recent addition to the holdings.
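The auto-generated barcode sequence described above increments the barcode body and appends a fresh check digit for each copy. Many library barcodes use a trailing mod-10 (Luhn-style) check digit; whether your barcode scheme does is an assumption you should verify locally. A hypothetical sketch of that sequencing logic:

```python
# Sketch only: sequential barcode generation with a Luhn-style mod-10
# check digit. This illustrates the general technique, not Evergreen's
# actual implementation.
def check_digit(body: str) -> str:
    # Double every other digit starting from the rightmost, fold two-digit
    # results back to one digit, and pad the sum up to a multiple of 10.
    total = 0
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def next_barcodes(first: str, count: int) -> list[str]:
    # Strip the existing check digit, then emit `count` sequential barcodes,
    # each with its own freshly computed check digit.
    body = first[:-1]
    start = int(body)
    return [
        str(start + n).zfill(len(body)) + check_digit(str(start + n).zfill(len(body)))
        for n in range(count)
    ]

print(next_barcodes("0000000000018", 3))
# ['0000000000018', '0000000000026', '0000000000034']
```

Scanning the first printed barcode and tabbing through the remaining rows, as described above, would fill in a sequence like this one.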
- - + + @@ -806,48 +806,48 @@ If the items that you received have a barcode, a copy template that was set to O The Subscriptions tab enables you to view and manage subscriptions. Create a subscription - - + + Click the Subscriptions tab. - - + + Select the branch that will own the subscription. - - + + Right-click or click Actions for Selected Row, and click Add Subscription. - - + + Enter the date that the subscription begins in the start date, and click Apply. You must enter the date in YYYY-MM-DD format. - - + + Enter the date that the subscription ends in the end date. This field is optional. - - + + Enter the difference between the nominal publishing date of an issue and the date that you expect to receive your copy in the Expected Date Offset. For example, if an issue is published the first day of each month, but you receive the copy two days prior to the publication date, then enter -2 days into this field. - - + + When finished, click Create Subscription(s) in the bottom right corner of the screen. - - + + A confirmation message appears. Click OK. - - + + You can add notes to the subscription by clicking Subscription Notes. These notes are currently viewable only in the staff client by clicking on the Subscription Notes button. @@ -861,68 +861,68 @@ notes are currently viewable only in the staff client by clicking on the Subscri Distributions indicate the branches that should receive copies of a serial. Distributions work together with streams to indicate the number of copies that should be sent to each branch. Create a distribution - - + + Click the distributions link beneath the subscription. Right click or click Actions for Selected Rows, and click Add distribution. - - + + Apply a new label to the distribution. It may be useful to identify the branch to which you are distributing these issues in this field. This field is not publicly visible and only appears when an item is received. 
There are no limits on the number of characters that can be entered in this field. - - + + Apply a prefix to the spine label if desired. This information will display in Serial Control View when the items are received, but it does not print on the spine label in 2.0. - - + + Apply a suffix to the spine label if desired. This information will display in Serial Control View when the items are received, but it does not print on the spine label in 2.0. - - + + The holding library is filled in by default and is the library to which you attached the subscription. - - + + The Legacy Record Entry contains the MFHD records that are attached to the bib record if the owning library is identical to the distribution’s holding library. A distribution can thus be an extension of an MFHD record. Select the MFHD record from the drop down menu. - - + + The Receive Call Number field is empty until you receive the first item. When you receive the first item, you are prompted to enter a call number. That call number will populate this drop down menu. - - + + The Bind Call Number field is empty until you bind the first item. When you receive the first item, you are prompted to enter a call number. That call number will populate this drop down menu. - - + + Receive Unit Template – The template that should be applied to copies when they are received. Select a template from the drop down menu. - - + + Bind Unit Template – The template that should be applied to copies when they are bound. Select a template from the drop down menu. - - + + When finished, click Create Distribution(s) in the bottom right corner of the screen. - - + + A confirmation message appears. Click OK. - - + + You can add notes to the distribution by clicking Distribution Notes. These notes are currently viewable only in the staff client by clicking on the Distribution Notes button.
@@ -937,44 +937,44 @@ notes are currently viewable only in the staff client by clicking on the Distrib In our example, the Apex Branch should receive copies, so we created a distribution to that branch. The Apex Branch should receive two copies, so we will create two streams to that branch. Create a stream - - + + Click the Distributions tab. - - + + Check the boxes to Show Dist. and Show Groups to view distributions and streams. - - + + Select the Streams link beneath the distribution that you created for that branch. Right click or click Actions for Selected Row → Add Stream. - - + + Click the stream that is created. - - + + Enter a routing label so that the copy could be read by specific users or departments before the copy is shelved. The routing label appears during receiving and could be added to routing lists; it is not viewable by the public. Routing lists do not print in 2.0. This field is optional. - - + + Click Modify Stream(s) in the bottom right corner of the screen. - - + + The data in the Basic Summary, Supplement Summary, and Index Summary are automatically generated by the ILS when you create a caption and pattern and a holdings statement. You can create additional textual holdings manually by editing the Textual Holdings field. Edit a stream - - + + To edit a stream, select the stream in the tree on the left side of the screen. You can edit the following category: @@ -985,30 +985,30 @@ Routing Label – The label given to an issue to direct it to the people or depa
- - + + The Basic Summary displays the distribution ID, the Textual Holdings, and the Generated Holdings. The OPAC uses data in legacy records, the generated coverage field, and the textual holdings fields to display holdings information. - - + + The distribution ID and the Generated Coverage are created by Evergreen. - - + + Textual Holdings – Enter any additional holdings information in this field, and it will display in the OPAC as Additional Volume Information. - - + + Then click Modify Basic Summary to save your changes. Your changes will appear in the OPAC view. - - - - + + + + @@ -1017,53 +1017,53 @@ Then click Modify Basic Summary to save your changes. Your changes will appear In 2.0, it is not possible to create a caption and pattern and apply it to multiple subscriptions. However, you can re-use patterns if you copy and paste to and from the pattern code field in the Captions and Patterns tab. Create a caption and pattern - - + + Click the Subscriptions tab. - - + + Beneath the subscription, click Captions and Patterns, and right-click or click Actions for Selected Row → Add Caption/Pattern. - - + + The ID and Creation Date will fill in automatically. - - + + Click the Unset entry beneath Type. A drop down menu will appear. Choose the type of caption and pattern that you want to create, and click Apply. - - + + Click the Unset entry beneath Active. A drop down menu will appear. Choose Yes if you want to activate the caption and pattern. Click Apply. - - + + Click the Unset entry beneath the Pattern Code (temporary) field if you want to create the pattern code by hand. If you want to create it automatically, click Pattern Code Wizard in the lower right corner. - - + + Follow the steps for using the pattern code wizard. - - + + Click Apply. - - + + Click Create Caption and Pattern(s). - - + + Edit a caption and pattern @@ -1079,53 +1079,53 @@ Click Create Caption and Pattern(s). The Issuances tab enables you to manually create an issue in the ILS. 
The ILS will use the initial issue that you manually create to predict future issues. Create an issuance - - + + Click the Subscriptions tab. - - + + Beneath the subscription, click Issuances, and right-click or click Actions for Selected Row → Add Issuance. - - + + The fields in the first column will fill in automatically after you have created the issuance. - - + + Click the Unset link in the Holding Code field, and manually enter a holding code. Click Apply. - - + + Click the Unset link in the Caption/Pattern field. Select a caption/pattern from the drop down menu. Click Apply. - - + + Enter the Date Published of the issuance that you are editing. Enter the date in YYYY-MM-DD format. If you are creating one manual issue before automatically predicting more issues, then this date should be the date that you want to enter before the prediction starts. Click Apply. - - + + Click in the Issuance Label field to name the issuance. There are no limits on the number of characters that can be entered in this field. You may want to enter the month and year of the publication in hand. Click Apply. - - + + Click Create Issuance in the lower right corner to save your changes. - - + + A confirmation message appears. Click OK. - - + + Edit an issuance @@ -1134,56 +1134,56 @@ A confirmation message appears. Click OK. Generate item predictions - - + + Open the Subscriptions tab. - - + + Right-click or click Actions for Selected Row → Make predictions. - - + + A pop up box will ask you how many items you want to predict. Enter the number, and click OK. - - + + A confirmation message will appear. Click OK. - - + + Click the Issuances link to view the predicted issues. - - + + Receiving Receive items in the Items tab. From this interface, you can receive items, edit item attributes, and delete items. Receive Items - - + + To receive items, click the Receive radio button. In the top half of the screen, the items that have yet to be received are displayed.
In the bottom half of the screen, recently received items are displayed. - - + + Select the branch that will receive the items from the drop down box. - - + + Select the issue that you want to receive. - - + + Select the current working unit. Click Set Current Unit, located in the lower right corner of the screen. A drop down menu will appear. @@ -1204,45 +1204,45 @@ If you want the item to be received or bound into an existing item, select Recen - - + + Click Receive/Move Selected. - - + + Enter a barcode and call number if prompted to do so. - - + + A message confirming receipt of the item appears. Click OK. - - + + The screen refreshes. In the top half of the screen, the item displays a received date. In the bottom half of the screen, the item that you have just received is now at the top of the list of the received items. - - + + After receiving items, you can view the updated holdings in the OPAC. In this example, the legacy MFHD record and the items recently received in the serial control view display together in the MFHD statement. Edit Item Attributes In this pop up box, you can view the Item ID, Status, Distribution, and Shelving ID. These are generated by Evergreen. However, you may need to edit an item’s Date Expected or Received. - - + + To edit item attributes, select the item(s) that you want to edit, and click Actions for Selected Rows → Edit Item Attributes. - - + + Edit the attributes that appear. When you are finished, click Modify Item(s). - - + + Delete Items @@ -1252,33 +1252,33 @@ Edit the attributes that appear. When you are finished, click Modify Item(s). Bind Items The binding mode applies the binding template, which is defined in the distribution (see simplesect 2 for more information), to units that should be bound. - - + + Select the branch that will receive the items from the drop down box. - - + + To bind items, click the Bind radio button. Items that have been received will appear in the top half of the screen.
- - + + Select the current working unit. - - + + Select the issues that you want to bind, and click Receive/Move Selected. - - + + In the bottom half of the screen, you can view the items that you have bound together. - - + + If you want to view all items, including those that have not been received, in the top half of the screen, click the check box adjacent to Show All. @@ -1287,53 +1287,53 @@ Edit the attributes that appear. When you are finished, click Modify Item(s).
MFHD Record You can manually create MFHD statements. - - + + Create an MFHD record - - + + Open a serial record, and in the bottom right corner above the copy information, click Add MFHD Record. You can also add the MFHD statement by clicking Actions for this Record → MFHD Holdings → Add MFHD Record. - - + + A message will confirm that you have created the MFHD Record. Click OK. - - + + Click Reload in the top left corner of the record. - - + + The Holdings Summary will appear. Click Edit Holdings in the right corner. - - + + Click Edit Record. - - + + The MFHD window will pop up. Enter holdings information. Click Save MFHD. - - + + Close the MFHD window. - - + + Click Reload in the top left corner of the record. The Holdings Summary will reflect the changes to the MFHD statement. - - + +
The following permissions enable you to control serials’ functions. Although you can assign each permission to users in the Admin module, it is recommended that either all serials permissions be assigned to an individual, or that they be assigned to individuals in the following groups. -- 2.11.0