From: Galen Charlton Date: Fri, 4 Sep 2020 21:22:17 +0000 (-0400) Subject: LP#1848524: ... and swap in the Antora docs X-Git-Url: https://old-git.evergreen-ils.org/?a=commitdiff_plain;h=09e82b974346f2137411b3e4406c1b0eb0ee540a;p=evergreen%2Fpines.git LP#1848524: ... and swap in the Antora docs Signed-off-by: Galen Charlton --- diff --git a/docs-antora/.gitignore b/docs-antora/.gitignore deleted file mode 100644 index 504afef81f..0000000000 --- a/docs-antora/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -node_modules/ -package-lock.json diff --git a/docs-antora/README.adoc b/docs-antora/README.adoc deleted file mode 100644 index eac6cfb4c3..0000000000 --- a/docs-antora/README.adoc +++ /dev/null @@ -1,188 +0,0 @@ -= Antora Docs build procedure - -:idseparator: - - -== Using generate_docs.pl - -This tool performs all of the steps in "Doing it all manually" automatically. It requires some command-line arguments and some prerequisites. - -=== Installing Node - -Be sure to have Node installed. - -See https://github.com/nvm-sh/nvm#installation-and-update[Installing Node] - -[source,bash] ---- -wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash ---- - -=== Antora pre-reqs - -Once Node is installed, follow the Antora prerequisites. - -Summarized from https://docs.antora.org/antora/2.3/install/linux-requirements/[Antora pre-reqs] - -[source,bash] ---- -$ nvm install --lts ---- - -=== Install Ansible - -One piece of the puzzle currently uses Ansible, so install that as well. - -[source,bash] ---- -$ sudo apt-get install ansible ---- - -=== Run generate_docs.pl - -This tool does the rest of the work.
You will be required to supply these things: - -[cols=2*] -|=== - -|base-url -|[eg: http://examplesite.org] - -|tmp-space -|[Writable path where we stage the antora UI repo and the antora HTML, eg: ../../tmp] - -|html-output -|[Path where you want the generated HTML files to go, eg: /var/www/html] - -|antora-ui-repo -|[git link to a repo, could be the community repo: https://gitlab.com/antora/antora-ui-default.git] - -|antora-version -|[target version of antora, eg: 2.1] - -|=== - -Example: - -[source,bash] ---- -$ cd Evergreen/docs-antora -$ ./generate_docs.pl \ ---base-url http://examplesite.org/prod \ ---tmp-space ../../tmp \ ---html-output /var/www/html/prod \ ---antora-ui-repo https://git.evergreen-ils.org/eg-antora.git \ ---antora-version 2.3 - ---- - -NOTE: This tool will create two folders within the temp space folder path: "staging" and "antora-ui". These folders will be erased and re-created with each execution. - - - -== Doing it all manually - -[source,bash] ---- -$ git clone git://git.evergreen-ils.org/working/Evergreen.git -$ git clone git://git.evergreen-ils.org/eg-antora.git -$ cd Evergreen -$ git checkout collab/blake/LP1848524_antora_ize_docs ---- - -First we have to install antora: -Summarized from -https://docs.antora.org/antora/2.1/install/install-antora/ - -[source,bash] ---- -$ cd docs-antora -# (we want to install into this directory as opposed to globally) -$ npm i @antora/cli@2.1 @antora/site-generator-default@2.1 ---- - - -Now, install the UI build prerequisites, -adapted from: -https://docs.antora.org/antora-ui-default/set-up-project/ - -[source,bash] ---- -$ cd ../../eg-antora -$ npm install -$ npx gulp bundle ---- - -At this point you should find a file at: - -NOTE: build/ui-bundle.zip - -Now you can build the website.
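Before building, it can help to confirm that the UI bundle produced by the previous step actually exists. The following is a hypothetical helper sketch, not part of the Evergreen repo; the `check_ui_bundle` name and the default bundle path are assumptions based on the `build/ui-bundle.zip` location noted above:

```shell
#!/usr/bin/env bash
# Hypothetical pre-build sanity check (not part of the Evergreen repo):
# confirm the UI bundle produced by the gulp bundle step exists before
# running `antora site.yml`, since a missing bundle is an easy failure to hit.
check_ui_bundle() {
    local bundle="${1:-build/ui-bundle.zip}"   # default path from the step above
    if [ -f "$bundle" ]; then
        echo "ui bundle found: $bundle"
    else
        echo "missing $bundle - run the bundle step in the eg-antora checkout" >&2
        return 1
    fi
}
```

For example, `check_ui_bundle ../../eg-antora/build/ui-bundle.zip` could be run just before invoking antora.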
But you may want to edit the file: - -NOTE: docs-antora/site.yml - -because the output folder for the website defaults to - -NOTE: /var/www/html/prod - -and the default web URL is: - -NOTE: http://localhost/prod - -Build: - -[source,bash] ---- -$ cd ../Evergreen/docs-antora -$ antora site.yml ---- - -If all went well, you will have the site built in the output folder that was configured in site.yml! - -Interesting reading related to Antora, AsciiDoc, and Asciidoctor: - -NOTE: https://asciidoctor.org/docs/asciidoc-asciidoctor-diffs/ - -NOTE: https://blog.anoff.io/2019-02-15-antora-first-steps/ - -NOTE: https://owncloud.org/news/owncloud-docs-migrating-antora-pt-1-2/ - - -== Search stuff - -First you need to have Ansible installed. - -NOTE: If you want to edit the file manually, you don't need to install Ansible. - -[source,bash] ---- -$ sudo apt-get -y install ansible ---- - -Now, let's run through the antora-lunr procedure: - -NOTE: Adapted from the base install notes in the https://github.com/Mogztter/antora-lunr[git repo] - - -[source,bash] ---- -$ ansible-playbook setup_lunr.yml - ---- - -This should have edited this file: node_modules/@antora/site-generator-default/lib/generate-site.js -as outlined in the git repo notes. - -Now, install the lunr bits (from the docs-antora folder): - -[source,bash] ---- -$ npm i antora-lunr ---- - -Now you can re-generate the site, this time with the search box: - -[source,bash] ---- -$ DOCSEARCH_ENABLED=true DOCSEARCH_ENGINE=lunr antora site.yml ---- - diff --git a/docs-antora/antora.yml b/docs-antora/antora.yml deleted file mode 100644 index d1febd680f..0000000000 --- a/docs-antora/antora.yml +++ /dev/null @@ -1,19 +0,0 @@ -name: docs -title: Evergreen docs -version: 'latest' -nav: -- modules/ROOT/nav.adoc -- modules/installation/nav.adoc -- modules/admin_initial_setup/nav.adoc -- modules/using_staff_client/nav.adoc -- modules/sys_admin/nav.adoc -- modules/local_admin/nav.adoc --
modules/acquisitions/nav.adoc -- modules/cataloging/nav.adoc -- modules/serials/nav.adoc -- modules/circulation/nav.adoc -- modules/reports/nav.adoc -- modules/opac/nav.adoc -- modules/development/nav.adoc -- modules/api/nav.adoc -- modules/appendix/nav.adoc diff --git a/docs-antora/check_docs_meta_title.sh b/docs-antora/check_docs_meta_title.sh deleted file mode 100644 index 1ba0a3f8fd..0000000000 --- a/docs-antora/check_docs_meta_title.sh +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/bash - -# This script will spider a website and gather up the <title> of each page -# the results will land in out.csv -# This is a nice aid to help us find pages that do not have the "right" headings - -wget --spider -r -l inf -w .25 -nc -nd $1 -R bmp,css,gif,ico,jpg,jpeg,js,mp3,mp4,pdf,png,PNG,JPG,swf,txt,xml,xls,zip 2>&1 | tee wglog - -rm out.csv -cat wglog | grep '^--' | awk '{print $3}' | sort | uniq | while read url; do { - -printf "%s* Retrieving title for: %s$url%s " "$bldgreen" "$txtrst$txtbld" "$txtrst" -printf ""${url}","`curl -# ${url} | sed -n -E 's!.*<title>(.*)</title>.*!\1!p'`" , " >> out.csv -printf " " -}; done - diff --git a/docs-antora/generate_docs.pl b/docs-antora/generate_docs.pl deleted file mode 100644 index 0ca827272e..0000000000 --- a/docs-antora/generate_docs.pl +++ /dev/null @@ -1,233 +0,0 @@ -#!/usr/bin/perl -# --------------------------------------------------------------- -# Copyright © 2020 MOBIUS -# Blake Graham-Henderson -# -# This program is free software; you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation; either version 2 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details.
-# --------------------------------------------------------------- - - -use Getopt::Long; -use Cwd; -use File::Path; -use Data::Dumper; - -my $base_url; -my $tmp_space; -my $html_output; -my $antoraui_git; -my $antora_version; -my $help; - - - -GetOptions ( -"base-url=s" => \$base_url, -"tmp-space=s" => \$tmp_space, -"html-output=s" => \$html_output, -"antora-ui-repo=s" => \$antoraui_git, -"antora-version=s" => \$antora_version, -"help" => \$help -); - -sub help -{ - print < '.$file) or $ret=0; - binmode(OUTPUT, ":utf8"); - print OUTPUT "$contents\n"; - close(OUTPUT); - return $ret; -} - -sub replace_yml -{ - my $replacement = shift; - my $yml_path = shift; - my $file = shift; - my @path = split(/\//,$yml_path); - my @lines = @{read_file($file)}; - my $depth = 0; - my $ret = ''; - while(@lines[0]) - { - my $line = shift @lines; - if(@path[0]) - { - my $preceed_space = $depth * 2; - my $exp = '\s{'.$preceed_space.'}'; - $exp = '[^\s#]' if $preceed_space == 0; - # print "testing $exp\n"; - if($line =~ m/^$exp.*/) - { - if($line =~ m/^\s*@path[0].*/) - { - $depth++; - if(!@path[1]) - { - # print "replacing '$line'\n"; - my $t = @path[0]; - $line =~ s/^(.*?$t[^\s]*).*$/\1 $replacement/g; - # print "now: '$line'\n"; - } - shift @path; - } - } - } - $line =~ s/[\n\t]*$//g; - $ret .= "$line\n"; - } - - return $ret; -} - -sub exec_system_cmd -{ - my $cmd = shift; - print "executing $cmd\n"; - system($cmd) == 0 - or die "system @args failed: $?"; -} - -sub read_file -{ - my $file = shift; - my $trys=0; - my $failed=0; - my @lines; - #print "Attempting open\n"; - if(-e $file) - { - my $worked = open (inputfile, '< '. $file); - if(!$worked) - { - print "******************Failed to read file*************\n"; - } - binmode(inputfile, ":utf8"); - while (!(open (inputfile, '< '. $file)) && $trys<100) - { - print "Trying again attempt $trys\n"; - $trys++; - sleep(1); - } - if($trys<100) - { - #print "Finally worked... 
now reading\n"; - @lines = <inputfile>; - close(inputfile); - } - else - { - print "Attempted $trys times. COULD NOT READ FILE: $file\n"; - } - close(inputfile); - } - else - { - print "File does not exist: $file\n"; - } - return \@lines; -} - -exit; \ No newline at end of file diff --git a/docs-antora/modules/ROOT/_attributes.adoc b/docs-antora/modules/ROOT/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/ROOT/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/ROOT/nav.adoc b/docs-antora/modules/ROOT/nav.adoc deleted file mode 100644 index 48447d9e3b..0000000000 --- a/docs-antora/modules/ROOT/nav.adoc +++ /dev/null @@ -1,2 +0,0 @@ -* xref:shared:about_this_documentation.adoc[Introduction] -** xref:shared:about_evergreen.adoc[About Evergreen] diff --git a/docs-antora/modules/ROOT/pages/_attributes.adoc b/docs-antora/modules/ROOT/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/ROOT/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/ROOT/pages/index.adoc b/docs-antora/modules/ROOT/pages/index.adoc deleted file mode 100644 index 6105a13baf..0000000000 --- a/docs-antora/modules/ROOT/pages/index.adoc +++ /dev/null @@ -1,16 +0,0 @@ -= Evergreen Documentation -ifndef::env-site,env-github[] -include::_attributes.adoc[] -endif::[] -// Settings -:idprefix: -:idseparator: - - - -== Topic Manuals == - -Browse all the documentation topics using the left sidebar.
Or, try one of -these smaller topic manuals: - -xref:acq:ROOT:index.adoc[Acquisitions Manual] - diff --git a/docs-antora/modules/acquisitions/_attributes.adoc b/docs-antora/modules/acquisitions/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/acquisitions/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/acquisitions/assets/images/media/2_10_Lineitem_Paid.png b/docs-antora/modules/acquisitions/assets/images/media/2_10_Lineitem_Paid.png deleted file mode 100644 index b5e9303017..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/2_10_Lineitem_Paid.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled2.jpg b/docs-antora/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled2.jpg deleted file mode 100644 index f363c00988..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled2.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled4.jpg b/docs-antora/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled4.jpg deleted file mode 100644 index 3b5d647b01..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled4.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Electronic_invoicing1.jpg b/docs-antora/modules/acquisitions/assets/images/media/Electronic_invoicing1.jpg deleted file mode 100644 index 91923bd225..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Electronic_invoicing1.jpg and /dev/null differ diff --git 
a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice1.jpg b/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice1.jpg deleted file mode 100644 index 120894cf1b..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice1.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice2.jpg b/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice2.jpg deleted file mode 100644 index 266610b8c2..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice2.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice4.jpg b/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice4.jpg deleted file mode 100644 index 60f7eb1043..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice4.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice5.jpg b/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice5.jpg deleted file mode 100644 index bd9a45b7e5..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice5.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice7.jpg b/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice7.jpg deleted file mode 100644 index 77785d506b..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice7.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Return_to_line_item1.jpg 
b/docs-antora/modules/acquisitions/assets/images/media/Return_to_line_item1.jpg deleted file mode 100644 index ff0209be72..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Return_to_line_item1.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice1.jpg b/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice1.jpg deleted file mode 100644 index 3fe5aa3b93..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice1.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice2.jpg b/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice2.jpg deleted file mode 100644 index e4832f6817..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice2.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice3.jpg b/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice3.jpg deleted file mode 100644 index 1ab71c6e59..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice3.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice5.jpg b/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice5.jpg deleted file mode 100644 index 181c27bfce..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice5.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions1.jpg 
b/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions1.jpg deleted file mode 100644 index 5081a23337..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions1.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions2.jpg b/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions2.jpg deleted file mode 100644 index afc195b847..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions2.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions3.jpg b/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions3.jpg deleted file mode 100644 index 1190fb1dd5..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions3.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions4.jpg b/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions4.jpg deleted file mode 100644 index 1b8d00fd3d..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions4.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions5.jpg b/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions5.jpg deleted file mode 100644 index fdeb99c15c..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions5.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions6.jpg 
b/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions6.jpg deleted file mode 100644 index 90dde3c635..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions6.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/Zero_Copies1.jpg b/docs-antora/modules/acquisitions/assets/images/media/Zero_Copies1.jpg deleted file mode 100644 index b95610d131..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/Zero_Copies1.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_brief_record-2.png b/docs-antora/modules/acquisitions/assets/images/media/acq_brief_record-2.png deleted file mode 100644 index 264213ce31..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_brief_record-2.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_brief_record.png b/docs-antora/modules/acquisitions/assets/images/media/acq_brief_record.png deleted file mode 100644 index 25fdefe7ba..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_brief_record.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_blanket.png b/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_blanket.png deleted file mode 100644 index 227727ee2b..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_blanket.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_link.png b/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_link.png deleted file mode 100644 index 6bc7c1b3dd..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_link.png and /dev/null differ diff --git 
a/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_view-2.png b/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_view-2.png deleted file mode 100644 index abac114908..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_view-2.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_view.png b/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_view.png deleted file mode 100644 index 616380cee3..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_invoice_view.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_marc_search-2.png b/docs-antora/modules/acquisitions/assets/images/media/acq_marc_search-2.png deleted file mode 100644 index f991a6d423..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_marc_search-2.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_marc_search.png b/docs-antora/modules/acquisitions/assets/images/media/acq_marc_search.png deleted file mode 100644 index 391ae435a2..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_marc_search.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_selection_clone.png b/docs-antora/modules/acquisitions/assets/images/media/acq_selection_clone.png deleted file mode 100644 index d4183b5cb1..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_selection_clone.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_selection_create.png b/docs-antora/modules/acquisitions/assets/images/media/acq_selection_create.png deleted file mode 100644 index f248d0ae9e..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_selection_create.png and /dev/null differ diff --git 
a/docs-antora/modules/acquisitions/assets/images/media/acq_selection_mark_ready.png b/docs-antora/modules/acquisitions/assets/images/media/acq_selection_mark_ready.png deleted file mode 100644 index 3eb558659e..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_selection_mark_ready.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_selection_merge.png b/docs-antora/modules/acquisitions/assets/images/media/acq_selection_merge.png deleted file mode 100644 index e1f740c246..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_selection_merge.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_upload_library_settings.png b/docs-antora/modules/acquisitions/assets/images/media/acq_upload_library_settings.png deleted file mode 100644 index f33d348843..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_upload_library_settings.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/acq_workflow.jpg b/docs-antora/modules/acquisitions/assets/images/media/acq_workflow.jpg deleted file mode 100644 index 1143cfa017..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/acq_workflow.jpg and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/display_copy_count_1.JPG b/docs-antora/modules/acquisitions/assets/images/media/display_copy_count_1.JPG deleted file mode 100644 index ab4ac66ddf..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/display_copy_count_1.JPG and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/display_copy_count_2.JPG b/docs-antora/modules/acquisitions/assets/images/media/display_copy_count_2.JPG deleted file mode 100644 index 368f36f09c..0000000000 Binary files 
a/docs-antora/modules/acquisitions/assets/images/media/display_copy_count_2.JPG and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/load_marc_order_records.png b/docs-antora/modules/acquisitions/assets/images/media/load_marc_order_records.png deleted file mode 100644 index 599777658d..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/load_marc_order_records.png and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/patronrequests_requestform.PNG b/docs-antora/modules/acquisitions/assets/images/media/patronrequests_requestform.PNG deleted file mode 100644 index fbf7d03cdb..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/patronrequests_requestform.PNG and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/patronrequests_requestgrid.PNG b/docs-antora/modules/acquisitions/assets/images/media/patronrequests_requestgrid.PNG deleted file mode 100644 index 5a7afdbafb..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/patronrequests_requestgrid.PNG and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_1.JPG b/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_1.JPG deleted file mode 100644 index 93b34b4cfe..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_1.JPG and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_2.JPG b/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_2.JPG deleted file mode 100644 index e3913d6c61..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_2.JPG and /dev/null differ diff --git a/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_3.JPG 
b/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_3.JPG deleted file mode 100644 index f0b046fd36..0000000000 Binary files a/docs-antora/modules/acquisitions/assets/images/media/po_name_detection_3.JPG and /dev/null differ diff --git a/docs-antora/modules/acquisitions/nav.adoc b/docs-antora/modules/acquisitions/nav.adoc deleted file mode 100644 index 54d2a5cda4..0000000000 --- a/docs-antora/modules/acquisitions/nav.adoc +++ /dev/null @@ -1,8 +0,0 @@ -* xref:acquisitions:introduction.adoc[Acquisitions] -** xref:acquisitions:selection_lists_po.adoc[Selection Lists and Purchase Orders] -** xref:acquisitions:vandelay_acquisitions_integration.adoc[Load MARC Order Records] -** xref:acquisitions:invoices.adoc[Invoices] -** xref:acquisitions:purchase_requests_management.adoc[Managing patron purchase requests] -** xref:acquisitions:purchase_requests_patron_view.adoc[Placing purchase requests from a patron record] -** xref:acquisitions:blanket.adoc["Blanket" Orders] - diff --git a/docs-antora/modules/acquisitions/pages/_attributes.adoc b/docs-antora/modules/acquisitions/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/acquisitions/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/acquisitions/pages/blanket.adoc b/docs-antora/modules/acquisitions/pages/blanket.adoc deleted file mode 100644 index bfc6db9c60..0000000000 --- a/docs-antora/modules/acquisitions/pages/blanket.adoc +++ /dev/null @@ -1,39 +0,0 @@ -= "Blanket" Orders = -:toc: - -"Blanket" orders allow staff to invoice an encumbered amount multiple times, paying off the charge over a period of time. The work flow supported by this development assumes staff does not need to track the individual contents of the order, only the amounts encumbered and invoiced in bulk. - -== Example == - -. 
Staff creates PO with a Direct Charge of "Popular Fiction 2015" and a charge type of "Blanket Order". - -. The amount entered for the charge equals the total amount expected to be charged over the duration of the order. - -. When a shipment of "Popular Fiction" items arrive, staff creates an invoice from the "Popular Fiction 2015" PO page and enters the amount billed/paid for the received shipment under the "Popular Fiction 2015" charge in the invoice. - -. When the final shipment arrives, staff select the _Final invoice for Blanket Order_ option on the invoice screen to mark the PO as _received_ and drop any remaining encumbrances to $0. - - .. Alternatively, if the PO needs to be finalized without creating a final invoice, staff can use the new _Finalize Blanket Order_ option on the PO page. - -== More details about blanket orders == - -* Any direct charge using a _blanket_ item type will create a long-lived charge that can be invoiced multiple times. - -* Such a charge is considered open until its purchase order is "finalized" (received). - -* "Finalizing" a PO changes the PO's state to _received_ (assuming there are no pending lineitems on the PO) and fully dis-encumbers all _blanket_ charges on the PO by setting the fund_debit amount to $0 on the original fund_debit for the charge. - -* Invoicing a _blanket_ charge does the following under the covers: - - .. Create an invoice_item to track the payment - - .. Create a new fund_debit to implement the payment whose amount matches the invoiced amount. - -* Subtract the invoiced amount from the fund_debit linked to the original _blanket_ po_item, thus reducing the amount encumbered on the charge as a whole by the invoiced amount. - -* A PO can have multiple blanket charges. E.g. you could have a blanket order for "Popular Fiction 2015" and a second charge for "Pop Fiction 2015 Taxes" to track / pay taxes over time on a blanket charge. - -* A PO can have a mix of lineitems, non-blanket charges, and blanket charges. 
- -* A _blanket_ Invoice Item Type cannot also be a _prorate_ type, since that combination is nonsensical. Blanket items are encumbered, whereas prorated items are only paid at invoice time and never encumbered. - diff --git a/docs-antora/modules/acquisitions/pages/introduction.adoc b/docs-antora/modules/acquisitions/pages/introduction.adoc deleted file mode 100644 index 54397cfd4a..0000000000 --- a/docs-antora/modules/acquisitions/pages/introduction.adoc +++ /dev/null @@ -1,26 +0,0 @@ -= Acquisitions = -:toc: - -== Initial Configuration == - -Before beginning to use Acquisitions, the following must be configured by an administrator: - -* Cancel/Suspend Reasons (optional) -* Claiming (optional) -* Currency Types (defaults exist) -* Distribution Formulas (optional) -* EDI Accounts (optional) -* Exchange Rates (defaults exist) -* Funds and Fund Sources -* Invoice Types (defaults exist) and Invoice Payment Methods -* Line Item Features (optional) -* Merge Overlay Profiles and Record Match Sets -* Providers - -More details can be found in the Staff Client System Administration manual. - -== Acquisitions Workflow == - -The following diagram shows how the workflow functions in Evergreen. One difference to notice in this process is that when creating a selection list on the vendor site, libraries will be downloading and importing the vendor bibs and item records. - -image::media/acq_workflow.jpg[workflow diagram] diff --git a/docs-antora/modules/acquisitions/pages/invoices.adoc b/docs-antora/modules/acquisitions/pages/invoices.adoc deleted file mode 100644 index 96d222bd53..0000000000 --- a/docs-antora/modules/acquisitions/pages/invoices.adoc +++ /dev/null @@ -1,332 +0,0 @@ -= Invoices = -:toc: - -== Introduction == - -indexterm:[acquisitions,invoices] - -You can create invoices for purchase orders, individual line items, and blanket purchases. You can also link existing invoices to purchase orders. - -You can invoice items before you receive the items if desired.
You can also
-reopen closed invoices, and you can print all invoices.
-
-== Creating invoices and adding line items ==
-You can add specific line items to an invoice from the PO or acquisitions
-search results screen. You can also search for relevant line items from within
-the invoice interface. In addition, you can add all line items from an entire
-purchase order to an invoice, or you can create a blanket invoice for items that are not
-attached to a purchase order.
-
-=== Creating a blanket invoice ===
-
-You can create a blanket invoice for purchases that are not attached to a purchase order.
-
-. Click _Acquisitions_ -> _Create invoice_.
-. Enter the invoice information in the top half of the screen.
-. To add charges for materials not attached to a purchase order, click _Add
-Charge..._ This functionality may also be used to add shipping, tax, and other fees.
-. Select a charge type from the drop-down menu.
-+
-[NOTE]
-New charge types can be added via _Administration_ -> _Acquisitions
-Administration_ -> _Invoice Item Types_.
-+
-. Select a fund from the drop-down menu.
-. Enter a _Title/Description_ of the resource.
-. Enter the amount that you were billed.
-. Enter the amount that you paid.
-. Save the invoice.
-
-image::media/acq_invoice_blanket.png[Blanket invoice]
-
-=== Adding line items from a Purchase Order or search results screen to an invoice ===
-
-You can create an invoice or add line items to an invoice directly from a
-Purchase Order or an acquisitions search results screen.
-
-. Place a checkmark in the box for selected line items from the Purchase Order or acquisitions search results page.
-. If you are creating a new invoice, click _Actions_ -> _Create Invoice From
-Selected Line Items_. Enter the invoice information in the top half of the
-screen.
-. If you are adding the line items to an existing invoice, click _Actions_ ->
-_Link Selected Line Items to Invoice_. Enter the Invoice # and Provider and
-then click the _Link_ button.
-. 
Evergreen automatically enters the number of items that was ordered in -the # Invoiced and # Paid fields. Adjust these quantities as needed. -. Enter the amount that the organization was billed. This entry will -automatically propagate to the Paid field. -. You have the option to add charge types if applicable. Charge types are -additional charges that can be selected from the drop-down menu. Common charge -types include taxes and handling fees. -. You have four options for saving an invoice. - -- Click _Save_ to save the changes you have made while staying in the current -invoice. -- Click _Save & Clear_ to save the changes you have made and to replace the -current invoice with a new invoice so that you can continue invoicing items. -- Click _Prorate_ to save the invoice and prorate any additional charges, such -as taxes, across funds, if multiple funds have been used to pay the invoice. - -+ -[NOTE] -Prorating will only be applied to charge types that have the _Prorate?_ flag set -to true. This setting can be adjusted via _Administration_ -> -_Acquisitions Administration_ -> _Invoice Item Types_. -+ - -- Click _Close_. Choose this option when you have completed the invoice. This -option will also save any changes that have been made. Funds will be disencumbered when the invoice is closed. - -. You can re-open a closed invoice by clicking the link, _Re-open invoice_. This -link appears at the bottom of a closed invoice. - -=== Search for line items from an invoice === - -indexterm:[acquisitions,lineitems,searching for] -indexterm:[acquisitions,invoices,searching for lineitems] - -You can open an invoice, search for line items from -the invoice, and add your search results to a new or existing invoice. This -feature is especially useful when you want to populate an invoice with line -items from multiple purchase orders. - -In this example, we'll add line items to a new invoice: - -indexterm:[acquisitions,lineitems,adding] - -. 
Click _Acquisitions_ -> _Create Invoice_.
-. An invoice summary appears at the top of the invoice and includes the number
-of line items on the invoice and the expected cost of the items. This number
-will change as we add line items to the invoice.
-. Enter the invoice details (optional). If you do not enter the invoice
-details, then Evergreen will populate the _Provider_ and _Receiver_ fields with
-information from the line items.
-+
-NOTE: If you do not want to display the details, click _Hide Details_.
-+
-image::media/Search_for_line_items_from_an_invoice1.jpg[Search_for_line_items_from_an_invoice1]
-+
-. Click the _Search_ tab to add line items to an invoice.
-. Select your search criteria from the drop-down menu.
-. On the right side of the screen, _Limit to Invoiceable Items_ is checked by
-default. Invoiceable items are those that are on order, have not been
-cancelled, and have not yet been invoiced. Evergreen also filters out items
-that have already been added to an invoice. Finally, if this box is checked,
-and if you entered the invoice details at the top of the screen, then Evergreen
-will filter your search for items that have the same provider as the one that
-you entered. If you have not entered the invoice details, then Evergreen
-removes this limit.
-. Sort by title (optional). By default, results are listed by line item
-number. Check this box to sort by ascending title.
-. Build the results list progressively (optional). By default, new search
-results will replace previous results on the screen. Check this box for the
-search results list to build with each subsequent search. This option is useful
-for libraries that might search for line items by scanning an ISBN. Several
-ISBNs can be scanned and then the entire result set can be selected and moved
-to the invoice in a batch.
-. Click _Search_.
-+
-image::media/Search_for_line_items_from_an_invoice2.jpg[Search_for_line_items_from_an_invoice2]
-+
-. 
Use the _Next_ button to page through results, or select one or more line items and
-click _Add Selected Items to Invoice_.
-. The rows that you selected are highlighted, and the invoice summary at the
-top of the screen updates.
-+
-image::media/Search_for_line_items_from_an_invoice3.jpg[Search_for_line_items_from_an_invoice3]
-+
-. Click the _Invoice_ tab to see the updated invoice.
-. Evergreen automatically enters the number of items that was ordered in the
-# Invoiced and # Paid fields. Adjust these quantities as needed.
-. Enter the amount that the organization was billed. This entry will
-automatically propagate to the Paid field. The _Per Copy_ field calculates the
-cost of each copy by dividing the amount that was billed by the number of
-copies for which the library paid.
-
-image::media/Search_for_line_items_from_an_invoice5.jpg[Search_for_line_items_from_an_invoice5]
-
-=== Create an invoice for a purchase order ===
-
-You can create an invoice for all of the line items on a purchase order. With
-the exception of fields with drop-down menus, there are no limitations on the data that you can enter.
-
-. Open a purchase order.
-. Click _Create Invoice_.
-. Enter a Vendor Invoice ID. This number may be listed on the paper invoice
-sent from your vendor.
-. Choose a Receive Method from the drop-down menu. The system will default to
-_Paper_.
-. The Provider is generated from the purchase order and is entered by default.
-. Enter a note (optional).
-. Select a payment method from the drop-down menu (optional).
-. The Invoice Date is entered by default as the date that you create the
-invoice. You can change the date by clicking in the field. A calendar drops
-down.
-. Enter an Invoice Type (optional).
-. The Shipper defaults to the provider that was entered in the purchase order.
-. Enter a Payment Authorization (optional).
-. The Receiver defaults to the branch at which your workstation is registered.
-You can change the receiver by selecting an org unit from the drop-down menu. -+ -[NOTE] -The bibliographic line items are listed in the next section of the invoice. -Along with the _title_ and _author_ of the line items is a _summary of copies -ordered, received, invoiced, claimed,_ and _cancelled_. You can also view the -_amounts estimated, encumbered,_ and _paid_ for each line item. Finally, each -line item has a _line item ID_ and links to the _selection list_ (if used) and -the _purchase order_. -+ -. Evergreen automatically enters the number of items that was ordered in the -# Invoiced and # Paid fields. Adjust these quantities as needed. -. Enter the amount that the organization was billed. This entry will -automatically propagate to the Paid field. The _Per Copy_ field calculates the -cost of each copy by dividing the amount that was billed by the number of -copies for which the library paid. -. You have the option to add charge types if applicable. Charge types are -additional charges that can be selected from the drop-down menu. Common charge -types include taxes and handling fees. -. You have four options for saving an invoice. - -- Click _Save_ to save the changes you have made while staying in the current -invoice. -- Click _Save & Clear_ to save the changes you have made and to replace the -current invoice with a new invoice so that you can continue invoicing items. -- Click _Prorate_ to save the invoice and prorate any additional charges, such -as taxes, across funds, if multiple funds have been used to pay the invoice. - -+ -[NOTE] -Prorating will only be applied to charge types that have the Prorate? flag set -to true. This setting can be adjusted via _Administration_ -> -_Acquisitions Administration_ -> _Invoice Item Types_. -+ - -- Click _Close_. Choose this option when you have completed the invoice. This -option will also save any changes that have been made. Funds will be disencumbered when the invoice is closed. - -. 
You can re-open a closed invoice by clicking the link, _Re-open invoice_. This
-link appears at the bottom of a closed invoice.
-
-=== Link an existing invoice to a purchase order ===
-
-You can use the link invoice feature to link an existing invoice to a purchase
-order. For example, an invoice is received for a shipment with items on
-purchase order #1 and purchase order #2. When the invoice arrives, purchase
-order #1 is retrieved, and the invoice is created. To receive the items on
-purchase order #2, simply link the invoice to the purchase order. You do not
-need to recreate it.
-
-. Open a purchase order.
-. Click _Link Invoice_.
-. Enter the Invoice # and the Provider of the invoice to which you wish to link.
-. Click _Link_.
-
-image::media/acq_invoice_link.png[Link Invoice]
-
-== Electronic Invoicing ==
-
-indexterm:[acquisitions,invoices,electronic]
-
-Evergreen can receive electronic invoices from providers. To
-access an electronic invoice:
-
-. Configure EDI for your provider.
-. Evergreen will then receive invoices electronically from the provider.
-. Click _Acquisitions_ -> _Open Invoices_ to view a list of open invoices, or
-use the _General Search_ to retrieve invoices. Click a hyperlinked invoice
-number to view the invoice.
-
-image::media/Electronic_invoicing1.jpg[Electronic_invoicing1]
-
-== View an invoice ==
-
-You can view an invoice in one of four ways: view open invoices; view invoices
-on a purchase order; view invoices by searching specific invoice fields; view
-invoices attached to a line item.
-
-. To view open invoices, click _Acquisitions_ -> _Open invoices_. This opens
-the Acquisitions Search screen. The default fields search for open invoices.
-Click _Search_.
-+
-image::media/acq_invoice_view.png[Open Invoice Search]
-+
-. To view invoices on a purchase order, open a purchase order and click the
-_View Invoices_ link. The number in parentheses indicates the number of
-invoices that are attached to the purchase order.
-
-+
-image::media/acq_invoice_view-2.png[View Invoices from PO]
-+
-. To view invoices by searching specific invoice fields, see the section on
-searching the acquisitions module.
-. To view invoices for a line item, see the section on line item invoices.
-
-== Receive Items From an Invoice ==
-
-This feature enables users to receive items from an invoice. Staff can receive individual copies, or they can receive items in batch.
-
-=== Receive Items in Batch (Numeric Mode) ===
-
-In this example, we have created a purchase order, added line items and copies, and activated the purchase order. We will create an invoice from the purchase order, receive items, and invoice them. We will receive the items in batch from the invoice.
-
-1) Retrieve a purchase order.
-
-2) Click *Create Invoice*.
-
-image::media/Receive_Items_From_an_Invoice1.jpg[Receive_Items_From_an_Invoice1]
-
-3) The blank invoice appears. In the top half of the invoice, enter descriptive information about the invoice. In the bottom half of the invoice, enter the number of items for which you were invoiced, the amount that you were billed, and the amount that you paid.
-
-
-image::media/Receive_Items_From_an_Invoice2.jpg[Receive_Items_From_an_Invoice2]
-
-
-4) Click *Save*. You must choose a save option before you can receive items.
-
-
-5) The screen refreshes. In the top right corner of the screen, click *Receive Items*.
-
-
-6) The *Acquisitions Invoice Receiving* screen opens. By default, this screen enables users to receive items in batch, or *Numeric Mode*. You can select the number of copies that you want to receive; you are not receiving specific copies in this mode.
-
-
-7) Select the number of copies that you want to receive. By default, the number that you invoiced will appear. In this example, we will receive one copy of each title.
-
-
-NOTE: You cannot receive fewer items than 0 (zero) or more items than the number that you ordered.
-
-
-8) Click *Receive Selected Copies*.
-
-
-image::media/Receive_Items_From_an_Invoice4.jpg[Receive_Items_From_an_Invoice4]
-
-
-9) When you are finished receiving items, close the screen. You can repeat this process as you receive more copies.
-
-
-
-=== Receive Specific Copies (List Mode) ===
-
-In this example, we have created a purchase order, added line items and copies, and activated the purchase order. We will create an invoice from the purchase order, receive items, and invoice them. We will receive specific copies from the invoice. This function may be useful to libraries that purchase items that have been barcoded by their vendor.
-
-
-1) Complete steps 1-5 in the previous section.
-
-2) The *Acquisitions Invoice Receiving* screen by default enables users to receive items in batch, or *Numeric Mode*. Click *Use List Mode* to receive specific copies.
-
-3) Select the check boxes adjacent to the copies that you want to receive. Leave unchecked the copies that you do not want to receive.
-
-4) Click *Receive Selected Copies*.
-
-image::media/Receive_Items_From_an_Invoice5.jpg[Receive_Items_From_an_Invoice5]
-
-
-The screen will refresh. Copies that have not yet been received remain on the screen so that you can receive them when they arrive.
-
-
-5) When all copies on an invoice have been received, a message confirms that no copies remain to be received.
-
-6) The purchase order records that all items have been received.
- -image::media/Receive_Items_From_an_Invoice7.jpg[Receive_Items_From_an_Invoice7] - diff --git a/docs-antora/modules/acquisitions/pages/purchase_requests_management.adoc b/docs-antora/modules/acquisitions/pages/purchase_requests_management.adoc deleted file mode 100644 index 2de688c5c9..0000000000 --- a/docs-antora/modules/acquisitions/pages/purchase_requests_management.adoc +++ /dev/null @@ -1,133 +0,0 @@ -= Managing patron purchase requests = -:toc: - -== Introduction == - -indexterm:[purchase requests] - -Patron Requests can be used to track purchase suggestions from patrons in Evergreen. This feature allows purchase requests to be placed on selection lists to integrate with the Acquisitions module. Patron Requests can be accessed through the Acquisitions module under *Acquisitions -> Patron Requests* and through patron accounts under *Other -> Acquisition Patron Requests*. Requests can be placed and managed through both interfaces. - -== Place a Patron Request == - -. Go to *Acquisitions -> Patron Requests*. This interface is scoped by Patron Home Library and will default to the library your workstation is registered to. -.. Requests can also be placed directly through a patron account, in which case the interface will scope to the patron ID. -+ -image::media/patronrequests_requestgrid.PNG[Patron Requests Grid] -+ -. Click *Create Request* and a modal with the patron request form will appear. -. Create the request by filling out the following information: -.. _User Barcode_ (required): enter the barcode of the user that is placing the request -.. _User ID_: this field will populate automatically when the User Barcode is entered -.. _Request Date/Time_: this field will populate automatically -.. _Need Before Date/Time_: if applicable, set the date and time after which the patron is no longer interested in receiving this title -.. _Place Hold?_: check this box to place a hold on this title for this patron. 
Holds are placed when the bib and item record are created in the catalog as part of the acquisitions process.
-.. _Pickup Library_: pickup library for the hold. This field will default to the patron’s home library if the pickup library is not selected in the patron account.
-.. _Notify by Email When Hold is Ready_ and _Notify by Phone When Hold is Ready_: preferences set in the patron account will be used or can be set manually here.
-.. _Request Type_ (required): type of material requested
-.. _ISxN_
-.. _UPC_
-.. _Title_
-.. _Volume_
-.. _Author_
-.. _Publisher_
-.. _Publication Location_
-.. _Publication Date_
-.. _Article Title_: option available if Request Type is “Articles”
-.. _Article Pages_: option available if Request Type is “Articles”
-.. _Mentioned In_
-.. _Other Info_
-. Click *Save* at the bottom of the form.
-
-image::media/patronrequests_requestform.PNG[Patron Requests Form]
-
-
-== Actions for Requests ==
-
-After placing a Patron Request, a variety of actions can be taken by selecting the request, or right-clicking, and selecting Actions within either *Acquisitions -> Patron Requests* or through the patron account under *Other -> Acquisition Patron Requests*:
-
-* *Edit Request* - make changes to the request via the original request form. Edits can be made when the status of a request is New.
-* *View Request* - view a read-only version of the request form
-* *Retrieve Patron* - retrieve the account of the patron who placed the request
-* *Add Request to Selection List* - add the request to a new or existing Selection List in the Acquisitions module. The bibliographic information in the request will generate the MARC order record. From the selection list, the request will be processed through the acquisitions module and the status of the request itself will be updated accordingly.
-
-* *View Selection List* - view the Selection List a request has been added to (this option will be active only if the request is on a selection list)
-* *Set Hold on Requests* - allows you to indicate that a hold should be placed on the requested title, without needing to go in and edit the request. You can set a hold as long as the status of the request is New or Pending.
-* *Set No Hold on Requests* - allows you to indicate that a hold should not be placed on the requested title, without needing to go in and edit the request individually.
-* *Cancel Requests* - cancel the request and select a cancellation reason
-
-== Administration ==
-
-=== Request Status ===
-
-Patron Requests will use the following statuses:
-
-* *New* - This is the initial state for a newly created acquisition request. This is the only state from which a request is editable.
-* *Pending* - This is the state after a request is added to a selection list.
-* *Ordered, Hold Not Placed* - This is the state when an associated purchase order has been created and the request's Place Hold flag is false.
-* *Ordered, Hold Placed* - This is the state when the request's Place Hold flag is true, an associated purchase order has been created, and the bibliographic record and item for the request have been created in the catalog as part of the acquisitions process.
-* *Received* - This is the state when the line item on the linked purchase order has been marked as received.
-* *Fulfilled* - This is the state when an associated hold request has been fulfilled.
-* *Canceled* - This is the state when the acquisition request has been canceled.
-
-=== Notifications/Action Triggers ===
-
-The following email notifications are included with Evergreen, but are disabled by default. The notices can be enabled through the *Notifications/Action Triggers* interface under *Administration -> Local Administration*. The existing notices could also be modified to place a message in the *Patron Message Center*.
Any enabled notifications related to holds placed on requests will also be sent to patrons.
-
-* Email Notice: Acquisition Request created
-* Email Notice: Acquisition Request Rejected
-* Email Notice: Patron Acquisition Request marked On-Order
-* Email Notice: Patron Acquisition Request marked Cancelled
-* Email Notice: Patron Acquisition Request marked Received
-
-=== Permissions ===
-
-This feature includes one new permission and makes use of several existing permissions. The following permissions are required to manage patron requests:
-
-* CLEAR_PURCHASE_REQUEST
-** A new permission that allows users to clear completed requests
-** This permission has been added to the stock Acquisitions permission group
-** user_request.update will still be required for this sort of action
-** The stock permission mappings for the Acquisitions group will be changed to include this permission
-* CREATE_PICKLIST
-** Will allow the staff user to create a selection list.
-* VIEW_USER
-** Permission depth will apply to requests. If a user tries to view a patron request that is beyond the scope of their permissions, a permission denied message will appear with a prompt to log in with different credentials.
-* STAFF_LOGIN
-* user_request.create
-* user_request.view
-* user_request.update
-** This is checked when updating a request or canceling a request
-* user_request.delete
-
-== Placing purchase requests from a patron record ==
-
-indexterm:[patrons, purchase requests]
-
-Patrons may wish to suggest titles for your library to purchase. You can track these requests within Evergreen,
-whether or not you are using the acquisitions module for other purposes. This section describes how you can record
-these requests within a patron's record.
-
-. Retrieve the patron's record.
-
-. Select Other --> Acquisition Patron Requests. This takes you to the Acquisition Patron Requests Screen. CTRL+click or scrollwheel click to open this in a new browser tab.
-
-. 
The Acquisition Patron Requests Screen will show any other requests that this patron has made. You may sort the requests by clicking on the column headers. - -. To show canceled requests, click the _Show Canceled Requests_ checkbox. - -. To add the request, click the _Create Request_ button. -+ -NOTE: You will need the CREATE_PURCHASE_REQUEST permission to add a request. -+ -. The request type field is required. Every other field is optional, although it is recommended that you enter as much information about the -request as possible. - -. The _Pickup Library_ and _User ID_ fields will be filled in automatically. - -. _Request Date/Time_ and _User Barcode_ will be automatically recorded when the request is saved. - -. _Notify by Email When Hold is Ready_ and _Notify by Phone When Hold is Ready_ will pull in preferences from the patron account if left blank, or can be set manually here. - -. You have the option to automatically place a hold for the patron if your library decides to purchase the item. If you'd like Evergreen to -generate this hold, check the _Place Hold_ box. - -. When you have finished entering information about the request, click the _Save_ button. diff --git a/docs-antora/modules/acquisitions/pages/selection_lists_po.adoc b/docs-antora/modules/acquisitions/pages/selection_lists_po.adoc deleted file mode 100644 index 462d260611..0000000000 --- a/docs-antora/modules/acquisitions/pages/selection_lists_po.adoc +++ /dev/null @@ -1,323 +0,0 @@ -= Selection Lists and Purchase Orders = -:toc: - -== Selection Lists == - -Selection lists allow you to create, manage, and save lists of items -that you may want to purchase. To view your selection list, click -*Acquisitions* -> *My Selection Lists*. Use the general search to view selection lists created by other users. - -=== Create a selection list === - -Selection lists can be created in four areas within the module. 
Selection lists can be created when you xref:#brief_records[Add Brief Records], Upload MARC Order Records, or find records through the xref:#marc_federated_search[MARC Federated Search]. In each of these interfaces, you will find the Add to Selection List field. Enter the name of the selection list that you want to create in that field.
-
-Selection lists can also be created through the My Selection Lists interface:
-
-. Click *Acquisitions* -> *My Selection Lists*.
-. Click the New Selection List drop-down arrow.
-. Enter the name of the selection list in the box that appears.
-. Click Create.
-
-image::media/acq_selection_create.png[create selection list]
-
-=== Add items to a selection list ===
-
-You can add items to a selection list in one of four ways: xref:#brief_records[add a brief record]; upload MARC order records; add records through a xref:#marc_federated_search[federated search]; or use the View/Place Orders menu item in the catalog.
-
-=== Clone selection lists ===
-
-Cloning selection lists enables you to copy one selection list into a new selection list. You can maintain both copies of the list, or you can delete the previous list.
-
-. Click *Acquisitions* -> *My Selection Lists*.
-. Check the box adjacent to the list that you want to clone.
-. Click Clone Selected.
-. Enter a name into the box that appears, and click Clone.
-
-image::media/acq_selection_clone.png[clone selection list]
-
-=== Merge selection lists ===
-
-You can merge two or more selection lists into one selection list.
-
-
-. Click *Acquisitions* -> *My Selection Lists*.
-. Check the boxes adjacent to the selection lists that you want to merge, and click Merge Selected.
-. Choose the Lead Selection List from the drop-down menu. This is the list to which the items on the other list(s) will be transferred.
-. Click Merge.
-
-image::media/acq_selection_merge.png[merge selection list]
-
-=== Delete selection lists ===
-
-You can delete selection lists that you do not want to save.
You will not be able to retrieve these items through the General Search after you have deleted the list. You must delete all line items from a selection list before you can delete the list. - - -. Click *Acquisitions* -> *My Selection Lists*. -. Check the box adjacent to the selection list(s) that you want to delete. -. Click Delete Selected. - -=== Mark Ready for Selector === - -After an item has been added to a selection list or purchase order, you can mark it ready for selector. This step is optional but may be useful to individual workflows. - - -. If you want to mark part of a selection list ready for selector, then you can check the box(es) of the line item(s) that you wish to mark ready for selector. If you want to mark the entire list ready for selector, then skip to step 2. -. Click *Actions* -> *Mark Ready for Selector*. -. A pop up box will appear. Choose to mark the selected line items or all line items. -. Click Go. -. The screen will refresh. The marked line item(s) will be highlighted pink, and the status changes to selector-ready. - -image::media/acq_selection_mark_ready.png[mark ready] - -=== Convert selection list to purchase order === - -Use the Actions menu to convert a selection list to a purchase order. - - -. From a selection list, click *Actions* -> *Create Purchase Order*. -. A pop up box will appear. -. Select the ordering agency from the drop down menu. -. Enter the provider. -. Check the box adjacent to prepayment required if prepayment is required. -. Choose if you will add All Lineitems or Selected Lineitems to your purchase order. -. Check the box if you want to Import Bibs and Create Copies in the catalog. -. Click Submit. - - -[#purchase_orders] -== Purchase Orders == - -Purchase Orders allow you to keep track of orders and, if EDI is enabled, communicate with your provider. -To view purchase orders, click -*Acquisitions* -> *Purchase Orders*. - -=== Naming your purchase order === - -You can give your purchase order a name. 
- -When creating a purchase order or editing an existing purchase order, the purchase order name must be unique for the ordering agency. Evergreen will display a warning dialog to users, if they attempt to create or edit purchase order names that match the names of already existing purchase orders at the same ordering agency. The *Duplicate Purchase Order Name Warning Dialog* includes a link that will open the matching purchase order in a new tab. - -Purchase Order Names are case sensitive. - -*Duplicate PO Name Detection When Creating a New Purchase Order* - -image::media/po_name_detection_1.JPG[PO Name Detection 1] - -When a duplicate purchase order name is detected during the creation of a new purchase order, the user may: - -* Click *View PO* to view the purchase order with the matching name. The purchase order will open in a new tab. -* Click *Cancel* to cancel the creation of the new purchase order. -* Within the _Name (optional)_ field, enter a different, unique name for the new purchase order. - -If the purchase order name is unique for the ordering agency, the user will continue filling in the remaining fields and click *Save*. - -If the purchase order name is not unique for the ordering agency, the Save button will remain grayed out to the user until the purchase order is given a unique name. - -*Duplicate PO Name Detection When Editing the Name of an Existing Purchase Order* - -To change the name of an existing purchase order: - -. Within the purchase order, the _Name_ of the purchase order is a link (located at the top left-hand side of the purchase order). Click the PO Name. -. A new window will open, where users can rename the purchase order. -. Enter the new purchase order name. -. Click *OK*. - -image::media/po_name_detection_2.JPG[PO Name Detection 2] - -If the new purchase order name is unique for the ordering agency, the purchase order will be updated to reflect the new name. 
-If the purchase order name is not unique for the ordering agency, the purchase order will not be updated with the new name. Instead, the user will see the *Duplicate Purchase Order Name Warning Dialog* within the purchase order. - -image::media/po_name_detection_3.JPG[PO Name Detection 3] - -When a duplicate purchase order name is detected during the renaming of an existing purchase order, the user may: - -* Click *View PO* to view the purchase order with the matching name. The purchase order will open in a new tab. -* Repeat the steps to change the name of an existing purchase order and make the name unique. - -=== Activating your purchase order === - -When the appropriate criteria have been met the Activate Order button will appear and you can proceed with the following: - -. Click the button Activate Order. -. When you activate the order the bibliographic records and copies will be imported into the catalogue using the Vandelay interface, if not previously imported. See How to Load Bibliographic Records and Items into the Catalogue for instructions on using the Vandelay interface. -. The funds associated with the purchases will be encumbered. - -After you click *Activate Order*, you will be presented with the record import interface for records that are not already in the catalog. Once you complete entering in the parameters for the record import interface, the progress screen will appear. As of Evergreen 2.9, this progress screen consists of a progress bar in the foreground, and a tally of the following in the background of the bottom-left corner: - -* Lineitems processed -* Vandelay Records processed -* Bib Records Merged/Imported -* ACQ Copies Processed -* Debits Encumbered -* Real Copies Processed - -==== Activate Purchase Order without loading items ==== - -It is possible to activate a purchase order without loading items. Once the purchase order has been activated without loading items, it is not possible to load the items. 
This feature should only be used in situations where the copies have already been added to the catalogue, such as:

* Cleaning up pre-acquisitions backlog
* Direct purchases that have already been catalogued

To use this feature, click the *Activate Without Loading Items* button.

==== Activate Purchase Order with Zero Copies ====

By default, a purchase order cannot be activated if a line item on the
purchase order has zero copies. To activate a purchase order with line
items that have zero copies, check the box *Allow activation with
zero-copy lineitems*.

image::media/Zero_Copies1.jpg[Zero_Copies1]

=== Line item statuses ===

The purchase orders interface keeps track of various statuses that your
line items might be in. This section lists some of the statuses you might
see when looking at purchase orders.

==== Canceled and Delayed Items ====

In the purchase order interface, you can easily
differentiate between canceled and delayed items. Each label begins
with *Canceled* or *Delayed*. To view the list, click *Administration*
-> *Acquisitions Administration* -> *Cancel Reasons*.

The cancel/delay reason label is displayed as the line item status in the list of line items or as the copy status in the list of copies.

image::media/2_7_Enhancements_to_Canceled2.jpg[Canceled2]


image::media/2_7_Enhancements_to_Canceled4.jpg[Canceled4]

A delayed line item can now be canceled. You can mark a line item as delayed, and if the order later cannot be filled, you can change the line item's status to canceled. When delayed line items are canceled, the encumbrances are deleted.

Cancel/delay reasons now appear on the worksheet and the printable purchase order.

[NOTE]
========================
When all the copies of a line item are canceled through the Acquisitions interface,
the parent lineitem is also canceled. The cancel reason will be calculated based
on the settings of:

. 
The cancel reason for the last copy to be canceled if the cancel reason's
_Keep Debits_ setting is true.
. The cancel reason for any other copy on the line item if the cancel reason's
_Keep Debits_ setting is true.
. The cancel reason for the last copy to be canceled if no copies on the line
item have a cancel reason where _Keep Debits_ is true.
========================


==== Paid PO Line Items ====

Purchase Order line items are marked as "Paid" in red text when all non-canceled copies on the line item have been invoiced.

image::media/2_10_Lineitem_Paid.png[Paid Lineitem]


[#brief_records]
== Brief Records ==

Brief records are short bibliographic records with minimal information that are often used as placeholder records until items are received. Brief records can be added to selection lists or purchase orders and can be imported into the catalog. You can add brief records to new or existing selection lists. You can add brief records to new, pending, or on-order purchase orders.

=== Add brief records to a selection list ===

. Click *Acquisitions* -> *New Brief Record*. You can also add brief records to an existing selection list by clicking the Actions menu on the selection list and choosing Add Brief Record.
. Choose a selection list from the drop-down menu, or enter the name of a new selection list.
. Enter bibliographic information in the desired fields.
. Click Save Record.

image::media/acq_brief_record.png[]

=== Add brief records to purchase orders ===

You can add brief records to new or existing purchase orders.

. Open or create a purchase order. See the section on xref:#purchase_orders[purchase orders] for more information.
. Click Add Brief Record.
. Enter bibliographic information in the desired fields. Notice that the record is added to the purchase order that you just created.
. Click Save Record. 
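Conceptually, the brief record saved above is little more than a small set of descriptive fields acting as a placeholder until full cataloging happens. A rough sketch follows; the dict layout and the title-only minimum are assumptions for illustration, not Evergreen's actual data model:

```python
# A brief record holds just enough data to identify the title on order.
# The MARC tags in the comments show where each value conventionally
# lives in a full bibliographic record.
brief_record = {
    "title": "Example Title",      # MARC 245$a
    "author": "Author, Example",   # MARC 100$a
    "isbn": "9780000000000",       # MARC 020$a
    "publisher": "Example Press",  # MARC 264$b
    "pubdate": "2020",             # MARC 264$c
}

def is_usable_placeholder(record: dict) -> bool:
    """Assume a non-empty title is the minimum for a useful placeholder."""
    return bool(record.get("title", "").strip())

print(is_usable_placeholder(brief_record))     # True
print(is_usable_placeholder({"isbn": "123"}))  # False: no title
```

When the real item arrives, the placeholder is typically overlaid or replaced by a full catalog record.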

image::media/acq_brief_record-2.png[]

[#marc_federated_search]
== MARC Federated Search ==

The MARC Federated Search enables you to import bibliographic records into a selection list or purchase order from a Z39.50 source.

. Click *Acquisitions* -> *MARC Federated Search*.
. Check the boxes of Z39.50 services that you want to search. Your local Evergreen Catalog is checked by default. Click Submit.
+
image::media/acq_marc_search.png[search form]
+
. A list of results will appear. Click the "Copies" link to add copy information to the line item. See the xref:#line_items[section on Line Items] for more information.
. Click the Notes link to add notes or line item alerts to the line item. See the xref:#line_items[section on Line Items] for more information.
. Enter a price in the "Estimated Price" field.
. You can save the line item(s) to a selection list by checking the box
on the line item and clicking *Actions* -> *Save Items to Selection
List*. You can also create a purchase order from the line item(s) by
checking the box on the line item and clicking *Actions* -> *Create
Purchase Order*.

image::media/acq_marc_search-2.png[line item]

[#line_items]
== Line Items ==

=== Return to Line Item ===

This feature enables you to return to a specific line item on a selection list,
purchase order, or invoice after you have navigated away from the page that
contained the line item. This feature is especially useful when you must
identify a line item in a long list. After working with a line item, you can
return to your place in the search results or the list of line items.

To use this feature, select a line item, and then, depending on the location of
the line item, click *Return* or *Return to search*. Evergreen will take you
back to the specific line item in your search and highlight the line item with a
colored box.

For example, you retrieve a selection list, find a line item to examine, and
click the *Copies* link. 
After editing the copies, you click *Return*.
Evergreen takes you back to your selection list and highlights the line item
that you viewed.

image::media/Return_to_line_item1.jpg[Return_to_line_item1]

This feature is available in _General Search Results_, _Purchase Orders_, and
_Selection Lists_, whenever any of the following links are available:

* Selection List
* Purchase Order
* Copies
* Notes
* Worksheet

This feature is available in Invoices whenever any of the following links are
available:

* Title
* Selection List
* Purchase Order

=== Display a Count of Existing Copies on Selection List and Purchase Order Lineitems ===

When displaying Acquisitions lineitems within the Selection List and Purchase Order interfaces, Evergreen displays a count of existing catalog copies on the lineitem. The count of existing catalog copies refers to the number of copies owned at the ordering agency and/or the ordering agency's child organization units.

The counts display for lineitems that have a direct link to a catalog record. Generally, this includes lineitems created as "on order" based on an existing catalog record and lineitems where "Load Bibs and Items" has been applied.

The count of existing copies does not include copies that are in either a Lost or a Missing status.

The existing copy count displays in the link "bar" located below the Order Identifier within the lineitem.

If no existing copies are found, a "0" (zero) will display in plain text.

If the existing copy count is greater than zero, then the count will display in bold and red on the lineitem.

image::media/display_copy_count_1.JPG[Display Copy Count 1]

The user may also hover over the existing copy count to view the accompanying tooltip. 
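The counting rule described above — copies owned at the ordering agency or any of its child org units, excluding Lost and Missing — can be sketched as follows. The data shapes here are illustrative assumptions, not Evergreen's schema:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    circ_lib: str   # owning org unit (hypothetical field name)
    status: str     # e.g. "Available", "Lost", "Missing"

def existing_copy_count(copies: list[Copy], agency: str,
                        org_tree: dict[str, list[str]]) -> int:
    """Count copies owned at `agency` or any descendant org unit,
    skipping copies in Lost or Missing status."""
    # Collect the ordering agency plus all of its descendants.
    in_scope, stack = set(), [agency]
    while stack:
        ou = stack.pop()
        in_scope.add(ou)
        stack.extend(org_tree.get(ou, []))
    return sum(1 for c in copies
               if c.circ_lib in in_scope and c.status not in ("Lost", "Missing"))

tree = {"SYS1": ["BR1", "BR2"]}  # SYS1 is the parent of branches BR1 and BR2
copies = [Copy("BR1", "Available"),  # counted: in scope, countable status
          Copy("BR2", "Lost"),       # skipped: Lost status
          Copy("BR3", "Available")]  # skipped: outside the agency's subtree
print(existing_copy_count(copies, "SYS1", tree))  # 1
```

A zero result corresponds to the plain-text "0" on the lineitem; anything greater is what displays in bold red.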

image::media/display_copy_count_2.JPG[Display Copy Count 2]


diff --git a/docs-antora/modules/acquisitions/pages/vandelay_acquisitions_integration.adoc b/docs-antora/modules/acquisitions/pages/vandelay_acquisitions_integration.adoc
deleted file mode 100644
index 7819cee30d..0000000000
--- a/docs-antora/modules/acquisitions/pages/vandelay_acquisitions_integration.adoc
+++ /dev/null
@@ -1,223 +0,0 @@
= Load MARC Order Records =
:toc:

== Introduction ==

The Acquisitions Load MARC Order Record interface enables you to add MARC
records to selection lists and purchase orders and upload the records into the
catalog. You can both create and activate purchase orders in one step from this
interface. You can also load bibs and items into the catalog.

Leveraging the match sets available in the cataloging MARC Batch Import
interface, you can also utilize record matching mechanisms to prevent the
creation of duplicate records.

For detailed instructions on record matching and importing, see
the cataloging manual.

== Basic Upload Options ==
. Click *Acquisitions* -> *Load MARC Order Records*.
. If you want to upload the MARC records to a new purchase order, then
check _Create Purchase Order_.
. If you want to activate the purchase order at the time of creation, then
check _Activate Purchase Order_.
. Enter the name of the *Provider*. The text will auto-complete.
. Select an org unit from the drop-down menu. The context org unit is the org
unit responsible for placing and managing the order. It defines what org unit
settings (e.g., copy locations) are in scope, what fiscal year to use, who is
allowed to view/modify the PO, where the items should be delivered, and the EDI
SAN. In the case of a multi-branch system uploading records for multiple
branches, choosing the system is probably best. Single-branch libraries or
branches responsible for their own orders should probably select the branch.
. 
If you want to upload the records to a selection list, you can select a list
from the drop-down menu, or type in the name of the selection list that you
want to create.
. Select a *Fiscal Year* from the drop-down menu that matches the fiscal year
of the funds that will be used for the order. If no fiscal year is selected, the
system will use the organizational unit's default fiscal year stored in the
database. If no fiscal year is set, the system will default to the current
calendar year.

image::media/load_marc_order_records.png[Acquisitions MARC upload screen]


== Record Matching Options ==
Use the options below the horizontal rule to have the system check for matching
records before importing an order record.

. Create a queue to which you can upload your records, or add your records to an existing queue.
. Select a *Record Match Set* from the drop-down menu.
. Select a *Merge Profile*. Merge profiles enable you to specify which tags
should be removed or preserved in incoming records.
. Select a *Record Source* from the drop-down menu.
. If you want to automatically import records on upload, select one or more of
the following options.
 .. *Import Non-Matching Records* - import any records that don't have a match
 in the system.
 .. *Merge on Exact Match (901c)* - use only for records that will match on
 the 901c field.
 .. *Merge on Single Match* - import records that only have one match in the
 system.
 .. *Merge on Best Match* - if more than one match is found in the catalog for
 a given record, Evergreen will attempt to perform the best match as defined
 by the match score.
. To only import records that have a quality equal to or greater than the
existing record, enter a *Best/Single Match Minimum Quality Ratio*. Divide the
incoming record quality score, as determined by the match set's quality
metrics, by the record quality score of the best match that exists in the
catalog. 
If you want to ensure that the inbound record is only imported when it
has a higher quality than the best match, then you must enter a ratio that is
higher than 1, such as 1.1. If you want to bypass all quality restrictions, enter
a 0 (zero) in this field.
. Select an *Insufficient Quality Fall-Through Profile* if desired. This field
enables you to indicate that if the inbound record does not meet the
configured quality standards, then you may still import the record using an
alternate merge profile. This field is typically used for selecting a merge
profile that allows the user to import holdings attached to a lower-quality
record without replacing the existing (target) record with the incoming record.
This field is optional.
. If your order records contain holdings information, by default, Evergreen
will load them as acquisitions copies. (Note: These can be overlaid with real copies
during the MARC batch importing process.) Or you can select *Load Items for
Imported Records* to load them as live copies that display in the catalog.

image::media/load_marc_order_records.png[Acquisitions MARC upload screen]


== Default Upload Settings ==

You can set default upload values by modifying the following settings in
*Administration* -> *Local Administration* -> *Library Settings Editor*:

- Upload Activate PO
- Upload Create PO
- Upload Default Insufficient Quality Fall-Thru Profile
- Upload Default Match Set
- Upload Default Merge Profile
- Upload Default Min. 
Quality Ratio
- Upload Default Provider
- Upload Import Non Matching by Default
- Upload Load Items for Imported Records by Default
- Upload Merge on Best Match by Default
- Upload Merge on Exact Match by Default
- Upload Merge on Single Match by Default

image::media/acq_upload_library_settings.png[Acq upload settings in Library Settings Editor]


== Sticky Settings ==

If the above default settings are not implemented, the selections/values used
in the following fields will be sticky and will automatically populate the
fields the next time the *Load MARC Order Records* screen is opened:

- Create Purchase Order
- Activate Purchase Order
- Context Org Unit
- Record Match Set
- Merge Profile
- Import Non-Matching Records
- Merge on Exact Match (901c)
- Merge on Single Match
- Merge on Best Match
- Best/Single Match Minimum Quality Ratio
- Insufficient Quality Fall-Through Profile
- Load Items for Imported Records

== Use Cases for MARC Order Upload Form ==

You can add items to a selection list or purchase order and ignore the record
matching options, or you can use both acquisitions and cataloging functions. In
these examples, you will use both functions.

*Example 1*: Using the Acquisitions MARC Batch Load interface, upload MARC
records to a selection list and import queue, and match queued records with
existing catalog records.

In this example, an acquisitions librarian has received a batch of MARC records
from a vendor. She will add the records to a selection list and a Vandelay
record queue.

A cataloger will later view the queue, edit the records, and import them into
the catalog.

. Click *Acquisitions* -> *Load MARC Order Records*.
. Add MARC order records to a *Selection list* and/or a *Purchase Order*.
Check the box to create a purchase order if desired.
. Select a *Provider* from the drop-down menu, or begin typing the code for the provider, and the field will auto-fill.
. 
Select a *Context Org Unit* from the drop-down menu, or begin typing the code
for the context org unit, and the field will auto-fill.
. Select a *Selection List* from the drop-down menu, or begin typing the name
of the selection list. You can create a new list, or the field will auto-fill.
. Create a new record import queue, or upload the records to an existing
queue.
. Select a *Record Match Set*.
. Browse your computer to find the MARC file, and click *Upload*.
+
image::media/Vandelay_Integration_into_Acquisitions1.jpg[Vandelay_Integration_into_Acquisitions1]
+
. The processed items appear at the bottom of the screen.
+
image::media/Vandelay_Integration_into_Acquisitions2.jpg[Vandelay_Integration_into_Acquisitions2]
. You can click the link(s) to access the selection list or the import queue.
Click the link to *View Selection List*.
. Look at the first line item. The line item has not yet been linked to the
catalog, but it is linked to a record import queue. Click the link to the
*queue* to examine the MARC record.
+
image::media/Vandelay_Integration_into_Acquisitions3.jpg[Vandelay_Integration_into_Acquisitions3]
. The batch import interface opens in a new tab. The bibliographic records
appear in the queue. Records that have matches are identified in the queue. You
can edit these records and/or import them into the catalog, completing the
process.

image::media/Vandelay_Integration_into_Acquisitions4.jpg[Vandelay_Integration_into_Acquisitions4]

*Example 2*: Using the Acquisitions MARC Batch Load interface, upload MARC
records to a selection list, and use the Vandelay options to import the records
directly into the catalog. The Vandelay options will enable you to match
incoming records with existing catalog records.

In this example, a librarian will add MARC records to a selection list, create
criteria for matching incoming and existing records, and import the matching
and non-matching records into the catalog.

. 
Click *Acquisitions* -> *Load MARC Order Records*.
. Add MARC order records to a *Selection list* and/or a *Purchase Order*.
Check the box to create a purchase order if desired.
. Select a *Provider* from the drop-down menu, or begin typing the code for the
provider, and the field will auto-fill.
. Select a *Context Org Unit* from the drop-down menu, or begin typing the code for the context org unit, and the field will auto-fill.
. Select a *Selection List* from the drop-down menu, or begin typing the name
of the selection list. You can create a new list, or the field will auto-fill.
. Create a new record import queue, or upload the records to an existing queue.
. Select a *Record Match Set*.
. Select *Merge Profile* -> *Match-Only Merge*.
. Check the boxes adjacent to *Import Non-Matching Records* and *Merge on Best
Match*.
. Browse your computer to find the MARC file, and click *Upload*.
+
image::media/Vandelay_Integration_into_Acquisitions5.jpg[Vandelay_Integration_into_Acquisitions5]
+
. Click the link to *View Selection List*. Line items that do not match
existing catalog records on title and ISBN contain the link, *link to catalog*.
This link indicates that you could link the line item to a catalog record, but
currently, no match exists between the line item and catalog records. Line
items that do have matching records in the catalog contain the link, *catalog*.
+
image::media/Vandelay_Integration_into_Acquisitions6.jpg[Vandelay_Integration_into_Acquisitions6]
+
. Click the *catalog* link to view the line item in the catalog.

*Permissions to use this Feature*

IMPORT_MARC - Using the batch importer to create new bib records requires the
IMPORT_MARC permission (same as open-ils.cat.biblio.record.xml.import). 
If the -permission fails, the queued record will fail import and be stamped with a new -"import.record.perm_failure" import error - -IMPORT_ACQ_LINEITEM_BIB_RECORD_UPLOAD - This allows interfaces leveraging -the batch importer, such as Acquisitions, to create a higher barrier to entry. -This permission prevents users from creating new bib records directly from the -ACQ vendor MARC file upload interface. diff --git a/docs-antora/modules/admin/_attributes.adoc b/docs-antora/modules/admin/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/admin/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets1.jpg b/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets1.jpg deleted file mode 100644 index 9ecdc58489..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets1.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets2.jpg b/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets2.jpg deleted file mode 100644 index 4bea42d834..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets2.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets4.jpg b/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets4.jpg deleted file mode 100644 index b563a56400..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets4.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets5.jpg b/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets5.jpg deleted file mode 100644 index 
a853f4b56c..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets5.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets6.jpg b/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets6.jpg deleted file mode 100644 index 646a4fa9ce..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets6.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets_Fields.png b/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets_Fields.png deleted file mode 100644 index 999f791dfa..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets_Fields.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets_Fields_Edit.png b/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets_Fields_Edit.png deleted file mode 100644 index e956a11c67..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Authority_Control_Sets_Fields_Edit.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Authority_Server_Admin_Menu.png b/docs-antora/modules/admin/assets/images/media/Authority_Server_Admin_Menu.png deleted file mode 100644 index fe01149f63..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Authority_Server_Admin_Menu.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search1.jpg b/docs-antora/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search1.jpg deleted file mode 100644 index 6cbf623590..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search1.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search2.jpg 
b/docs-antora/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search2.jpg deleted file mode 100644 index c17abdb305..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search2.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Barcode_Check_In.png b/docs-antora/modules/admin/assets/images/media/Barcode_Check_In.png deleted file mode 100644 index 454cc4103d..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Barcode_Check_In.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Barcode_Checkout_Item_Barcode.png b/docs-antora/modules/admin/assets/images/media/Barcode_Checkout_Item_Barcode.png deleted file mode 100644 index bf2592e4bc..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Barcode_Checkout_Item_Barcode.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Barcode_Checkout_Patron_Barcode.png b/docs-antora/modules/admin/assets/images/media/Barcode_Checkout_Patron_Barcode.png deleted file mode 100644 index 6710a71a8a..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Barcode_Checkout_Patron_Barcode.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Barcode_Item_Status.png b/docs-antora/modules/admin/assets/images/media/Barcode_Item_Status.png deleted file mode 100644 index 59e2e45836..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Barcode_Item_Status.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Barcode_OPAC_Staff_Place_Hold.png b/docs-antora/modules/admin/assets/images/media/Barcode_OPAC_Staff_Place_Hold.png deleted file mode 100644 index 116eaa50b9..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Barcode_OPAC_Staff_Place_Hold.png and /dev/null differ diff --git 
a/docs-antora/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_21.jpg b/docs-antora/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_21.jpg deleted file mode 100644 index 825b46c2f6..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_21.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_22.jpg b/docs-antora/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_22.jpg deleted file mode 100644 index 1d3f8b621b..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_22.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Core_Source_1.jpg b/docs-antora/modules/admin/assets/images/media/Core_Source_1.jpg deleted file mode 100644 index a53710cfe4..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Core_Source_1.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/ECHClosedDatesEditorAddClosing.png b/docs-antora/modules/admin/assets/images/media/ECHClosedDatesEditorAddClosing.png deleted file mode 100644 index b549dd15af..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/ECHClosedDatesEditorAddClosing.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/ECHClosingSnowDay.png b/docs-antora/modules/admin/assets/images/media/ECHClosingSnowDay.png deleted file mode 100644 index 0ba89918c2..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/ECHClosingSnowDay.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/ECHEditClosing.png b/docs-antora/modules/admin/assets/images/media/ECHEditClosing.png deleted file mode 100644 index d2f67e06d4..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/ECHEditClosing.png and /dev/null differ diff --git 
a/docs-antora/modules/admin/assets/images/media/ECHEditClosingModal.png b/docs-antora/modules/admin/assets/images/media/ECHEditClosingModal.png deleted file mode 100644 index 62d8083cf3..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/ECHEditClosingModal.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingConstruction.png b/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingConstruction.png deleted file mode 100644 index 27d4686cfb..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingConstruction.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingDetailed.png b/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingDetailed.png deleted file mode 100644 index c85b6460ee..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingDetailed.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingDone.png b/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingDone.png deleted file mode 100644 index 2515a78947..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingDone.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingMultipleDays.png b/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingMultipleDays.png deleted file mode 100644 index 62083b0fd3..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/ECHLibraryClosingMultipleDays.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/Expanding_the_Work_Log1.jpg b/docs-antora/modules/admin/assets/images/media/Expanding_the_Work_Log1.jpg deleted file mode 100644 index 9e899af734..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/Expanding_the_Work_Log1.jpg and /dev/null differ diff --git 
a/docs-antora/modules/admin/assets/images/media/Expanding_the_Work_Log2.jpg b/docs-antora/modules/admin/assets/images/media/Expanding_the_Work_Log2.jpg
deleted file mode 100644
index af3a9b3674..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Expanding_the_Work_Log2.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Fiscal_Rollover1.jpg b/docs-antora/modules/admin/assets/images/media/Fiscal_Rollover1.jpg
deleted file mode 100644
index cb4b17366b..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Fiscal_Rollover1.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location1.jpg b/docs-antora/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location1.jpg
deleted file mode 100644
index 28d0402886..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location1.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location2.jpg b/docs-antora/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location2.jpg
deleted file mode 100644
index d3a90756f0..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location2.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Org_Unit_Prox_Adj1.png b/docs-antora/modules/admin/assets/images/media/Org_Unit_Prox_Adj1.png
deleted file mode 100644
index 25e3377fdc..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Org_Unit_Prox_Adj1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Org_Unit_Prox_Adj2.png b/docs-antora/modules/admin/assets/images/media/Org_Unit_Prox_Adj2.png
deleted file mode 100644
index 9ccf7e5fa6..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Org_Unit_Prox_Adj2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group1.jpg b/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group1.jpg
deleted file mode 100644
index 7be1b6f554..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group1.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group2.png b/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group2.png
deleted file mode 100644
index bc9613a422..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group3.jpg b/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group3.jpg
deleted file mode 100644
index fd84093379..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group3.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging1.png b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging1.png
deleted file mode 100644
index 06fa12c6a5..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging11.png b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging11.png
deleted file mode 100644
index 8f6de4c8a9..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging11.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging12.jpg b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging12.jpg
deleted file mode 100644
index cf012f0ff4..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging12.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging13.jpg b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging13.jpg
deleted file mode 100644
index 541fc26811..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging13.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging2.png b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging2.png
deleted file mode 100644
index b0ed58d237..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging3.jpg b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging3.jpg
deleted file mode 100644
index f03f42e5a9..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging3.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging4.jpg b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging4.jpg
deleted file mode 100644
index ef001919d4..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging4.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging5.png b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging5.png
deleted file mode 100644
index 295a5c4a8e..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging5.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging6.png b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging6.png
deleted file mode 100644
index 5e153b18db..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging6.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging7.jpg b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging7.jpg
deleted file mode 100644
index cf012f0ff4..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging7.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging8.png b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging8.png
deleted file mode 100644
index 3d00d42f1c..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging8.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging9.png b/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging9.png
deleted file mode 100644
index 4130295545..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/SMS_Text_Messaging9.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Saved_Catalog_Searches_2_21.jpg b/docs-antora/modules/admin/assets/images/media/Saved_Catalog_Searches_2_21.jpg
deleted file mode 100644
index ce22623f15..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Saved_Catalog_Searches_2_21.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/Saved_Catalog_Searches_2_22.jpg b/docs-antora/modules/admin/assets/images/media/Saved_Catalog_Searches_2_22.jpg
deleted file mode 100644
index 426e3994bf..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/Saved_Catalog_Searches_2_22.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/User_Activity_Types1A.jpg b/docs-antora/modules/admin/assets/images/media/User_Activity_Types1A.jpg
deleted file mode 100644
index 69d6ab4d39..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/User_Activity_Types1A.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/User_Activity_Types2A.jpg b/docs-antora/modules/admin/assets/images/media/User_Activity_Types2A.jpg
deleted file mode 100644
index 27a6ce60d7..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/User_Activity_Types2A.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/acq_marc_search-2.png b/docs-antora/modules/admin/assets/images/media/acq_marc_search-2.png
deleted file mode 100644
index f991a6d423..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/acq_marc_search-2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/acq_marc_search.png b/docs-antora/modules/admin/assets/images/media/acq_marc_search.png
deleted file mode 100644
index 391ae435a2..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/acq_marc_search.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/auth_browse_infra1.png b/docs-antora/modules/admin/assets/images/media/auth_browse_infra1.png
deleted file mode 100644
index a68f8afdee..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/auth_browse_infra1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/auth_browse_infra2.png b/docs-antora/modules/admin/assets/images/media/auth_browse_infra2.png
deleted file mode 100644
index e08b9c5a67..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/auth_browse_infra2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/autorenew_circdur.PNG b/docs-antora/modules/admin/assets/images/media/autorenew_circdur.PNG
deleted file mode 100644
index eab0ae240a..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/autorenew_circdur.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/autorenew_itemsout.PNG b/docs-antora/modules/admin/assets/images/media/autorenew_itemsout.PNG
deleted file mode 100644
index 9b044ccabe..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/autorenew_itemsout.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/autorenew_norenewnotice.PNG b/docs-antora/modules/admin/assets/images/media/autorenew_norenewnotice.PNG
deleted file mode 100644
index e4449ddd2c..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/autorenew_norenewnotice.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/autorenew_renewnotice.PNG b/docs-antora/modules/admin/assets/images/media/autorenew_renewnotice.PNG
deleted file mode 100644
index 311eb1fee0..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/autorenew_renewnotice.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/back_to_results.png b/docs-antora/modules/admin/assets/images/media/back_to_results.png
deleted file mode 100644
index b460a349e3..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/back_to_results.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/best_hold_sort_order1.jpg b/docs-antora/modules/admin/assets/images/media/best_hold_sort_order1.jpg
deleted file mode 100644
index 6aba665e93..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/best_hold_sort_order1.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/best_hold_sort_order2.jpg b/docs-antora/modules/admin/assets/images/media/best_hold_sort_order2.jpg
deleted file mode 100644
index d8e22d6991..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/best_hold_sort_order2.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/blu-ray.png b/docs-antora/modules/admin/assets/images/media/blu-ray.png
deleted file mode 100644
index d44dfe8b38..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/blu-ray.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/book.png b/docs-antora/modules/admin/assets/images/media/book.png
deleted file mode 100644
index 2800684510..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/book.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-1.png b/docs-antora/modules/admin/assets/images/media/booking-create-bookable-1.png
deleted file mode 100644
index 7ddded0c87..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-2.png b/docs-antora/modules/admin/assets/images/media/booking-create-bookable-2.png
deleted file mode 100644
index df9a3a47b7..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-3.png b/docs-antora/modules/admin/assets/images/media/booking-create-bookable-3.png
deleted file mode 100644
index 49ed80173d..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-3.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-4.png b/docs-antora/modules/admin/assets/images/media/booking-create-bookable-4.png
deleted file mode 100644
index 6c7128d649..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-4.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-5.png b/docs-antora/modules/admin/assets/images/media/booking-create-bookable-5.png
deleted file mode 100644
index c501d47cf3..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-5.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-6.png b/docs-antora/modules/admin/assets/images/media/booking-create-bookable-6.png
deleted file mode 100644
index 2261a9d6a5..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-bookable-6.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-2.png b/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-2.png
deleted file mode 100644
index ff517c5e87..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-3.png b/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-3.png
deleted file mode 100644
index d7a7e384f9..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-3.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-4.png b/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-4.png
deleted file mode 100644
index 0f7317e495..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-4.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-5.png b/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-5.png
deleted file mode 100644
index 784d95ad00..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype-5.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype_webclient-1.png b/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype_webclient-1.png
deleted file mode 100644
index 6ff2230bbb..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/booking-create-resourcetype_webclient-1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/braille.png b/docs-antora/modules/admin/assets/images/media/braille.png
deleted file mode 100644
index 693d937851..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/braille.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/caed_6.jpg b/docs-antora/modules/admin/assets/images/media/caed_6.jpg
deleted file mode 100644
index 8f9fe85492..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/caed_6.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/casaudiobook.png b/docs-antora/modules/admin/assets/images/media/casaudiobook.png
deleted file mode 100644
index 8352607bfa..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/casaudiobook.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/casmusic.png b/docs-antora/modules/admin/assets/images/media/casmusic.png
deleted file mode 100644
index f52327c672..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/casmusic.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/cdaudiobook.png b/docs-antora/modules/admin/assets/images/media/cdaudiobook.png
deleted file mode 100644
index 03d710c04c..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/cdaudiobook.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/cdmusic.png b/docs-antora/modules/admin/assets/images/media/cdmusic.png
deleted file mode 100644
index be5e341c7d..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/cdmusic.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/closed_dates.png b/docs-antora/modules/admin/assets/images/media/closed_dates.png
deleted file mode 100644
index 2839d32157..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/closed_dates.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/coded-value-1.png b/docs-antora/modules/admin/assets/images/media/coded-value-1.png
deleted file mode 100644
index 9530027d68..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/coded-value-1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/column_picker_config_widths.png b/docs-antora/modules/admin/assets/images/media/column_picker_config_widths.png
deleted file mode 100644
index aca3c5ac07..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/column_picker_config_widths.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/column_picker_dojo.png b/docs-antora/modules/admin/assets/images/media/column_picker_dojo.png
deleted file mode 100644
index 5a448efbcf..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/column_picker_dojo.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/column_picker_popup.png b/docs-antora/modules/admin/assets/images/media/column_picker_popup.png
deleted file mode 100644
index 87e5168d6a..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/column_picker_popup.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/column_picker_web.png b/docs-antora/modules/admin/assets/images/media/column_picker_web.png
deleted file mode 100644
index fff684591c..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/column_picker_web.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/column_picker_web_save.png b/docs-antora/modules/admin/assets/images/media/column_picker_web_save.png
deleted file mode 100644
index 0d390be785..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/column_picker_web_save.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/copy_status_add.png b/docs-antora/modules/admin/assets/images/media/copy_status_add.png
deleted file mode 100644
index 8c01477344..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/copy_status_add.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/copy_status_delete.png b/docs-antora/modules/admin/assets/images/media/copy_status_delete.png
deleted file mode 100644
index 525a84ce72..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/copy_status_delete.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/copy_status_edit.png b/docs-antora/modules/admin/assets/images/media/copy_status_edit.png
deleted file mode 100644
index 9bb3a83386..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/copy_status_edit.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/copytags1.PNG b/docs-antora/modules/admin/assets/images/media/copytags1.PNG
deleted file mode 100644
index aca37bb614..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/copytags1.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/copytags2.PNG b/docs-antora/modules/admin/assets/images/media/copytags2.PNG
deleted file mode 100644
index fa20970097..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/copytags2.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/copytags3.PNG b/docs-antora/modules/admin/assets/images/media/copytags3.PNG
deleted file mode 100644
index 6dd1447a79..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/copytags3.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/copytags4.PNG b/docs-antora/modules/admin/assets/images/media/copytags4.PNG
deleted file mode 100644
index 8e7cfb5563..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/copytags4.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/create-edi-accounts-2.png b/docs-antora/modules/admin/assets/images/media/create-edi-accounts-2.png
deleted file mode 100644
index 59f8ad7e1f..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/create-edi-accounts-2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/create-edi-accounts-3.png b/docs-antora/modules/admin/assets/images/media/create-edi-accounts-3.png
deleted file mode 100644
index 6a883b4300..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/create-edi-accounts-3.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/create-edi-accounts-4.png b/docs-antora/modules/admin/assets/images/media/create-edi-accounts-4.png
deleted file mode 100644
index 8405fee96b..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/create-edi-accounts-4.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/create-edi-accounts-5.png b/docs-antora/modules/admin/assets/images/media/create-edi-accounts-5.png
deleted file mode 100644
index 277d799480..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/create-edi-accounts-5.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/cvmpage_4.jpg b/docs-antora/modules/admin/assets/images/media/cvmpage_4.jpg
deleted file mode 100644
index b5a4ff00ae..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/cvmpage_4.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/dvd.png b/docs-antora/modules/admin/assets/images/media/dvd.png
deleted file mode 100644
index b7222a870f..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/dvd.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/eaudio.png b/docs-antora/modules/admin/assets/images/media/eaudio.png
deleted file mode 100644
index d3f289d70f..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/eaudio.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/ebook.png b/docs-antora/modules/admin/assets/images/media/ebook.png
deleted file mode 100644
index e07e467193..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/ebook.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/editrad_2.jpg b/docs-antora/modules/admin/assets/images/media/editrad_2.jpg
deleted file mode 100644
index d49200ada6..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/editrad_2.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/enter-library-san-2.png b/docs-antora/modules/admin/assets/images/media/enter-library-san-2.png
deleted file mode 100644
index 876cca0f07..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/enter-library-san-2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/enter-provider-san-1.png b/docs-antora/modules/admin/assets/images/media/enter-provider-san-1.png
deleted file mode 100644
index 3f6037d2fe..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/enter-provider-san-1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/enter-provider-san-2.png b/docs-antora/modules/admin/assets/images/media/enter-provider-san-2.png
deleted file mode 100644
index acd6f05ee5..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/enter-provider-san-2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/equip.png b/docs-antora/modules/admin/assets/images/media/equip.png
deleted file mode 100644
index 39484cb44f..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/equip.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/event_def_details.png b/docs-antora/modules/admin/assets/images/media/event_def_details.png
deleted file mode 100644
index cfa21b72aa..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/event_def_details.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/event_def_details_2.png b/docs-antora/modules/admin/assets/images/media/event_def_details_2.png
deleted file mode 100644
index 6bb189728e..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/event_def_details_2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/evideo.png b/docs-antora/modules/admin/assets/images/media/evideo.png
deleted file mode 100644
index f8788ea56e..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/evideo.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/kit.png b/docs-antora/modules/admin/assets/images/media/kit.png
deleted file mode 100644
index 7b76d03f5c..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/kit.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lpbook.png b/docs-antora/modules/admin/assets/images/media/lpbook.png
deleted file mode 100644
index 1c2c44a4fc..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lpbook.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-address_alert_staff_view.png b/docs-antora/modules/admin/assets/images/media/lsa-address_alert_staff_view.png
deleted file mode 100644
index 4e5e19e964..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-address_alert_staff_view.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_admin.png b/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_admin.png
deleted file mode 100644
index cb24319a86..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_admin.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_fields.png b/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_fields.png
deleted file mode 100644
index 651f4148d9..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_fields.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_multiple.png b/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_multiple.png
deleted file mode 100644
index 071d0504a0..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-barcode_completion_multiple.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-statcat-1.png b/docs-antora/modules/admin/assets/images/media/lsa-statcat-1.png
deleted file mode 100644
index f54232ff5b..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-statcat-1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-statcat-2.png b/docs-antora/modules/admin/assets/images/media/lsa-statcat-2.png
deleted file mode 100644
index 63a5cb7da3..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-statcat-2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-statcat-3.png b/docs-antora/modules/admin/assets/images/media/lsa-statcat-3.png
deleted file mode 100644
index f8ed82a5da..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-statcat-3.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-statcat-3a.png b/docs-antora/modules/admin/assets/images/media/lsa-statcat-3a.png
deleted file mode 100644
index 724da96d5b..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-statcat-3a.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-statcat-4.png b/docs-antora/modules/admin/assets/images/media/lsa-statcat-4.png
deleted file mode 100644
index 99aa0946c1..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-statcat-4.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-statcat-5.png b/docs-antora/modules/admin/assets/images/media/lsa-statcat-5.png
deleted file mode 100644
index 076cb31185..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-statcat-5.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-statcat-6.png b/docs-antora/modules/admin/assets/images/media/lsa-statcat-6.png
deleted file mode 100644
index 9dcdf35263..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-statcat-6.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lsa-statcat-8.png b/docs-antora/modules/admin/assets/images/media/lsa-statcat-8.png
deleted file mode 100644
index 0d69c9d0cc..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lsa-statcat-8.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lse-1.png b/docs-antora/modules/admin/assets/images/media/lse-1.png
deleted file mode 100644
index 1dcbf28426..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lse-1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lse-2.png b/docs-antora/modules/admin/assets/images/media/lse-2.png
deleted file mode 100644
index 311eadd49e..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lse-2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lse-3.png b/docs-antora/modules/admin/assets/images/media/lse-3.png
deleted file mode 100644
index e50598422e..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lse-3.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lse-4.png b/docs-antora/modules/admin/assets/images/media/lse-4.png
deleted file mode 100644
index 286feac106..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lse-4.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/lse-5.png b/docs-antora/modules/admin/assets/images/media/lse-5.png
deleted file mode 100644
index f79a001d63..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/lse-5.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/map.png b/docs-antora/modules/admin/assets/images/media/map.png
deleted file mode 100644
index f9f804746f..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/map.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields1.jpg b/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields1.jpg
deleted file mode 100644
index 67e5731e65..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields1.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields2.jpg b/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields2.jpg
deleted file mode 100644
index 71d658b6f0..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields2.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields3.png b/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields3.png
deleted file mode 100644
index 841868d7d9..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields3.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields5.jpg b/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields5.jpg
deleted file mode 100644
index 6acb2ab3ac..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/marc_import_remove_fields5.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/microform.png b/docs-antora/modules/admin/assets/images/media/microform.png
deleted file mode 100644
index 6c4b2e1d6e..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/microform.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/modifycde_7.jpg b/docs-antora/modules/admin/assets/images/media/modifycde_7.jpg
deleted file mode 100644
index 1354086606..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/modifycde_7.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/multilingual_search1.png b/docs-antora/modules/admin/assets/images/media/multilingual_search1.png
deleted file mode 100644
index b88ca9e583..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/multilingual_search1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/multilingual_search2.PNG b/docs-antora/modules/admin/assets/images/media/multilingual_search2.PNG
deleted file mode 100644
index 90f3dd4e64..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/multilingual_search2.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/multilingual_search3.PNG b/docs-antora/modules/admin/assets/images/media/multilingual_search3.PNG
deleted file mode 100644
index f90c4ac94e..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/multilingual_search3.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/music.png b/docs-antora/modules/admin/assets/images/media/music.png
deleted file mode 100644
index 132ca40b6f..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/music.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/new_event_def.png b/docs-antora/modules/admin/assets/images/media/new_event_def.png
deleted file mode 100644
index 21fb860f32..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/new_event_def.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/noncataloged_type_add.png b/docs-antora/modules/admin/assets/images/media/noncataloged_type_add.png
deleted file mode 100644
index 9696573aa0..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/noncataloged_type_add.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/permissions_1.png b/docs-antora/modules/admin/assets/images/media/permissions_1.png
deleted file mode 100644
index b0c4e450e7..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/permissions_1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/permissions_1a.png b/docs-antora/modules/admin/assets/images/media/permissions_1a.png
deleted file mode 100644
index 9b3f81137a..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/permissions_1a.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/phonomusic.png b/docs-antora/modules/admin/assets/images/media/phonomusic.png
deleted file mode 100644
index 0f21dd2862..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/phonomusic.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/phonospoken.png b/docs-antora/modules/admin/assets/images/media/phonospoken.png
deleted file mode 100644
index 32341cb19b..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/phonospoken.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/picture.png b/docs-antora/modules/admin/assets/images/media/picture.png
deleted file mode 100644
index e523300013..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/picture.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/popbadge1_web_client.PNG b/docs-antora/modules/admin/assets/images/media/popbadge1_web_client.PNG
deleted file mode 100644
index 53a35c01ce..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/popbadge1_web_client.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/popbadge2_web_client.PNG b/docs-antora/modules/admin/assets/images/media/popbadge2_web_client.PNG
deleted file mode 100644
index b2273ee463..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/popbadge2_web_client.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/popbadge3_web_client.PNG b/docs-antora/modules/admin/assets/images/media/popbadge3_web_client.PNG
deleted file mode 100644
index bb06467104..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/popbadge3_web_client.PNG and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/profile-5.png b/docs-antora/modules/admin/assets/images/media/profile-5.png
deleted file mode 100644
index bdafbca927..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/profile-5.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/profile-6.png b/docs-antora/modules/admin/assets/images/media/profile-6.png
deleted file mode 100644
index 5e3b429aee..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/profile-6.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/profile-7.png b/docs-antora/modules/admin/assets/images/media/profile-7.png
deleted file mode 100644
index 26fec660f1..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/profile-7.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/radcatrue_5.jpg b/docs-antora/modules/admin/assets/images/media/radcatrue_5.jpg
deleted file mode 100644
index beaeefd194..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/radcatrue_5.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/radcvmcacolumns_3.jpg b/docs-antora/modules/admin/assets/images/media/radcvmcacolumns_3.jpg
deleted file mode 100644
index be065174cb..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/radcvmcacolumns_3.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/radmvcolumn_1.jpg b/docs-antora/modules/admin/assets/images/media/radmvcolumn_1.jpg
deleted file mode 100644
index 02b00944a6..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/radmvcolumn_1.jpg and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/receipt1.png b/docs-antora/modules/admin/assets/images/media/receipt1.png
deleted file mode 100644
index 1544c726b6..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/receipt1.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/receipt2.png b/docs-antora/modules/admin/assets/images/media/receipt2.png
deleted file mode 100644
index 3c53077ae0..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/receipt2.png and /dev/null differ
diff --git a/docs-antora/modules/admin/assets/images/media/score.png b/docs-antora/modules/admin/assets/images/media/score.png
deleted file mode 100644
index f7b5c7be42..0000000000
Binary files a/docs-antora/modules/admin/assets/images/media/score.png and /dev/null differ
diff --git 
a/docs-antora/modules/admin/assets/images/media/serial.png b/docs-antora/modules/admin/assets/images/media/serial.png deleted file mode 100644 index ab751d5655..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/serial.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/software.png b/docs-antora/modules/admin/assets/images/media/software.png deleted file mode 100644 index a347513012..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/software.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/storing_z3950_credentials.jpg b/docs-antora/modules/admin/assets/images/media/storing_z3950_credentials.jpg deleted file mode 100644 index fadaa9ac8a..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/storing_z3950_credentials.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/test_event_def.png b/docs-antora/modules/admin/assets/images/media/test_event_def.png deleted file mode 100644 index 313acb96c6..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/test_event_def.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/test_event_def_output.png b/docs-antora/modules/admin/assets/images/media/test_event_def_output.png deleted file mode 100644 index ced6610637..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/test_event_def_output.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/vhs.png b/docs-antora/modules/admin/assets/images/media/vhs.png deleted file mode 100644 index 3cd6780569..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/vhs.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/vid1.PNG b/docs-antora/modules/admin/assets/images/media/vid1.PNG deleted file mode 100644 index ed8955f2af..0000000000 Binary files 
a/docs-antora/modules/admin/assets/images/media/vid1.PNG and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/vid2.PNG b/docs-antora/modules/admin/assets/images/media/vid2.PNG deleted file mode 100644 index b22d6383d2..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/vid2.PNG and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/vid3.PNG b/docs-antora/modules/admin/assets/images/media/vid3.PNG deleted file mode 100644 index 75ec4d5359..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/vid3.PNG and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/vid4.PNG b/docs-antora/modules/admin/assets/images/media/vid4.PNG deleted file mode 100644 index 13690401bc..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/vid4.PNG and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/vid5.PNG b/docs-antora/modules/admin/assets/images/media/vid5.PNG deleted file mode 100644 index 1415605e6a..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/vid5.PNG and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/web_client_workstation_registration.png b/docs-antora/modules/admin/assets/images/media/web_client_workstation_registration.png deleted file mode 100644 index 7224672ca9..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/web_client_workstation_registration.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/workstation_admin-1.jpg b/docs-antora/modules/admin/assets/images/media/workstation_admin-1.jpg deleted file mode 100644 index 0406e4afda..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/workstation_admin-1.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/workstation_admin-2.jpg b/docs-antora/modules/admin/assets/images/media/workstation_admin-2.jpg deleted 
file mode 100644 index da0e056bce..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/workstation_admin-2.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/workstation_admin-3.png b/docs-antora/modules/admin/assets/images/media/workstation_admin-3.png deleted file mode 100644 index b6485d4b99..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/workstation_admin-3.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/workstation_admin-4.png b/docs-antora/modules/admin/assets/images/media/workstation_admin-4.png deleted file mode 100644 index 9d260b64bf..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/workstation_admin-4.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/workstation_admin-5.png b/docs-antora/modules/admin/assets/images/media/workstation_admin-5.png deleted file mode 100644 index fe3f3dcf82..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/workstation_admin-5.png and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/media/workstation_admin-6.jpg b/docs-antora/modules/admin/assets/images/media/workstation_admin-6.jpg deleted file mode 100644 index 4de9cf445e..0000000000 Binary files a/docs-antora/modules/admin/assets/images/media/workstation_admin-6.jpg and /dev/null differ diff --git a/docs-antora/modules/admin/assets/images/worklog.png b/docs-antora/modules/admin/assets/images/worklog.png deleted file mode 100644 index 22a1f1b2df..0000000000 Binary files a/docs-antora/modules/admin/assets/images/worklog.png and /dev/null differ diff --git a/docs-antora/modules/admin/pages/Best_Hold_Selection_Sort_Order.adoc b/docs-antora/modules/admin/pages/Best_Hold_Selection_Sort_Order.adoc deleted file mode 100644 index 680f681179..0000000000 --- a/docs-antora/modules/admin/pages/Best_Hold_Selection_Sort_Order.adoc +++ /dev/null @@ -1,56 +0,0 @@ 
-[#best_hold_selection_sort_order] -= Best-Hold Selection Sort Order = -:toc: - -Best-Hold Selection Sort Order allows libraries to configure customized rules for Evergreen to use to select the best hold to fill at opportunistic capture. When an item is captured for a hold upon check-in, Evergreen evaluates the holds in the system that the item could fill. Evergreen uses a set of rules, or a Best-Hold Selection Sort Order, to determine the best hold to fill with the item. In previous versions of Evergreen, there were two sets of rules for Evergreen to use to determine the best hold to fulfill: Traditional and FIFO (First In, First Out). Traditional uses Org Unit Proximity to identify the nearest hold to fill. FIFO follows a strict order of first-in, first-out rules. This feature allows new, custom Best-Hold Selection Sort Orders to be created. Existing Best-Hold Selection Sort Orders can also be modified. - - -== Preconfigured Best-Hold Orders == -Evergreen comes with six preconfigured Best-Hold Selection Sort Orders to choose from: - -* Traditional -* Traditional with Holds-go-home -* Traditional with Holds-always-go-home -* FIFO -* FIFO with Holds-go-home -* FIFO with Holds-always-go-home - -The Holds-go-home and Holds-always-go-home options allow libraries to determine how long they want to allow items to transit outside of the item’s home library, before it must return to its home library to fulfill any holds that are to be picked up there. Libraries can set this time limit in the library setting *Holds: Max foreign-circulation time*. The Library Settings Editor can be found under *Administration -> Local Administration -> Library Settings Editor*. - -== Create a New Best-Hold Selection Sort Order == -To create a new Best-Hold Selection Sort Order, go to *Administration -> Server Administration -> Best-Hold Selection Sort Order*. - -. Click *Create New*. -. Assign your Best-Hold Selection Sort Order a *Name*. -. 
Next, use the *Move Up* and *Move Down* buttons to arrange the fields in the order that you would like Evergreen to check when looking for the best hold to fill with an item at opportunistic capture. -. Click *Save Changes* to create your custom Best-Hold Selection Sort Order. - -image::media/best_hold_sort_order1.jpg[Best-Hold Selection Sort Order] - - -== Edit an Existing Best-Hold Selection Sort Order == -To edit an existing Best-Hold Selection Sort Order, go to *Administration -> Server Administration -> Best-Hold Selection Sort Order*. - -. Click *Edit Existing*. -. Choose the Best-Hold Selection Sort Order that you would like to edit from the drop down menu. -. Next, use the *Move Up* and *Move Down* buttons to arrange the fields in the new order that you would like Evergreen to check when looking for the best hold to fill with an item at opportunistic capture. -. Click *Save Changes* to save your edits. - -== Choosing the Best-Hold Selection Sort Order == -The Best-Hold Selection Sort Order can be set for an Org Unit in the *Library Settings Editor*. - -To select the Best-Hold Selection Sort Order that your Org Unit will use: - -. Go to *Administration -> Local Administration -> Library Settings Editor*. -. Locate the setting *Holds: Best-hold selection sort order*, and click *Edit*. -. Choose the *Context* org unit for this setting. -. Select the Best-hold selection sort order, or *Value*, from the drop down menu. -. Click *Update Setting*. 
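The selection logic that a Best-Hold Selection Sort Order configures can be sketched roughly as follows. This is an illustrative model, not Evergreen's internal code: a sort order is essentially an ordered list of fields that candidate holds are compared by, and the first hold after sorting wins. The field names and hold values here are made up for the example.

```python
# Illustrative sketch: a Best-Hold Selection Sort Order is an ordered
# list of fields; candidate holds are compared field by field, and the
# smallest tuple identifies the best hold to fill.

def best_hold(holds, sort_order):
    """Pick the best hold by comparing fields in the configured order."""
    return min(holds, key=lambda h: tuple(h[f] for f in sort_order))

holds = [
    {"id": 1, "proximity": 2, "request_time": 100},
    {"id": 2, "proximity": 0, "request_time": 300},
    {"id": 3, "proximity": 1, "request_time": 50},
]

# "Traditional" weighs proximity first; "FIFO" weighs request time first.
traditional = best_hold(holds, ["proximity", "request_time"])  # id 2
fifo = best_hold(holds, ["request_time", "proximity"])         # id 3
```

Reordering the fields with *Move Up* / *Move Down* corresponds to reordering the `sort_order` list above.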
- -image::media/best_hold_sort_order2.jpg[Library Settings Editor] - - -== Permissions to use this Feature == -To administer the custom Best-Hold Selection Sort Order interface, you need the following permission: - -* ADMIN_HOLD_CAPTURE_SORT diff --git a/docs-antora/modules/admin/pages/MARC_Import_Remove_Fields.adoc b/docs-antora/modules/admin/pages/MARC_Import_Remove_Fields.adoc deleted file mode 100644 index 0776936c68..0000000000 --- a/docs-antora/modules/admin/pages/MARC_Import_Remove_Fields.adoc +++ /dev/null @@ -1,54 +0,0 @@ -= MARC Import Remove Fields = -:toc: - -MARC Import Remove Fields allows staff to configure MARC tags to be automatically removed from bibliographic records when they are imported into Evergreen. This feature allows specific MARC tags to be removed from records that are imported through three different interfaces: - -* Cataloging -> Import Record from Z39.50 -* Cataloging -> MARC Batch Import/Export -* Acquisitions -> Load MARC Order Records - - -== Create a MARC Import Remove Fields profile == -To create a MARC Import Remove Fields profile, go to *Administration -> Server Administration -> MARC Import Remove Fields*. - -. Click *New Field Group*. -. Assign the Field Group a *Label*. This label will appear in the import interfaces. -. Assign an Org Unit *Owner*. -. Check the box next to *Always Apply* if you want Evergreen to apply this Remove Fields profile to all MARC records that are imported through the three affected interfaces. If you do not select *Always Apply*, staff will have the option to choose which Remove Fields profile to use when importing records. -. Click *Save*. -. The profile that you created will now appear in the list of MARC Import Remove Fields. -. Click on the hyperlinked *ID* number. This will bring you into the Remove Fields profile to configure the MARC tags to be removed. -. Click *New Field*. -. In the *Field*, enter the MARC tag to be removed. -. Click *Save*. -. 
Add *New Fields* until you have configured all the tags needed for this profile. -. Click *Return to Groups* to go back to the list of Remove Field profiles. - - -image::media/marc_import_remove_fields3.png[MARC Remove Fields Profile] - - -== Import Options == -The Label for each of the MARC Import Remove Fields profiles will appear on the three affected import screens. To select a profile, check the box next to the desired Label before importing the records. - -*Cataloging -> Import Record from Z39.50* - -image::media/marc_import_remove_fields1.jpg[Import Record from Z39.50] -{nbsp} - -*Cataloging -> MARC Batch Import/Export* - -image::media/marc_import_remove_fields2.jpg[MARC Batch Import/Export] -{nbsp} - -*Acquisitions -> Load MARC Order Records* - -image::media/marc_import_remove_fields5.jpg[Load MARC Order Records] - - -== Permissions to use this Feature == -The following permissions are required to use this feature: - -* CREATE_IMPORT_TRASH_FIELD -* UPDATE_IMPORT_TRASH_FIELD -* DELETE_IMPORT_TRASH_FIELD diff --git a/docs-antora/modules/admin/pages/MARC_RAD_MVF_CRA.adoc b/docs-antora/modules/admin/pages/MARC_RAD_MVF_CRA.adoc deleted file mode 100644 index ded3e27ccb..0000000000 --- a/docs-antora/modules/admin/pages/MARC_RAD_MVF_CRA.adoc +++ /dev/null @@ -1,214 +0,0 @@ -= MARC Record Attributes = -:toc: - -The MARC Record Attribute Definitions support the ingesting, indexing, searching, filtering, and delivering of bibliographic record attributes. - -To access the MARC Record Attributes, click *Administration* -> *Server Administration* -> *MARC Record Attributes* - -== Managing Fixed Field Drop-down Context Menus == - -indexterm:[Fixed fields] -indexterm:[MARC editor,configuring] - -The MARC Editor includes Fixed Field Drop-down Context Menus, which make it easier for catalogers to select the right values for fixed fields -in both Bibliographic and Authority records. 
You can use the MARC Record Attributes interface to modify these dropdowns to make them better -suited for catalogers in your consortium. - -To edit these menus, you can follow these steps: - -. Click *Administration -> Server Administration -> MARC Record Attributes*. -. If there's not already a dropdown for your fixed field, click *New Attr. Definition* and fill out the form using other fixed field -attribute definitions as a model. -. If you can find an attribute definition for your fixed field in the list, click the "Manage" link in the Coded Value Maps column. -. Click *New Map*. -. In the SVF Attribute field, type the name of the Attribute you identified in steps 2-3. -. In the code field, type the actual value that will go into the fixed field (typically 1-4 characters). You can add an option to keep that fixed field empty by typing a space into this field. -. In the value field, type the short description you'd like your catalogers to see in the dropdown menu. -. Optional: add a longer description of this value in the Description field. -. Check the OPAC Visible checkbox. - - - -== Multi Valued Fields and Composite Record Attributes == - -*Multi Valued Fields* and *Composite Record Attributes* expands upon the Record Attribute Definitions feature to include capturing all occurrences of multi-valued elements in a record. *Multi Valued Fields* allows users to say that a bibliographic record contains multiple entries for a particular record attribute. *Composite Record Attributes* supports the application of a more complicated and nested form of structure to a record attribute definition. - -=== Multi Valued Fields === - -Multi Valued Fields allows for the capturing of multi-valued elements of a bibliographic record. Through the use of Multi Valued Fields, Evergreen recognizes that records are capable of storing multiple values. Multi Valued Fields are represented in the Record Attribute Definitions interface by a column named *Multi-valued?*. 
With *Multi-valued?* set to *True*, Evergreen will recognize the bibliographic records in the database that have multiple values mapping to the record attribute definition; it will also track and search on those values in the catalog. This feature will be particularly handy for bibliographic records representing a Blu-ray / DVD combo pack, since both format types can be displayed in the OPAC (if both formats were cataloged in the record). - -image::media/radmvcolumn_1.jpg[] - -To edit an existing record attribute definition and set the *Multi-valued?* field to *True*: - -. Click *Administration* on the menu bar -. Click *Server Administration*, then click *MARC Record Attributes* -. Double-click on the row of the record attribute definition that needs to be edited -. Select the *Multi-valued?* checkbox -. Click *Save* - -image::media/editrad_2.jpg[] - -=== Composite Record Attributes === - -Composite Record Attributes build on top of Evergreen’s ability to support record attributes that contain multiple entries. The Composite Record Attributes feature enables administrators to take a record attribute definition and apply a more complicated and nested form of structure to that particular record attribute. Two new Record Attribute Definitions columns have been added to facilitate the management of the Composite Record Attributes. The *Composite attribute?* column designates whether or not a particular record attribute definition is also a composite record attribute. The *Coded Value Maps* column contains a *Manage* link in each row that allows users to manage the Coded Value Maps for the record attributes. - -image::media/radcvmcacolumns_3.jpg[] - -=== Coded Value Maps === - -To manage the Coded Value Maps of a particular record attribute definition, click the *Manage* link located under the Coded Value Maps column for that record attribute. This will open the Coded Value Maps interface. 
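The *Multi-valued?* behavior described above can be sketched in miniature. This is a hypothetical model (record names and attribute values invented for illustration): one record attribute holds several values, and a search on any one of them matches the record, which is how a Blu-ray / DVD combo pack surfaces under either format.

```python
# Hypothetical sketch of a multi-valued record attribute: the
# "icon_format" attribute stores a set of values per record, and a
# search matches when any stored value equals the search value.

records = {
    # A Blu-ray / DVD combo pack cataloged with both formats:
    "combo_pack": {"icon_format": {"blu-ray", "dvd"}},
    "plain_dvd": {"icon_format": {"dvd"}},
}

def search(attr, value):
    """Return record ids whose multi-valued attribute contains value."""
    return sorted(rid for rid, attrs in records.items()
                  if value in attrs.get(attr, set()))

# Searching either format finds the combo pack.
assert search("icon_format", "blu-ray") == ["combo_pack"]
assert search("icon_format", "dvd") == ["combo_pack", "plain_dvd"]
```

With *Multi-valued?* set to *False*, only a single value per record would be tracked and the combo pack could match only one of its two formats.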
What administrators see on the Coded Value Maps screen does not define the structure of the composite record attribute; they must go into the *Composite Attribute Entry Definitions* screen to view this information. - -image::media/cvmpage_4.jpg[] - -Within the Coded Value Maps screen, there is a column named *Composite Definition*. The *Composite Definition* column contains a *Manage* link that allows users to configure and to edit Composite Record Attribute definitions. In order to enable the *Manage* link (i.e. have the *Manage* link display as an option under the *Composite Definition* column), the *Composite attribute?* column (located back in the Record Attributes Definition page) must be set to *True*. - -To edit an existing record attribute definition and set the *Composite attribute?* field to True: - -. Click *Administration* on the menu bar -. Click *Server Administration*, then click *MARC Record Attributes* -. Double-click on the row of the record attribute definition that needs to be edited -. Select the *Composite attribute?* checkbox -. Click *Save* - -image::media/radcatrue_5.jpg[] - -Now that the *Composite attribute?* value is set to *True*, click on the *Manage* link located under the *Coded Value Maps* column for the edited record attribute definition. Back in the Coded Value Maps screen, a *Manage* link should now be exposed under the *Composite Definition* column. Clicking on a specific coded value’s *Manage* link will take the user into the *Composite Attribute Entry Definitions* screen for that specified coded value. - -=== Composite Attribute Entry Definitions === - -The Composite Attribute Entry Definitions screen is where administrators can locally define and edit Composite Record Attributes for specific coded values. 
For example: administrators can further refine and distinguish the way a “book” should be defined within their database, by bringing together the right combination of attributes to truly define what a “book” is in their database. - -The top of the Composite Attribute Entry Definitions screen shows a parenthetically defined view of the *Composite Data Expression*. Below the Composite Data Expression is the *Composite Data Tree*. The Composite Data Tree is structured around Boolean Operators, including support for NOT operations. This nested form can be as deeply defined as it needs to be within the site’s database. - -image::media/caed_6.jpg[] - -To modify the *Composite Attribute Entry Definition*, any Boolean Operator can be deleted or have a coded value appended to it. The appended coded value can be any number of Coded Value Maps from any other Record Attribute Definition. So, administrators can choose from all the other existing record attribute definitions and create new nested structures to define entirely new data types. - -To modify the *Composite Attribute Entry Definition*: - -. Click *Add Child* for the specific Boolean Operator that needs to be modified, and a new window will open -. Select which *Record Attribute* needs to be represented in the structure under that particular Boolean Operator -. Select the *Attribute Type* from the dropdown options -. Select the *Value* of the Attribute Type from the dropdown options (dropdown options will be based on the Attribute Type selected) -. Click *Submit* -. The *Composite Data Expression* should now include the modification -. Once all modifications have been made, click *Save Changes* on the Composite Attribute Entry Definitions page - -image::media/modifycde_7.jpg[] - -=== Search and Icon Formats === - -==== Search and Icon Formats ==== - -The table below shows all the search and icon formats. In some cases they vary slightly, with the icon format being more restrictive. 
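The nested Boolean structure of a Composite Attribute Entry Definition can be sketched as a small tree evaluator. This is an illustrative model only (the node shape and field names are assumptions, not Evergreen's data model); the attribute values in the "book" example are taken from the format table in this document (Item Type a,t; electronic Item Forms o,q,s).

```python
# Sketch of evaluating a nested AND/OR/NOT composite definition whose
# leaves test individual record attribute values.

def evaluate(node, record):
    op = node["op"]
    if op == "attr":                      # leaf: attribute test
        return record.get(node["name"]) in node["values"]
    kids = [evaluate(c, record) for c in node["children"]]
    if op == "AND":
        return all(kids)
    if op == "OR":
        return any(kids)
    if op == "NOT":
        return not kids[0]
    raise ValueError(op)

# "book" ~ (Item Type in a,t) AND NOT (an electronic Item Form):
book_def = {"op": "AND", "children": [
    {"op": "attr", "name": "item_type", "values": {"a", "t"}},
    {"op": "NOT", "children": [
        {"op": "attr", "name": "item_form", "values": {"o", "q", "s"}},
    ]},
]}

assert evaluate(book_def, {"item_type": "a", "item_form": "d"}) is True
assert evaluate(book_def, {"item_type": "a", "item_form": "o"}) is False
```

Clicking *Add Child* on a Boolean Operator corresponds to appending another leaf or subtree under that node.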
This is so that things such as a search for "All Books" will include Large Print books yet Large Print books will not show both a "Book" and "Large Print Book" icon. - -In the table below "Icon Format Only" portions of the definition are italicized and in square brackets: [_Icon format only data_] - -The definitions use the <> at the end of this document. - -[width="60%", cols="<,<,<"] -|==== -|*Icon* |*Search Label/Icon Label* |*Definition* -|image:media/blu-ray.png[] | Blu-ray | VR Format:s -|image:media/book.png[] | All books/Book | Item Type: a,t - -Bib Level: a,c,d,m - -NOT: Item Form: a,b,c,f,o,q,r,s _[,d]_ -|image:media/braille.png[] | Braille | Item Type: a - -Item Form: f -|image:media/casaudiobook.png[] | Cassette audiobook | Item Type: i - -SR Format: l -|image:media/casmusic.png[] | Audiocassette music recording | Item Type: j - -SR Format: l -|image:media/cdaudiobook.png[] | CD audiobook | Item Type: i - -SR Format: f -|image:media/cdmusic.png[] | CD music recording | Item Type: j - -SR Format: f -|image:media/dvd.png[] | DVD | VR Format: v -|image:media/eaudio.png[] | E-audio | Item Type: i - -Item Form: o,q,s -|image:media/ebook.png[]| E-book | Item Type: a,t - -Bib Level: a,c,d,m - -Item Form: o,q,s -|image:media/equip.png[] | Equipment, games, toys | Item Type: r -|image:media/evideo.png[] | E-video | Item Type: g - -Item Form: o,q,s -|image:media/kit.png[] | Kit | Item Type: o,p -|image:media/lpbook.png[] | Large print book | Item Type: a,t - -Bib Level: a,c,d,m - -Item Form: d -|image:media/map.png[] | Map | Item Type: e,f -|image:media/microform.png[] | Microform | Item Form: a,b,c -|image:media/music.png[] | All music/Music sound recording (unknown format) | Item Type: j - -_[NOT: SR Format: a,b,c,d,e,f,l]_ -|image:media/phonomusic.png[] | Phonograph music recording | Item Type: j - -SR Format: a,b,c,d,e -|image:media/phonospoken.png[] | Phonograph spoken recording | Item Type: i - -SR Format: a,b,c,d,e -|image:media/picture.png[] | Picture | 
Item type: k -|image:media/score.png[] | Music score | Item type: c,d -|image:media/serial.png[] | Serials and magazines | Bib Level: b,s -|image:media/software.png[] | Software and video games | Item Type: m -|image:media/vhs.png[] | VHS | VR Format: b -|==== - -[[anchor-2]] -==== Record Types ==== - -This table shows the record types currently used in determining elements of search and icon formats. They are based on a combination of the MARC Record Type (LDR 06) and Bibliographic Level (LDR 07) fixed fields. - -[width="30%", cols="<,<,<"] -|==== -| *Record Type* | *LDR 06* | *LDR 07* -| BKS | a,t | a,c,d,m -| MAP | e,f | a,b,c,d,i,m,s -| MIX | p | c,d,i -| REC | i,j | a,b,c,d,i,m,s -| SCO | c,d | a,b,c,d,i,m,s -| SER | a | b,i,s -| VIS | g,k,r,o | a,b,c,d,i,m,s -|==== - -[[anchor-1]] -===== Fixed Field Types ===== -This table details the fixed field types currently used for determining search and icon formats. See the <> section above for how the system determines them. - -[width="40%", cols="<,<,<,<"] -|==== -| *Label* | *Record Type* | *Tag* | *Position* -|Item Type | ANY | LDR | 06 -|Bib Level | ANY | LDR | 07 -.14+^.^| Item Format .2+^.^| BKS | 006 | 06 -| 008 | 23 -.2+^.^| MAP | 006 | 12 -|008 | 29 -.2+^.^| MIX | 006 | 06 -| 008 | 23 -.2+^.^| REC | 006 | 06 -| 008 | 23 -.2+^.^| SCO | 006 |06 -| 008 | 23 -.2+^.^| SER | 006 | 06 -| 008 | 23 -.2+^.^| VIS | 006 | 12 -| 008 | 29 -| SR Format | ANY | 007s | 03 -| VR Format | ANY | 007v | 04 -|==== - diff --git a/docs-antora/modules/admin/pages/Org_Unit_Proximity_Adjustments.adoc b/docs-antora/modules/admin/pages/Org_Unit_Proximity_Adjustments.adoc deleted file mode 100644 index a091e78e97..0000000000 --- a/docs-antora/modules/admin/pages/Org_Unit_Proximity_Adjustments.adoc +++ /dev/null @@ -1,49 +0,0 @@ -= Org Unit Proximity Adjustments = -:toc: - -== Org Unit Proximity Adjustments == - -Org Unit Proximity Adjustments allow libraries to indicate lending preferences for holds between libraries in -an Evergreen 
consortium. When a hold is placed in Evergreen, the hold targeter looks for items that can fill -the hold. One factor that the hold targeter uses to choose the best item to fill the hold is the distance, -or proximity, between the capturing library and the pickup library for the request. The proximity is based -on the number of steps through the org tree that it takes to get from one org unit to another. - -image::media/Org_Unit_Prox_Adj1.png[Org Unit Proximity] -Org Unit Proximity between BR1 and BR4 = 4 - -Org Unit Proximity Adjustments allow libraries to customize the distances between org units, which provides -more control over which libraries are looked at when targeting copies to fill a hold. Evergreen can also be -configured to take Org Unit Proximity Adjustments into account during opportunistic capture through the -creation of a custom Best-Hold Selection Sort Order. See documentation xref:#best_hold_selection_sort_order[here] -for more information on Best-Hold Selection Sort Order. - -An Org Unit Proximity Adjustment can be created to tell Evergreen which libraries to look at first for items to fill a hold or which library to look at last. This may be useful for accounting for true transit costs or physical distances between libraries. It can also be used to identify libraries that have special lending agreements or preferences. Org Unit Proximity Adjustments can be created for all holds between two org units, or they can be created for holds on specific Shelving Locations and Circulation Modifiers. - -== Absolute and Relative Adjustments == -Two types of proximity adjustments can be created in Evergreen: Absolute adjustments and Relative adjustments. - -Absolute proximity adjustments allow you to replace the default proximity distance between two org units. An absolute adjustment could be made to tell the hold targeter to look at a specific library or library system first to find an item to fill a hold, before looking elsewhere in the consortium. 
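The default proximity calculation described above can be sketched as counting steps through the org tree. The tree shape below is assumed for illustration; only the BR1-to-BR4 distance of 4 comes from the example in this document.

```python
# Sketch of default Org Unit Proximity: the number of steps through
# the org tree between two org units (tree shape assumed).

parent = {  # child -> parent in a small assumed org tree
    "CONS": None,
    "SYS1": "CONS", "SYS2": "CONS",
    "BR1": "SYS1", "BR2": "SYS1",
    "BR3": "SYS2", "BR4": "SYS2",
}

def ancestors(ou):
    path = []
    while ou is not None:
        path.append(ou)
        ou = parent[ou]
    return path

def proximity(a, b):
    """Steps from a up to the nearest common ancestor, then down to b."""
    pa, pb = ancestors(a), ancestors(b)
    common = next(x for x in pa if x in pb)
    return pa.index(common) + pb.index(common)

assert proximity("BR1", "BR4") == 4   # BR1 -> SYS1 -> CONS -> SYS2 -> BR4
assert proximity("BR1", "BR2") == 2   # siblings under the same system
```

Proximity adjustments then override or shift these computed distances for specific pairs of org units.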
- -Relative proximity adjustments allow the proximity between org units to be treated as closer or farther from one another than the default distance. A relative proximity adjustment could be used to identify a library that has limited hours or slow transit times to tell the hold targeter to look at that library last for items to fill a hold. - -== Create an Org Unit Proximity Adjustment == -.To create an Org Unit Proximity Adjustment between two libraries: -. In the Administration menu choose *Server Administration -> Org Unit Proximity Adjustments*. -. Click *New OU Proximity Adjustment*. -. Choose an *Item Circ Lib* from the drop down menu. -. Choose a *Hold Request Lib* from the drop down menu. -. If this proximity adjustment applies to a specific shelving location, select the appropriate *Shelving Location* from the drop down menu. -. If this proximity adjustment applies to a specific material type, select the appropriate *Circ Modifier* from the drop down menu. -. If this is an Absolute proximity adjustment, check the box next to *Absolute adjustment?* If you leave the box blank, a relative proximity adjustment will be applied. -. Enter the *Proximity Adjustment* between the *Item Circulating Library* and the *Request Library*. -. Click *Save*. - -image::media/Org_Unit_Prox_Adj2.png[Org Unit Proximity Adjustment] - -This will create a one-way proximity adjustment between Org Units. In this example, this adjustment will apply to items requested by a patron at BR4 and filled at BR1. To create the reciprocal proximity adjustment, for items requested at BR1 and filled at BR4, create a second proximity adjustment between the two Org Units. 
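The difference between the two adjustment types can be sketched as follows. This is an illustrative model of the semantics described above (an absolute adjustment replaces the default distance, a relative adjustment shifts it); the function name and exact sign conventions are assumptions for the example.

```python
# Sketch: absolute adjustments replace the default proximity between
# two org units; relative adjustments are added to it.

def effective_proximity(default_prox, adjustment=None, absolute=False):
    if adjustment is None:
        return default_prox
    return adjustment if absolute else default_prox + adjustment

# Default distance of 4 between two branches:
assert effective_proximity(4) == 4
# An absolute adjustment of 1 makes them look adjacent to the targeter:
assert effective_proximity(4, adjustment=1, absolute=True) == 1
# A relative adjustment of +2 makes the library look farther away,
# so it is considered later when filling holds:
assert effective_proximity(4, adjustment=2) == 6
```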
- -== Permissions to use this Feature == -To create Org Unit Proximity Adjustments, you will need the following permission: - -* ADMIN_PROXIMITY_ADJUSTMENT diff --git a/docs-antora/modules/admin/pages/SMS_messaging.adoc b/docs-antora/modules/admin/pages/SMS_messaging.adoc deleted file mode 100644 index 8087a0c085..0000000000 --- a/docs-antora/modules/admin/pages/SMS_messaging.adoc +++ /dev/null @@ -1,125 +0,0 @@ -= SMS Text Messaging = -:toc: - -The SMS Text Messaging feature enables users to receive hold notices via text message. Users can opt in to this hold notification as their default setting for all holds, or they -can receive specific hold notifications via text message. Users can also send call numbers and item locations via text message. - -[#administrative_setup] -== Administrative Setup == - -You cannot receive text messages from Evergreen by default. You must enable this feature to receive hold notices and item information from Evergreen via text message. - -=== Enable Text Messages === - -. Click *Administration* -> *Local Administration* -> *Library Settings Editor.* -. Select the setting, *Enable features that send SMS text messages.* -. Set the value to *True,* and click *Update Setting.* - -image::media/SMS_Text_Messaging1.png[Library Setting to enable SMS] - -=== Authenticate Patrons === - -By default, you must be logged into your OPAC account to send a text message -from Evergreen. However, if you turn on this setting, you can text message copy -information without having to log in to your OPAC account. - -To disable the patron login requirement: - -. Click *Administration* -> *Local Administration* -> *Library Settings Editor.* -. Select the setting, *Disable auth requirement for texting call numbers*. -. 
Set the value to *True,* and click *Update Setting.* - -image::media/SMS_Text_Messaging2.png[Library Setting to disable SMS auth/login requirement] - -=== Configure SMS Carriers === - -A list of SMS carriers that can transmit text messages to users is available in the staff client. Library staff can edit this list, or add new carriers. - -To add or edit SMS carriers: - -. Click *Administration* -> *Server Administration* -> *SMS Carriers*. -. To add a new carrier, click the *New Carrier* button in the top right corner of the screen. To edit an existing carrier, double click in any white space in the carrier's row. -+ -image::media/SMS_Text_Messaging3.jpg[SMS_Text_Messaging3] -+ -. Enter a (geographical) *Region*. -. Enter the carrier's *Name*. -. Enter an *Email Gateway.* The SMS carrier can provide you with the content for this field. The $number field is converted to the user's phone number when the text message is generated. -. Check the *Active* box to use this SMS Carrier. - -image::media/SMS_Text_Messaging4.jpg[SMS_Text_Messaging4] - -=== Configure Text Message Templates === - -Library staff control the content and format of text messages through the templates in Notifications/Action Triggers. Patrons cannot add free text to their text messages. - -To configure the text of the SMS text message: - -. Click *Administration* -> *Local Administration* -> *Notifications/Action Triggers.* -. Create a new A/T and template, or use or modify an existing template. For example, a default template, "Hold Ready for Pickup SMS Notification," notifies users that the hold is ready for pickup. -+ -image::media/SMS_Text_Messaging5.png[SMS Notification Triggers list] -+ -. You can use the default template, or you can edit the template and add -content specific to your library. Click the hyperlinked name to edit the -Event Environment and Event Parameters. Or double-click the row to edit the -hold notice. 
-+ -image::media/SMS_Text_Messaging6.png[Hold Ready SMS Trigger Event Definition] - -== Receiving Holds Notices via Text Message == - -You can receive notification that your hold is ready for pickup from a text message that is sent to your mobile phone. - -. Log in to your account. -+ -image::media/SMS_Text_Messaging12.jpg[SMS_Text_Messaging12] -+ -. Search the catalog. -. Retrieve a record, and click the *Place Hold* link. -. Select the option to retrieve hold notification via text message. -. Choose an SMS Carrier from the drop-down menu. NOTE: You can enter your SMS carrier and phone number into your *Account Preferences* to skip steps five and six. -. Enter a phone number. -. Click *Submit.* - -image::media/SMS_Text_Messaging13.jpg[SMS_Text_Messaging13] - -[[Sending_Copy_Details_via_Text_Message]] -== Sending Copy Details via Text Message == - -You can search the catalog for an item, and, after retrieving results -for the item, click a hyperlink to send the copy information in a text -message. - -. Log in to your account in the OPAC. NOTE: If you have disabled the -setting that requires patron login, then you do not have to log in to -your account to send text messages. See -xref:#administrative_setup[Administrative Setup] for more information. -+ -image::media/SMS_Text_Messaging7.jpg[SMS_Text_Messaging7] -+ -. Search the catalog, and retrieve a title with copies. -. Click the *Text* link next to the call number. -+ -image::media/SMS_Text_Messaging8.png[Screenshot: Link to text copy details via SMS] -+ -. The text of the SMS Text Message appears. -+ -image::media/SMS_Text_Messaging9.png[Screenshot: Text message preview with submit form] -+ -. Choose an SMS Carrier from the drop-down menu. NOTE: You can enter -your SMS carrier and phone number into your *Account Preferences* to -skip steps five and six. -. Enter a phone number. -. Click *Submit*. NOTE: Message and data rates may apply. -. 
The number and carrier are converted to an email address, and the text -message is sent to your mobile phone. The following confirmation message -will appear. -+ -image::media/SMS_Text_Messaging11.png[Screenshot: Confirmation page that SMS message was sent] - -*Permissions to use this Feature* - -ADMIN_SMS_CARRIER - Enables users to add/create/delete SMS Carrier entries. - - diff --git a/docs-antora/modules/admin/pages/_attributes.adoc b/docs-antora/modules/admin/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/admin/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/admin/pages/acquisitions_admin.adoc b/docs-antora/modules/admin/pages/acquisitions_admin.adoc deleted file mode 100644 index d23433c62d..0000000000 --- a/docs-antora/modules/admin/pages/acquisitions_admin.adoc +++ /dev/null @@ -1,961 +0,0 @@ -= Acquisitions Administration = -:toc: - - -== Acquisitions Settings == - -indexterm:[acquisitions,permissions] - -Several settings in the Library Settings area of the Administration module pertain to -functions in the Acquisitions module. You can access these settings by clicking -_Administration -> Local Administration -> Library Settings Editor_. - -* CAT: Delete bib if all copies are deleted via Acquisitions lineitem -cancellation - If you cancel a line item, then all of the on order copies in the -catalog are deleted. If, when you cancel a line item, you also want to delete -the bib record, then set this setting to TRUE. -* Allow funds to be rolled over without bringing the money along - enables you -to move a fund's encumbrances from one year to the next without moving unspent -money. Unused money is not added to the next year's fund and is not available -for use. -* Allows patrons to create automatic holds from purchase requests. 
-* Default circulation modifier - This modifier would be applied to items that -are created in the acquisitions module -* Default copy location - This copy location would be applied to items that are -created in the acquisitions module -* Fund Spending Limit for Block - When the amount remaining in the fund, -including spent money and encumbrances, goes below this percentage, attempts to -spend from the fund will be blocked. -* Fund Spending Limit for Warning - When the amount remaining in the fund, -including spent money and encumbrances, goes below this percentage, attempts to -spend from the fund will result in a warning to the staff. -* Rollover Distribution Formulae Funds - When set to true, during fiscal -rollover, all distribution formulae will update to use new funds. -* Set copy creator as receiver - When receiving a copy in acquisitions, set the -copy "creator" to be the staff that received the copy -* Temporary barcode prefix - Temporary barcode prefix for items that are created -in the acquisitions module -* Temporary call number prefix - Temporary call number prefix for items that are -created in the acquisitions module - -== Cancel/Delay reasons == - -indexterm:[acquisitions,purchase order,cancellation] -indexterm:[acquisitions,line item,cancellation] - -The Cancel reasons link enables you to predefine the reasons for which a line -item or a PO can be cancelled. A default list of reasons appears, but you can -add custom reasons to this list. Applying the cancel reason will prevent the -item from appearing in a claims list and will allow you to cancel debits -associated with the purchase. Cancel reasons also enable you to delay -a purchase. For example, you could create a cancel reason of 'back ordered,' and -you could choose to keep the debits associated with the purchase. - -=== Create a cancel/delay reason === - -. To add a new cancel reason, click _Administration -> Acquisitions Administration -> -Cancel reasons_. - -. Click _New Cancel Reason_. 
- -. Select a using library from the drop-down menu. The using library indicates -the organizational units whose staff can use this cancel reason. This menu is -populated with the shortnames that you created for your libraries in the -organizational units tree (See Administration -> Server Administration -> Organizational -Units). - -. Create a label for the cancel reason. This label will appear when you select a -cancel reason on an item or a PO. - -. Create a description of the cancel reason. This is a free text field and can -comprise any text of your choosing. - -. If you want to retain the debits associated with the cancelled purchase, click -the box adjacent to _Keep Debits_. - -. Click _Save_. - -=== Delete a custom cancel/delay reason === - -You can delete a custom cancel reason. - -. Select the checkbox for the custom cancel reason that should be deleted. - -. Click the _Delete Selected_ button. - -[TIP] -You cannot select the checkbox for any of the default cancel reasons because the -system expects those reasons to be available to handle EDI order responses. - - -== Claiming == - -indexterm:[acquisitions,claiming] - -Currently, all claiming is manual, but the admin module enables you to build -claim policies and specify the action(s) that users should take to claim items. - -=== Create a claim policy === - -The claim policy link enables you to name the claim policy and specify the -organization that owns it. - -. To create a claim policy, click _Administration -> Acquisitions Administration -> -Claim Policies_. -. Create a claim policy name. No limits exist on the number of characters that -can be entered in this field. -. Select an org unit from the drop-down menu. The org unit indicates the -organizational units whose staff can use this claim policy. This menu is -populated with the shortnames that you created for your libraries in the -organizational units tree (See Administration -> Server Administration -> Organizational -Units). 
-+ -[NOTE] -The rule of parental inheritance applies to this list. -+ -. Enter a description. No limits exist on the number of characters that can be -entered in this field. -. Click _Save_. - -=== Create a claim type === - -The claim type link enables you to specify the reason for a type of claim. - -. To create a claim type, click _Administration -> Acquisitions Administration -> -Claim types_. -. Create a claim type. No limits exist on the number of characters that can be -entered in this field. -. Select an org unit from the drop-down menu. The org unit indicates the -organizational units whose staff can use this claim type. This menu is populated -with the shortnames that you created for your libraries in the organizational -units tree (See Administration -> Server Administration -> Organizational Units). -+ -[NOTE] -The rule of parental inheritance applies to this list. -+ -. Enter a description. No limits exist on the number of characters that can be -entered in this field. -. Click _Save_. - -=== Create a claim event type === - -The claim event type describes the physical action that should occur when an -item needs to be claimed. For example, the user should notify the vendor via -email that the library is claiming an item. - -. To access the claim event types, click _Administration -> Acquisitions Administration -> -Claim event type_. -. Enter a code for the claim event type. No limits exist on the number of -characters that can be entered in this field. -. Select an org unit from the drop-down menu. The org unit indicates the -organizational units whose staff can use this event type. This menu is populated -with the shortnames that you created for your libraries in the organizational -units tree (See Administration -> Server Administration -> Organizational Units). -+ -[NOTE] -The rule of parental inheritance applies to this list. -+ -. Enter a description. No limits exist on the number of characters that can be -entered in this field. -. 
If this claim is initiated by the user, then check the box adjacent to Library -Initiated. -+ -[NOTE] -Currently, all claims are initiated by a user. The ILS cannot automatically -claim an issue. -+ -. Click _Save_. - -=== Create a claim policy action === - -The claim policy action enables you to specify how long a user should wait -before claiming the item. - -. To access claim policy actions, click _Administration -> Acquisitions Administration -> -Claim Policy Actions_. - -. Select an Action (Event Type) from the drop-down menu. - -. Enter an action interval. This field indicates how long a user should wait -before claiming the item. - -. In the Claim Policy ID field, select a claim policy from the drop-down menu. - -. Click _Save_. - -[NOTE] -You can create claim cycles by adding multiple claim policy actions to a claim - policy. - -== Currency Types == - -indexterm:[acquisitions,currency types] - -Currency types can be created and applied to funds in the administrative module. -When a fund is applied to a copy or line item for purchase, the item will be -purchased in the currency associated with that fund. - - - -=== Create a currency type === - -. To create a new currency type, click _Administration -> Acquisitions Administration -> -Currency types_. - -. Enter the currency code. No limits exist on the number of characters that can -be entered in this field. - -. Enter the name of the currency type in Currency Label field. No limits exist -on the number of characters that can be entered in this field. - -. Click Save. - - - -=== Edit a currency type === - -. To edit a currency type, click your cursor in the row that you want to edit. -The row will turn blue. - -. Double click. The pop-up box will appear, and you can edit the fields. - -. After making changes, click Save. - -[NOTE] -From the currency types interface, you can delete currencies that have never -been applied to funds or used to make purchases. 
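Because each fund carries a currency, a purchase from a provider whose currency differs from the fund's involves a conversion at the configured exchange rate (see the Exchange Rates section later in this chapter). A minimal sketch of that arithmetic, with an illustrative rate table rather than real Evergreen data:

```python
# Hypothetical sketch of converting a provider's price into the
# currency of the purchasing fund; the rate table is illustrative.
exchange_rates = {("USD", "CAD"): 1.35, ("CAD", "USD"): 0.74}

def convert(amount, from_currency, to_currency, rates=exchange_rates):
    """Convert an amount between currencies using a flat rate table."""
    if from_currency == to_currency:
        return round(amount, 2)
    return round(amount * rates[(from_currency, to_currency)], 2)

# A $20.00 USD list price debited against a fund that uses CAD:
fund_debit = convert(20.00, "USD", "CAD")
```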
- -== Distribution Formulas == - -indexterm:[acquisitions,distribution formulas, templates] - -Distribution formulas allow you to specify the number of copies that should be -distributed to specific branches. They can also serve as templates allowing you -to predefine settings for your copies. You can create and reuse formulas as -needed. - -=== Create a distribution formula === - -. Click _Administration -> Acquisitions Administration -> Distribution Formulas_. -. Click _New Formula_. -. Enter a Formula Name. No limits exist on the number of characters that can be -entered in this field. -. Choose a Formula Owner from the drop-down menu. The Formula Owner indicates -the organizational units whose staff can use this formula. This menu is -populated with the shortnames that you created for your libraries in the -organizational units tree (See Administration -> Server Administration -> Organizational -Units). -+ -[NOTE] -The rule of parental inheritance applies to this list. -+ -. Ignore the Skip Count field which is currently not used. -. Click _Save_. -. Click _New Entry_. -. Select an Owning Library from the drop-down menu. This indicates the branch -that will receive the items. This menu is populated with the shortnames that you -created for your libraries in the organizational units tree (See _Administration -> -Server Administration -> Organizational Units_). -. Select/enter any of the following copy details you want to predefine in the -distribution formula. -* Copy Location -* Fund -* Circ Modifier -* Collection Code -. In the Item Count field, enter the number of items that should be distributed -to the branch. You can enter the number or use the arrows on the right side of -the field. -. Click _Apply Changes_. The screen will reload. -. To view the changes to your formula, click Administration -> -Acquisitions Administration -> Distribution Formulas. The item_count will reflect -the entries to your distribution formula. 
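The effect of a formula's entries on the copies created can be sketched as follows (hypothetical data structures, not Evergreen's actual schema):

```python
# Hypothetical sketch of expanding a distribution formula's entries
# into copy stubs; field names are illustrative only.
formula = [
    {"owning_lib": "BR1", "copy_location": "Stacks", "item_count": 3},
    {"owning_lib": "BR2", "copy_location": "Stacks", "item_count": 2},
]

def total_items(entries):
    """Total number of copies the formula distributes across branches."""
    return sum(e["item_count"] for e in entries)

def copies_for(entries):
    """Expand entries into one copy stub per item, per owning library."""
    return [
        {"owning_lib": e["owning_lib"], "copy_location": e["copy_location"]}
        for e in entries
        for _ in range(e["item_count"])
    ]
```

Applying this example formula would yield five copies: three owned by BR1 and two by BR2, all predefined with the Stacks copy location.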
- -[NOTE] -To edit the Formula Name, click the hyperlinked name of the formula in the top -left corner. A pop-up box will enable you to enter a new formula name. - -=== Edit a distribution formula === - -To edit a distribution formula, click the hyperlinked title of the formula. - -== Electronic Data Interchange == -indexterm:[acquisitions,EDI,accounts] -indexterm:[EDI,accounts] - -Many libraries use Electronic Data Interchange (EDI) accounts to send purchase orders and receive invoices - from providers electronically. In Evergreen users can set up EDI accounts and manage EDI messages in - the admin module. EDI messages and notes can be viewed in the acquisitions module. See -also the command line system administration manual, which includes some initial setup steps that are -required for use of EDI. - -=== Entering SANs (Standard Address Numbers) === - -For EDI to work, your library must have a SAN, and each of your providers must supply you with their SAN. - -A SAN (Standard Address Number) is a unique 7-digit number that identifies your library. - -==== Entering a Library's SAN ==== - -These steps only need to be done once per library. - -. In Evergreen select _Administration_ -> _Server Administration_ -> _Organizational Units_ -. Find your library in the tree on the left side of the page and click on it to open the settings. -+ -[NOTE] -Multi-branch library systems will see an entry for each branch but should select their system's -top organization unit. -+ -. Click on the _Address_ tab. -. Click on the _Mailing Address_ tab. -. Enter your library's SAN in the field labeled _SAN_. -. Click _Save_. - -image::media/enter-library-san-2.png[Enter Library SAN] - - -==== Entering a Provider's SAN ==== - -These steps need to be repeated for every provider with which EDI is used. - -. In Evergreen select _Administration_ -> _Acquisitions Administration_ -> _Providers_. -. Click the hyperlinked name of the provider you would like to edit. 
-+ -image::media/enter-provider-san-1.png[Enter Provider SAN] - -. Enter your provider's SAN in the field labeled _SAN_. -. Click _Save_. -+ -image::media/enter-provider-san-2.png[Enter Provider SAN] - -=== Create an EDI Account === - -CAUTION: You *must* create your provider before you create an EDI account for the provider. - -. Contact your provider requesting the following information: -* Host -* Username -* Password -* Path -* Incoming Directory -* Provider's SAN - - -. In Evergreen select _Administration_ -> _Acquisitions Administration_ -> _EDI Accounts_. -. Click _New Account_. A pop-up will appear. -+ -image::media/create-edi-accounts-2.png[Create EDI Account] - -. Fill in the following fields: -* In the _Label_ field, enter a name for the EDI account. -* In the _Host_ field, enter the requisite FTP or SCP information supplied by -your provider. Be sure to include the protocol (e.g. `ftp://ftp.vendorname.com`) -* In the _Username_ field, enter the username supplied by your provider. -* In the _Password_ field, enter the password supplied by your provider. -* Select your library as the _Owner_ from the drop down menu. Multi-branch libraries should select their top level organizational - unit. -* The _Last Activity_ updates automatically with any inbound or outbound communication. -* In the _Provider_ field, enter the code used in Evergreen for your provider. -* In the _Path_ field, enter the path supplied by your provider. The path indicates a directory on -the provider's server where Evergreen will deposit its outgoing order files. -+ -[TIP] -If your vendor requests a specific file extension for EDI purchase orders, -such as `.ord`, enter the name of the directory, followed by a slash, -followed by an asterisk, followed by a period, followed by the extension. -For example, if the vendor requests that EDI purchase orders be sent to -a directory called `in` with the file extension `.ord`, your path would -be `in/*.ord`. 
-+ -* In the _Incoming Directory_ field, enter the incoming directory supplied by your provider. This indicates -the directory on the vendor’s server where Evergreen will retrieve incoming order responses and invoices. -+ -[NOTE] -Don't worry if your incoming directory is named `out` or `outgoing`. -From your vendor's perspective, this directory is outgoing, because -it contains files that the vendor is sending to Evergreen. However, -from Evergreen's perspective, these files are incoming. -+ -image::media/create-edi-accounts-3.png[Create EDI Account] - -. Click _Save_. -. Click on the link in the _Provider_ field. -+ -image::media/create-edi-accounts-4.png[Create EDI Account] - -. Select the EDI account that has just been created from the _EDI Default_ drop down menu. -+ -image::media/create-edi-accounts-5.png[Create EDI Account] - -. Click _Save_. - -=== EDI Messages === - -indexterm:[EDI,messages] -indexterm:[acquisitions,EDI,messages] - - -The EDI Messages screen displays all incoming and outgoing messages between the -library and its providers. To see details of a particular EDI message, -including the raw EDIFACT message, double click on a message entry. To find a -specific EDI message, the Filter options can be useful. Outside the Admin -interface, EDI messages that pertain to a specific purchase order can be -viewed from the purchase order interface (See _Acquisitions -> Purchase Orders_). - -== Exchange Rates == - -indexterm:[acquisitions,exchange rates] - -Exchange rates define the rate of exchange between currencies. Evergreen will -automatically calculate exchange rates for purchases. 
Evergreen assumes that the -currency of the purchasing fund is identical to the currency of the provider, -but it provides for two unique situations: If the currency of the fund that is -used for the purchase is different from the currency of the provider as listed -in the provider profile, then Evergreen will use the exchange rate to calculate -the price of the item in the currency of the fund and debit the fund -accordingly. When money is transferred between funds that use different -currency types, Evergreen will automatically use the exchange rate to convert -the money to the currency of the receiving fund. During such transfers, -however, staff can override the automatic conversion by providing an explicit -amount to credit to the receiving fund. - -=== Create an exchange rate === - -. To create a new exchange rate, click _Administration -> Acquisitions Administration -> -Exchange Rates_. - -. Click New Exchange Rate. - -. Enter the From Currency from the drop-down menu populated by the currency -types. - -. Enter the To Currency from the drop-down menu populated by the currency types. - -. Enter the exchange Ratio. - -. Click _Save_. - -=== Edit an exchange rate === - -Edit an exchange rate just as you would edit a currency type. - -== MARC Federated Search == - - -indexterm:[acquisitions,MARC federated search] - -The MARC Federated Search enables you to import bibliographic records into a -selection list or purchase order from a Z39.50 source. - -. Click _Acquisitions -> MARC Federated Search_. -. Check the boxes of Z39.50 services that you want to search. Your local -Evergreen Catalog is checked by default. Click Submit. -+ -image::media/acq_marc_search.png[search form] -+ -. A list of results will appear. Click the _Copies_ link to add copy information -to the line item. See <> for more -information. -. Click the Notes link to add notes or line item alerts to the line item. See -<> for more information. -. Enter a price in the _Estimated Price_ field. -. 
You can save the line item(s) to a selection list by checking the box on the -line item and clicking _Actions -> Save Items to Selection List_. You can also -create a purchase order from the line item(s) by checking the box on the line -item and clicking _Actions -> Create Purchase Order_. - -image::media/acq_marc_search-2.png[line item] - -== Fund Tags == - -indexterm:[acquisitions,funds,tags] - -You can apply tags to funds so that you can group funds for easy reporting. For -example, you have three funds for children's materials: Children's Board Books, -Children's DVDs, and Children's CDs. Assign a fund tag of 'children's' to each -fund. When you need to report on the amount that has been spent on all -children's materials, you can run a report on the fund tag to find total - expenditures on children's materials rather than reporting on each individual -fund. - -=== Create a Fund Tag === - -. To create a fund tag, click _Administration -> Acquisitions Administration -> Fund Tags_. -. Click _New Fund Tag_. No limits exist on the number of characters that can be -entered in this field. -. Select a Fund Tag Owner from the drop-down menu. The owner indicates the -organizational unit(s) whose staff can use this fund tag. This menu is -populated with the shortnames that you created for your libraries in the -organizational units tree (See Administration -> Server Administration -> Organizational -Units). -+ -[NOTE] -The rule of parental inheritance applies to this list. -+ -. Enter a Fund Tag Name. No limits exist on the number of characters that can be -entered in this field. -. Click _Save_. - -== Funding Sources == - -indexterm:[acquisitions,funding sources] - -Funding sources allow you to specify the sources that contribute monies to your -fund(s). You can create as few or as many funding sources as you need. These -can be used to track exact amounts for accounts in your general ledger. You can - then use funds to track spending and purchases for specific collections. 
- -=== Create a funding source === - -. To create a new funding source, click _Administration -> Acquisitions Administration -> -Funding Source_. -. Enter a funding source name. No limits exist on the number of characters that -can be entered in this field. -. Select an owner from the drop-down menu. The owner indicates the -organizational unit(s) whose staff can use this funding source. This menu is -populated with the shortnames that you created for your libraries in the -organizational units tree (See Administration -> Server Administration -> Organizational -Units). -+ -[NOTE] -The rule of parental inheritance applies to this list. For example, if a system -is made the owner of a funding source, then users with appropriate permissions -at the branches within the system could also use the funding source. -+ -. Create a code for the source. No limits exist on the number of characters that - can be entered in this field. -. Select a currency from the drop-down menu. This menu is populated from the -choices in the Currency Types interface. -. Click _Save_. - -=== Allocate credits to funding sources === - -. Apply a credit to this funding source. - -. Enter the amount of money that the funding source contributes to the -organization. Funding sources are not tied to fiscal or calendar years, so you -can continue to add money to the same funding source over multiple years, e.g. -County Funding. Alternatively, you can name funding sources by year, e.g. County -Funding 2010 and County Funding 2011, and apply credits each year to the -matching source. - -. To apply a credit, click on the hyperlinked name of the funding source. The -Funding Source Details will appear. - -. Click _Apply Credit_. - -. Enter an amount to apply to this funding source. - -. Enter a note. This field is optional. - -. Click _Apply_. 
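The running totals behind a funding source amount to a simple ledger: credits in, allocations out. A sketch under those definitions (the class and method names are hypothetical, not Evergreen's API):

```python
# Hypothetical ledger sketch for a funding source: credits applied
# to the source, allocations made out to funds. Illustrative only.
class FundingSource:
    def __init__(self, name):
        self.name = name
        self.credits = []       # list of (amount, note) tuples
        self.allocations = []   # list of (fund_name, amount) tuples

    def apply_credit(self, amount, note=""):
        self.credits.append((amount, note))

    def allocate_to_fund(self, fund_name, amount):
        # Prevent allocating more than the source currently holds.
        if amount > self.balance():
            raise ValueError("allocation exceeds funding source balance")
        self.allocations.append((fund_name, amount))

    def balance(self):
        return (sum(amount for amount, _ in self.credits)
                - sum(amount for _, amount in self.allocations))

county = FundingSource("County Funding")
county.apply_credit(10000.00, "Annual appropriation")
county.allocate_to_fund("Children's DVDs", 2500.00)
```

This mirrors the workflow above: credits are not tied to a fiscal year, so the same source can keep accumulating credits while allocations draw the balance down.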
- -=== Allocate credits to funds === - -If you have already set up your funds, then you can then click the Allocate to -Fund button to apply credits from the funding sources to the funds. If you have -not yet set up your funds, or you need to add a new one, you can allocate -credits to funds from the funds interface. See section 1.2 for more information. - -. To allocate credits to funds, click _Allocate to Fund_. - -. Enter the amount that you want to allocate. - -. Enter a note. This field is optional. - -. Click _Apply_. - -=== Track debits and credits === - -You can track credits to and allocations from each funding source. These amounts - are updated when credits and allocations are made in the Funding Source - Details. Access the Funding Source Details by clicking on the hyperlinked name - of the Funding Source. - -== Funds == - -indexterm:[acquisitions,funds] - -Funds allow you to allocate credits toward specific purchases. In the funds -interface, you can create funds; allocate credits from funding sources to funds; - transfer money between funds; and apply fund tags to funds. Funds are created - for a specific year, either fiscal or calendar. These funds are owned by org - units. At the top of the funds interface, you can set a contextual org unit and - year. The drop-down menu at the top of the screen enables you to focus on funds - that are owned by specific organizational units during specific years. - -=== Create a fund === - -. To create a new fund, click _Administration -> Acquisitions Administration -> Funds_. -. Enter a name for the fund. No limits exist on the number of characters that -can be entered in this field. -. Create a code for the fund. No limits exist on the number of characters that -can be entered in this field. -. Enter a year for the fund. This can be a fiscal year or a calendar year. The -format of the year is YYYY. -. Select an org unit from the drop-down menu. 
The org unit indicates the -organizational units whose staff can use this fund. This menu is populated with -the shortnames that you created for your libraries in the organizational units -tree (See Administration -> Server Administration -> Organizational Units). -+ -[NOTE] -The rule of parental inheritance applies to this list. -+ -. Select a currency type from the drop-down menu. This menu is composed of -entries in the currency types menu. When a fund is applied to a line item or -copy, the price of the item will be encumbered in the currency associated with -the fund. -. Click the Active box to activate this fund. You cannot make purchases from -this fund if it is not active. -. Enter a Balance Stop Percent. The balance stop percent prevents you from -making purchases when only a specified amount of the fund remains. For example, -if you want to spend 95 percent of your funds, leaving a five percent balance in - the fund, then you would enter 95 in the field. When the fund reaches its - balance stop percent, it will appear in red when you apply funds to copies. -. Enter a Balance Warning Percent. The balance warning percent gives you a -warning that the fund is low. You can specify any percent. For example, if you -want to spend 90 percent of your funds and be warned when the fund has only 10 -percent of its balance remaining, then enter 90 in the field. When the fund -reaches its balance warning percent, it will appear in yellow when you apply -funds to copies. -. Check the Propagate box to propagate funds. When you propagate a fund, the ILS -will create a new fund for the following fiscal year with the same parameters -as your current fund. All of the settings transfer except for the year and the -amount of money in the fund. Propagation occurs during the fiscal year close-out -operation. -. Check the Rollover box if you want to roll over remaining funds into the same -fund next year. 
You should also check this box if you only want to roll over -encumbrances into next year's fund. -. Click _Save_. - -=== Allocate credits from funding sources to funds === - -Credits can be applied to funds from funding sources using the fund interface. -The credits that you apply to the fund can be applied later to purchases. - -. To access funds, click _Administration -> Acquisitions Administration -> Funds_. - -. Click the hyperlinked name of the fund. - -. To add a credit to the fund, click the Create Allocation tab. - -. Choose a Funding Source from the drop-down menu. - -. Enter an amount that you want to apply to the fund from the funding source. - -. Enter a note. This field is optional. - -. Click _Apply_. - -=== Transfer credits between funds === - -The credits that you allocate to funds can be transferred between funds if -desired. In the following example, you can transfer $500.00 from the Young Adult -Fiction fund to the Children's DVD fund. - -. To access funds, click _Administration -> Acquisitions Administration -> Funds_. - -. Click the hyperlinked name of the originating fund. - -. The Fund Details screen appears. Click Transfer Money. - -. Enter the amount that you would like to transfer. - -. From the drop-down menu, select the destination fund. - -. Add a note. This field is optional. - -. Click _Transfer_. - -=== Track balances and expenditures === - -The Fund Details allows you to track the fund's balance, encumbrances, and -amount spent. It also allows you to track allocations from the funding -source(s), debits, and fund tags. - -. To access the fund details, click on the hyperlinked name of the fund that you -created. - -. The Summary allows you to track the following: - -. Balance - The balance is calculated by subtracting both items that have been -invoiced and encumbrances from the total allocated to the fund. -. Total Allocated - This amount is the total amount allocated from the Funding -Source. -. 
Spent Balance - This balance is calculated by subtracting only the items that -have been invoiced from the total allocated to the fund. It does not include -encumbrances. -. Total Debits - The total debits are calculated by adding the cost of items -that have been invoiced and encumbrances. -. Total Spent - The total spent is calculated by adding the cost of items that -have been invoiced. It does not include encumbrances. -. Total Encumbered - The total encumbered is calculated by adding all -encumbrances. - - -=== Fund reporting === - -indexterm:[acquisitions,funds,reports] -indexterm:[reports,funds] - -A core source, Fund Summary, is available in the reports interface. This -core source enables librarians to easily run a report on fund activity. Fields -that are accessible in this interface include Remaining Balance, Total -Allocated, Total Encumbered, and Total Spent. - - -image::media/Core_Source_1.jpg[Core_Source1] - - - -=== Edit a fund === - -Edit a fund just as you would edit a currency type. - -=== Perform fiscal year close-out operation === - -indexterm:[acquisitions,funds,fiscal rollover] - -The Fiscal Year Close-Out Operation allows you to deactivate funds for the -current year and create analogous funds for the next year. It transfers -encumbrances to the analogous funds, and it rolls over any remaining funds if -you checked the rollover box when creating the fund. - -. To access the year end closeout of a fund, click Administration -> Server -Administration -> Acquisitions -> Funds. - -. Click _Fund Propagation and Rollover_. - -. Check the box adjacent to _Perform Fiscal Year Close-Out Operation_. - -. For funds that have the "Rollover" setting enabled, if you want to move the -fund's encumbrances to the next year without moving unspent money, check the -box adjacent to _Limit Fiscal Year Close-out Operation to Encumbrances_. 
-+ -[NOTE] -The _Limit Fiscal Year Close-out Operation to Encumbrances_ will only display -if the _Allow funds to be rolled over without bringing the money along_ Library -Setting has been enabled. This setting is available in the Library Setting -Editor accessible via _Administration_ -> _Local Administration_ -> _Library -Settings Editor_. -+ -image::media/Fiscal_Rollover1.jpg[Fiscal_Rollover1] - -. Notice that the context org unit reflects the context org unit that you -selected at the top of the Funds screen. - -. If you want to perform the close-out operation on the context org unit and its -child units, then check the box adjacent to Include Funds for Descendant Org -Units. - -. Check the box adjacent to dry run if you want to test changes to the funds -before they are enacted. Evergreen will generate a summary of the changes that -would occur during the selected operations. No data will be changed. - -. Click _Process_. - -. Evergreen will begin the propagation process. Evergreen will make a clone of -each fund, but it will increment the year by 1. - -== Invoice menus == - -indexterm:[acquisitions,invoices] - -Invoice menus allow you to create drop-down menus that appear on invoices. You -can create an invoice item type or invoice payment method. - -=== Invoice item type === - -The invoice item type allows you to enter the types of additional charges that -you can add to an invoice. Examples of additional charge types might include -taxes or processing fees. Charges for bibliographic items are listed separately -from these additional charges. A default list of charge types displays, but you -can add custom charge types to this list. Invoice item types can also be used -when adding non-bibliographic items to a purchase order. When invoiced, the -invoice item type will copy from the purchase order to the invoice. - -. To create a new charge type, click _Administration -> Acquisitions Administration -> -Invoice Item Type_. - -. Click _New Invoice Item Type_. 
-
-. Create a code for the charge type. No limits exist on the number of characters
-that can be entered in this field.
-
-. Create a label. No limits exist on the number of characters that can be
-entered in this field. The text in this field appears in the drop-down menu on
-the invoice.
-
-. If items on the invoice were purchased with the monies in multiple funds, then
-you can divide the additional charge across funds. Check the box adjacent to
-_Prorate_ if you want to prorate the charge across funds.
-
-. Click _Save_.
-
-=== Invoice payment method ===
-
-The invoice payment method allows you to predefine the type(s) of invoices and
-payment method(s) that you accept. The text that you enter in the admin module
-will appear as a drop-down menu in the invoice type and payment method fields on
-the invoice.
-
-. To create a new invoice payment method, click _Administration ->
-Acquisitions Administration -> Invoice Payment Method_.
-
-. Click _New Invoice Payment Method_.
-
-. Create a code for the invoice payment method. No limits exist on the number of
-characters that can be entered in this field.
-
-. Create a name for the invoice payment method. No limits exist on the number of
-characters that can be entered in this field. The text in this field appears in
-the drop-down menu on the invoice.
-
-. Click _Save_.
-
-Payment methods can be deleted from this screen.
-
-== Line Item Features ==
-[[line_item_features]]
-
-indexterm:[acquisitions,line items]
-
-Line item alerts are predefined text that can be added to line items that are on
-selection lists or purchase orders. You can define the alerts from which staff
-can choose. Line item alerts appear in a pop-up box when the line item, or any
-of its copies, is marked as received.
-
-=== Create a line item alert ===
-
-. To create a line item alert, click _Administration -> Acquisitions Administration ->
-Line Item Alerts_.
-
-. Click _New Line Item Alert Text_.
-
-. Create a code for the text. 
No limits exist on the number of characters that -can be entered in this field. - -. Create a description for the text. No limits exist on the number of characters -that can be entered in this field. - -. Select an owning library from the drop-down menu. The owning library indicates -the organizational units whose staff can use this alert. This menu is populated -with the shortnames that you created for your libraries in the organizational -units tree (See Administration -> Server Administration -> Organizational Units). - -. Click _Save_. - -=== Line item MARC attribute definitions === - -Line item attributes define the fields that Evergreen needs to extract from the -bibliographic records that are in the acquisitions database to display in the -catalog. Also, these attributes will appear as fields in the New Brief Record -interface. You will be able to enter information for the brief record in the -fields where attributes have been defined. - -== Providers == - -Providers are vendors. You can create a provider profile that includes contact -information for the provider, holdings information, invoices, and other -information. - -=== Create a provider === - -. To create a new provider, click _Administration_ -> _Acquisitions Administration_ -> -_Providers_. - -. Enter the provider name. - -. Create a code for the provider. No limits exist on the number of characters -that can be entered in this field. - -. Select an owner from the drop-down menu. The owner indicates the -organizational units whose staff can use this provider. This menu is populated -with the shortnames that you created for your libraries in the organizational -units tree (See Administration -> Server Administration -> Organizational Units). -+ -[NOTE] -The rule of parental inheritance applies to this list. -+ -. Select a currency from the drop-down menu. This drop-down list is populated by -the list of currencies available in the currency types. - -. 
A provider must be active in order for purchases to be made from that
-provider. To activate the provider, check the box adjacent to Active. To
-deactivate a vendor, uncheck the box.
-
-. Add the default # of copies that are typically ordered through the provider.
-This number will automatically populate the line item's _Copies_ box on any POs
-associated with this provider. If another quantity is entered during the
-selection or ordering process, it will override this default. If no number is
-specified, the default number of copies will be zero.
-
-. Select a default claim policy from the drop-down box. This list is derived
-from the claim policies that can be created.
-
-. Select an EDI default. This list is derived from the EDI accounts that can be
-created.
-
-. Enter the provider's email address.
-
-. In the Fax Phone field, enter the provider's fax number.
-
-. In the holdings tag field, enter the tag in which the provider places holdings
-data.
-
-. In the phone field, enter the provider's phone number.
-
-. If prepayment is required to purchase from this provider, then check the box
-adjacent to prepayment required.
-
-. Enter the Standard Address Number (SAN) for your provider.
-
-. Enter the web address for the provider's website in the URL field.
-
-. Click Save.
-
-=== Add contact and holdings information to providers ===
-
-After you save the provider profile, the screen reloads so that you can save
-additional information about the provider. You can also access this screen by
-clicking the hyperlinked name of the provider on the Providers screen. The tabs
-allow you to add a provider address and contact, attribute definitions, and
-holding subfields. You can also view invoices associated with the provider.
-
-. Enter a Provider Address, and click Save.
-+
-[NOTE]
-Required fields for the provider address are: Street 1, city, state, country,
-post code. You may have multiple valid addresses.
-+
-. Enter the Provider Contact, and click Save.
-
-. 
Your vendor may include information that is specific to your organization in -MARC tags. You can specify the types of information that should be entered in -each MARC tag. Enter attribute definitions to correlate MARC tags with the -information that they should contain in incoming vendor records. Some technical -knowledge is required to enter XPath information. As an example, if you need to -import the PO Name, you could set up an attribute definition by adding an XPath -similar to: -+ ------------------------------------------------------------------------------- -code => purchase_order -xpath => //*[@tag="962"]/*[@code="p"] -Is Identifier => false ------------------------------------------------------------------------------- -+ -where 962 is the holdings tag and p is the subfield that contains the PO Name. - - -. You may have entered a holdings tag when you created the provider profile. You -can also enter holdings subfields. Holdings subfields allow you to -specify subfields within the holdings tag to which your vendor adds holdings -information, such as quantity ordered, fund, and estimated price. - -. Click invoices to access invoices associated with a provider. - -=== Edit a provider === - -Edit a provider just as you would edit a currency type. - -You can delete providers only if no purchase orders have been assigned to them. - diff --git a/docs-antora/modules/admin/pages/actiontriggers.adoc b/docs-antora/modules/admin/pages/actiontriggers.adoc deleted file mode 100644 index 8d43a1c49a..0000000000 --- a/docs-antora/modules/admin/pages/actiontriggers.adoc +++ /dev/null @@ -1,278 +0,0 @@ -= Notifications / Action Triggers = -:toc: - - -== Introduction == - -indexterm:[action triggers, event definitions, notifications] - -Action Triggers give administrators the ability to set up actions for -specific events. They are useful for notification events such as hold notifications. 
-
-To access the Action Triggers module, select *Administration* -> *Local Administration* -> *Notifications / Action triggers*.
-
-[NOTE]
-==========
-You must have Local Administrator permissions to access the Action Triggers module.
-==========
-
-You will notice four tabs on this page: <<event_definitions,Event Definitions>>, <<hooks,Hooks>>, <<reactors,Reactors>> and <<validators,Validators>>.
-
-
-[#event_definitions]
-
-== Event Definitions ==
-
-Event Definitions is the main tab and contains the key fields when working with action triggers. These fields include:
-
-=== Table 1: Action Trigger Event Definitions ===
-
-
-|==============================================
-|*Field* |*Description*
-| Owning Library |The shortname of the library for which the action / trigger / hook is defined.
-| Name |The name of the trigger event, which links to a trigger event environment containing a set of fields that will be returned to the <<validators,Validators>> and/or <<reactors,Reactors>> for processing.
-| <<hooks,Hook>> |The name of the trigger for the trigger event. The underlying action_trigger.hook table defines the Fieldmapper class in the core_type column off of which the rest of the field definitions ``hang''.
-| Enabled |Sets the given trigger as enabled or disabled. This must be set to enabled for the Action trigger to run.
-| Processing Delay |Defines how long after a given trigger / hook event has occurred before the associated action (``Reactor'') will be taken.
-| Processing Delay Context Field |Defines the field associated with the event on which the processing delay is calculated. For example, the processing delay context field on the hold.capture hook (which has a core_type of ahr) is _capture_time_.
-| Processing Group Context Field |Used to batch actions based on its associated group.
-| <<reactors,Reactor>> |Links the action trigger to the Reactor.
-| <<validators,Validator>> |The subroutine receives the trigger environment as an argument (see the linked Name for the environment definition) and returns either _1_ if the validator is _true_ or _0_ if the validator returns _false_. 
-| Event Repeatability Delay |Allows events to be repeated after this delay interval. -| Failure Cleanup |After an event is reacted to and if there is a failure a cleanup module can be run to clean up after the event. -| Granularity |Used to group events by how often they should be run. Options are Hourly, Daily, Weekly, Monthly, Yearly, but you may also create new values. -| Max Event Validity Delay |Allows events to have a range of time that they are valid. This value works with the *Processing Delay* to define a time range. -| Message Library Path |Defines the org_unit object for a Patron Message Center message. -| Message Template |A Template Toolkit template that can be used to generate output for a Patron Message Center message. The output may or may not be used by the reactor or another external process. -| Message Title |The title that will display on a Patron Message Center message. -| Message User Path |Defines the user object for a Patron Message Center message. -| Opt-In Settings Type |Choose which User Setting Type will decide if this event will be valid for a certain user. Use this to allow users to Opt-In or Opt-Out of certain events. -| Opt-In User Field |Set to the name of the field in the selected hook's core type that will link the core type to the actor.usr table. -| Success Cleanup |After an event is reacted to successfully a cleanup module can be run to clean up after the event. -| Template |A Template Toolkit template that can be used to generate output. The output may or may not be used by the reactor or another external process. -|============================================== - - -== Creating Action Triggers == - -. From the top menu, select *Administration* -> *Local Administration* -> *Notifications / Action triggers*. -. Click on the _New_ button. -+ -image::media/new_event_def.png[New Event Definition] -+ -. Select an _Owning Library_. -. Create a unique _Name_ for your new action trigger. -. Select the _Hook_. -. 
Check the _Enabled_ check box.
-. Set the _Processing Delay_ in the appropriate format. E.g. _7 days_ to run 7 days from the trigger event or _01:00:00_ to run 1 hour after the _Processing Delay Context Field_.
-. Set the _Processing Delay Context Field_ and _Processing Group Context Field_.
-. Select the _Reactor_ and _Validator_.
-. Set the _Event Repeatability Delay_.
-. Select the _Failure Cleanup_ and _Granularity_.
-. Set the _Max Event Validity Delay_.
-+
-image::media/event_def_details.png[Event Definition Details]
-+
-. If you wish to send a User Message through the Message Center, set a _Message Library Path_. Enter text in the _Message Template_. Enter a title for this message in _Message Title_, and set a value in _Message User Path_.
-. Select the _Opt-In Setting Type_.
-. Set the _Opt-In User Field_.
-. Select the _Success Cleanup_.
-+
-image::media/event_def_details_2.png[Event Definition Details]
-+
-. Enter text in the _Template_ text box if required. These are for email messages. Here is a sample template for sending 90 day overdue notices:
-
-
- [%- USE date -%]
- [%- user = target.0.usr -%]
- To: [%- params.recipient_email || user.email %]
- From: [%- helpers.get_org_setting(target.home_ou.id, 'org.bounced_emails') || lib.email || params.sender_email || default_sender %]
- Subject: Overdue Items Marked Lost
- Auto-Submitted: auto-generated
-
- Dear [% user.family_name %], [% user.first_given_name %]
- The following items are 90 days overdue and have been marked LOST. 
-
- [%- params.recipient_email || user.email %][%- params.sender_email || default_sender %]
- [% FOR circ IN target %]
- Title: [% circ.target_copy.call_number.record.simple_record.title %]
- Barcode: [% circ.target_copy.barcode %]
- Due: [% date.format(helpers.format_date(circ.due_date), '%Y-%m-%d') %]
- Item Cost: [% helpers.get_copy_price(circ.target_copy) %]
- Total Owed For Transaction: [% circ.billable_transaction.summary.total_owed %]
- Library: [% circ.circ_lib.name %]
- [% END %]
-
-. Once you are satisfied with your new event trigger, click the _Save_ button located at the bottom of the form.
-
-
-[TIP]
-=========
-A quick and easy way to create new action triggers is to clone an existing action trigger.
-=========
-
-=== Cloning Existing Action Triggers ===
-
-. Check the check box next to the action trigger you wish to clone.
-. Click _Clone Selected_ on the top left of the page.
-. An editing window will open. Notice that the fields will be populated with content from the cloned action trigger. Edit as necessary and give the new action trigger a unique Name.
-. Click _Save_.
-
-=== Editing Action Triggers ===
-
-. Double-click on the action trigger you wish to edit.
-. The edit screen will appear. When you are finished editing, click _Save_ at the bottom of the form. Or click _Cancel_ to exit the screen without saving.
-
-[NOTE]
-============
-Before deleting an action trigger, you should consider disabling it through the editing form. This way you can keep it for future use or cloning.
-============
-
-=== Deleting Action Triggers ===
-
-. 
Check the check box next to the action trigger you wish to delete.
-. Click _Delete Selected_ on the top-right of the page.
-
-=== Testing Action Triggers ===
-
-. Go to the list of action triggers.
-. Click on the blue link text for the action trigger you'd like to test.
-+
-image::media/test_event_def.png[Blue Link Text]
-+
-. Go to the Test tab.
-. If there is a test available, fill in the required information.
-. View the output of the test.
-
-image::media/test_event_def_output.png[Test Output]
-
-WARNING: If you are testing an email or SMS notification, use a test account and email as an example. Using the Test feature will actually result in the notification being sent if configured correctly. Similarly, use a test item or barcode when testing a circulation-based event like Mark Lost since the test will mark the item as lost.
-
-[#hooks]
-
-=== Hooks ===
-
-Hooks define the Fieldmapper class in the core_type column off of which the rest of the field definitions ``hang''.
-
-
-==== Table 2. Hooks ====
-
-
-|=======================
-| *Field* | *Description*
-| Hook Key | A unique name given to the hook.
-| Core Type | Used to link the action trigger to the IDL class in fm_IDL.xml.
-| Description | Text to describe the purpose of the hook.
-| Passive | Indicates whether or not an event is created by direct user action or is circumstantial.
-|=======================
-
-You may also create, edit and delete Hooks but the Core Type must refer to an IDL class in the fm_IDL.xml file.
-
-
-[#reactors]
-
-=== Reactors ===
-
-Reactors link the trigger definition to the action to be carried out.
-
-==== Table 3. Action Trigger Reactors ====
-
-
-|=======================
-| Field | Description
-| Module Name | The name of the Module to run if the action trigger is validated. It must be defined as a subroutine in `/openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm` or as a module in `/openils/lib/perl5/OpenILS/Application/Trigger/Reactor/*.pm`. 
-
-| Description | Description of the Action to be carried out.
-|=======================
-
-You may also create, edit and delete Reactors. Just remember that there must be an associated subroutine or module in the Reactor Perl module.
-
-
-[#validators]
-
-=== Validators ===
-
-Validators set the validation test to be performed to determine whether the action trigger is executed.
-
-==== Table 4. Action Trigger Validators ====
-
-
-|=======================
-| Field | Description
-| Module Name | The name of the subroutine in `/openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm` to validate the action trigger.
-| Description | Description of validation test to run.
-|=======================
-
-You may also create, edit and delete Validators. Just remember that there must be an associated subroutine in the Reactor.pm Perl module.
-
-[#processing_action_triggers]
-== Processing Action Triggers ==
-
-To run action triggers, an Evergreen administrator will need to run the trigger processing script. This should be set up as a cron job to run periodically. To run the script, use this command:
-
-----
-/openils/bin/action_trigger_runner.pl --process-hooks --run-pending
-----
-
-You have several options when running the script:
-
-* --run-pending: Run pending events to send emails or take other actions as
-specified by the reactor in the event definition.
-
-* --process-hooks: Create hook events.
-
-* --osrf-config=[config_file]: OpenSRF core config file. Defaults to:
-/openils/conf/opensrf_core.xml
-
-* --custom-filters=[filter_file]: File containing a JSON Object which describes any hooks
-that should use a user-defined filter to find their target objects. Defaults to:
-/openils/conf/action_trigger_filters.json
-
-* --max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go
-away. Defaults to 3600 (1 hour).
-
-* --hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for. 
If none are defined, it
-defaults to the list of hooks defined in the --custom-filters option.
-Requires --process-hooks.
-
-* --granularity=[label]: Limit creating events and running pending events to
-those only with [label] granularity setting.
-
-* --debug-stdout: Print server responses to STDOUT (as JSON) for debugging.
-
-* --lock-file=[file_name]: Sets the lock file for the process.
-
-* --verbose: Show details of script processing.
-
-* --help: Show help information.
-
-Examples:
-
-* Run all pending events that have no granularity set. This is what you tell
-CRON to run at regular intervals.
-+
-----
-perl action_trigger_runner.pl --run-pending
-----
-
-* Batch create all "checkout.due" events.
-+
-----
-perl action_trigger_runner.pl --hooks=checkout.due --process-hooks
-----
-
-* Batch create all events for a specific granularity and to send notices for all
-pending events with that same granularity.
-+
-----
-perl action_trigger_runner.pl --run-pending --granularity=Hourly --process-hooks
-----
-
diff --git a/docs-antora/modules/admin/pages/age_hold_protection.adoc b/docs-antora/modules/admin/pages/age_hold_protection.adoc
deleted file mode 100644
index 6254f76320..0000000000
--- a/docs-antora/modules/admin/pages/age_hold_protection.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-= Age hold protection =
-:toc:
-
-indexterm:[Holds]
-indexterm:[Holds, Age Protection]
-
-Age hold protection prevents new items from filling holds requested for pickup at a library other than the owning library for a specified period of time.
-
-You can define the protection period in *Administration* -> *Server Administration* -> *Age Hold Protect Rules*.
-
-The protection period when applied to an item record can start with the item record create date (default) or active date. You can change this setting in *Administration* -> *Local Administration* -> *Library Settings Editor*: Use Active Date for Age Protection. 
-
-In addition to time period, you can set the proximity value to define which organizational units are allowed to act as pickup libraries. The proximity values affect holds as follows:
-
-* "0" allows only holds where pickup library = owning library
-* "1" allows holds where pickup library = owning library, parent, and child organizational units
-* "2" allows holds where pickup library = owning library, parent, child, and/or sibling organizational units
-
-Age protection only applies to individual item records. You cannot configure age protection rules in hold policies.
-
-== Active date display in OPAC ==
-
-If a library uses the item's active date to calculate holds age protection, the active date will display with the item details instead of the create date in the staff client view of the catalog. Libraries that do not enable the _Use Active Date for Age Protection_ library setting will continue to display the create date.
diff --git a/docs-antora/modules/admin/pages/aged_circs.adoc b/docs-antora/modules/admin/pages/aged_circs.adoc
deleted file mode 100644
index 21b4bb8ddb..0000000000
--- a/docs-antora/modules/admin/pages/aged_circs.adoc
+++ /dev/null
@@ -1,89 +0,0 @@
-= Aging Circulations =
-:toc:
-
-.Use case
-****
-Aging circulations helps to protect patron privacy and save disk space.
-****
-
-Evergreen allows for the bulk anonymization of circulation histories. Evergreen calls this aged circulation. Circulation statistics are preserved (total circs, last checkout/renewal date, checkout/renewal/checkin workstation, etc.) but patron information (name and barcode) is replaced with placeholder text and the link to the patron record is removed.
-
-In the client, the placeholder text will show in the patron field in the Circulation History Tab and in Show Last Few Circulations.
-
-In the database, every time you attempt to `DELETE` a row from `action.circ`, it
-copies over the appropriate data to `action.aged_circulation`,
-then deletes the `action.circ` row. 
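The copy-then-delete behavior described above can be sketched with a small model. This is illustrative Python only, using assumed field names on plain dicts; in Evergreen the real work is done by a database trigger, not application code:

```python
# Illustrative model of circulation aging: deleting a circulation
# preserves its statistics in an "aged" table while dropping the link
# to the patron. The dict "tables" and field names are assumptions for
# the sketch, not Evergreen's actual schema.

def age_circulation(circs, aged_circs, circ_id):
    """Move one circulation row to the aged table, anonymized."""
    row = circs.pop(circ_id)                   # DELETE from the live table ...
    aged = {k: v for k, v in row.items() if k != "usr"}
    aged["usr"] = None                         # ... minus the patron link
    aged_circs[circ_id] = aged
    return aged

circs = {1: {"usr": 42, "circ_lib": "BR1",
             "xact_start": "2020-01-02", "checkin_time": "2020-01-16"}}
aged_circs = {}
age_circulation(circs, aged_circs, 1)

assert 1 not in circs                      # row is gone from the live table
assert aged_circs[1]["usr"] is None        # patron link removed
assert aged_circs[1]["circ_lib"] == "BR1"  # statistics preserved
```

The point of the model is the ordering: the statistical copy is written before the live row disappears, so reports keep working after the patron link is gone.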
- -== Global Flags == - -There are four global flags used for aging circulations. - -1. Historical Circulation Retention Age - determines the timeframe for aging circulations based on transaction age (7 days, 14 days, 30 days, etc). - -2. Historical Circulations Per Item - determines how many circulations to keep (ex. 1, 2, 3). If set to 1, Evergreen will always keep the last (most recent) circulation. - -3. Historical Circulations use most recent xact_finish date instead of last circ's (true or false) - -4. Historical Circulations are kept for global retention age at a minimum, regardless of user preferences (true or false) - - - -== What Data is Aged? == - -Only completed transactions are aged. These circulations have been checked in (returned) and *do not* contain any unpaid fines or bills. - -Data that is not aged includes: - -* open transactions (i.e. checked out) -* closed transactions with unpaid fines -* closed transactions with unpaid bills -* the last X circulation(s) (determined by historical circulations per item flag) - - -[TIP] -========== -Aging circulations will not affect a patron being able to keep their checkout history. Minimal metadata is stored in the patron checkout history table. Once the corresponding circulation is aged, the full circulation metadata is no longer linked to the patron's reading history. -========== - -[TIP] -========== -Just aging circulations is not sufficient to protect patron circulation -history. Fully protecting these data would also involve a thoughtful -approach to logs and backups of these data. -========== - -[TIP] -========== -You can create a cron job to automatically age circulations. -========== - -== How Circulations are Aged == - -The action.aged_circulation table is for statistical reporting while breaking the link to the patron who had the item checked out. - -Circulations get moved under three circumstances in stock Evergreen: - -1. A patron is deleted. 
This moves all of the patron's circulations from action.circulation to action.aged_circulation.
-
-2. A row or row(s) in action.circulation are deleted. The action.age_circ_on_delete trigger moves deleted action.circulations to action.aged_circulation.
-
-3. The action.purge_circulations function is run. This function is meant to be run periodically to enforce patron privacy. Its behavior is controlled by two internal flags: history.circ.retention_age and history.circ.retention_count.
-
-[TIP]
-==========
-The purge_circulations function is often run from a cron via the purge_circulations.srfsh script.
-==========
-
-
-[TIP]
-==========
-The purge_circulations function will take a *long* time to run for the first time on a system that has had much activity. The srfsh script will likely time out before the database function finishes and nothing will get moved.
-==========
-
-
-== Impacts on Billing Data ==
-
-When a circulation is aged, billings and payments linked to the circulation are migrated from the active billing and payment tables to the `money.aged_billing` and `money.aged_payment` tables.
-
-NOTE: Currently, grocery bills are ignored and not aged.
-
diff --git a/docs-antora/modules/admin/pages/allowed_payments.adoc b/docs-antora/modules/admin/pages/allowed_payments.adoc
deleted file mode 100644
index 56b26bd518..0000000000
--- a/docs-antora/modules/admin/pages/allowed_payments.adoc
+++ /dev/null
@@ -1,21 +0,0 @@
-=== Setting limits on allowed payment amounts ===
-
-Two settings are available to prevent library staff
-from accidentally clearing all patron bills by scanning a
-barcode into the Payment Amount field, or accidentally
-entering the amount without a decimal point (such as you
-would when using a cash register).
-
-Both settings are available via the Library Settings Editor.
-The Payment amount threshold for Are You Sure? 
dialog
-(`ui.circ.billing.amount_warn`) setting identifies the amount
-above which staff will be asked if they're sure they want
-to apply the payment. The Maximum payment amount allowed
-(`ui.circ.billing.amount_limit`) setting identifies the
-maximum amount of money that can be accepted through the
-staff client.
-
-These settings only affect the staff client, not credit
-cards accepted through the public catalog, or direct API
-calls from third-party tools.
-
diff --git a/docs-antora/modules/admin/pages/apache_access_handler.adoc b/docs-antora/modules/admin/pages/apache_access_handler.adoc
deleted file mode 100644
index c898972ea0..0000000000
--- a/docs-antora/modules/admin/pages/apache_access_handler.adoc
+++ /dev/null
@@ -1,141 +0,0 @@
-[#apache_access_handler_perl_module]
-= Apache Access Handler Perl Module =
-:toc:
-
-The OpenILS::WWW::AccessHandler Perl module is intended for limiting patron
-access to configured locations in Apache. These locations could be folder
-trees, static files, non-Evergreen dynamic content, or other Apache
-features/modules. It is intended as a more patron-oriented and transparent
-version of the OpenILS::WWW::Proxy and OpenILS::WWW::Proxy::Authen modules.
-
-Instead of using Basic Authentication, the AccessHandler module redirects
-to the OPAC for login. 
Once logged in, additional checks can be performed based
-on configured variables:
-
- * Permission Checks (at Home OU or specified location)
- * Home OU Checks (Org Unit or Descendant)
- * "Good standing" Checks (Not Inactive or Barred)
-
-Use of the module is a simple addition to a Location block in Apache:
-
-[source,conf]
-----
-<Location /path/to/be/protected>
-    PerlAccessHandler OpenILS::WWW::AccessHandler
-    # For each option you wish to set:
-    PerlSetVar OPTION "VALUE"
-</Location>
-----
-
-The available options are:
-
-OILSAccessHandlerLoginURL::
-* Default: /eg/opac/login
-* The page to redirect to when Login is needed
-OILSAccessHandlerLoginURLRedirectVar::
-* Default: redirect_to
-* The variable the login page wants the "destination" URL stored in
-OILSAccessHandlerFailURL::
-* Default:
-* URL to go to if Permission, Good Standing, or Home OU checks fail. If not set,
-  a 403 error is generated instead. To customize the 403 you could use an
-  ErrorDocument statement.
-OILSAccessHandlerCheckOU::
-* Default:
-* Org Unit to check Permissions at and/or to load Referrer from. Can be a
-  shortname or an ID.
-OILSAccessHandlerPermission::
-* Default:
-* Permission, or comma- or space-delimited set of permissions, the user must have to
-  access the protected area.
-OILSAccessHandlerGoodStanding::
-* Default: 0
-* If set to a true value, the user must be both Active and not Barred.
-OILSAccessHandlerHomeOU::
-* Default:
-* An Org Unit, or comma- or space-delimited set of Org Units, that the user's Home OU must
-  be equal to or a descendant of to access this resource. Can be set to
-  shortnames or IDs.
-OILSAccessHandlerReferrerSetting::
-* Default:
-* Library Setting to pull a forced referrer string out of, if set.
-
-As the AccessHandler module does not actually serve the content it is
-protecting, but instead merely hands control back to Apache when it is done
-authenticating, you can protect almost anything else you can serve with Apache. 
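The options above can be combined in a single Location block. As an illustration, a hypothetical configuration protecting a staff-only download area might look like the following (the path, permission name, org unit shortname, and failure page are examples chosen for this sketch, not defaults of the module):

```conf
# Hypothetical example: protect /downloads/staff with the AccessHandler
<Location /downloads/staff>
    PerlAccessHandler OpenILS::WWW::AccessHandler
    # Require the STAFF_LOGIN permission, checked at the BR1 org unit
    PerlSetVar OILSAccessHandlerPermission "STAFF_LOGIN"
    PerlSetVar OILSAccessHandlerCheckOU "BR1"
    # Require an Active, non-Barred account
    PerlSetVar OILSAccessHandlerGoodStanding "1"
    # Send failed checks to a friendly page instead of a bare 403
    PerlSetVar OILSAccessHandlerFailURL "/access_denied.html"
</Location>
```

Users who are not logged in are first redirected to the login page named by OILSAccessHandlerLoginURL; the remaining checks run only after a successful login.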
-
-== Use Cases ==
-The general use of this module is "protect access to something else" - what that
-something else is will vary. Some possibilities:
-
- * Apache features
- ** Automatic Directory Indexes
- ** Proxies (see below)
- *** Electronic Databases
- *** Software on other servers/ports
- * Non-Evergreen software
- ** Timekeeping software for staff
- ** Specialized patron request packages
- * Static files and folders
- ** Semi-public Patron resources
- ** Staff-only downloads
-
-== Proxying Websites ==
-One potentially interesting use of the AccessHandler module is to protect an
-Apache Proxy configuration. For example, after installing and enabling
-mod_proxy, mod_proxy_http, and mod_proxy_html, you could proxy websites like so:
-
-[source,conf]
-----
-<LocationMatch ^/proxy/>
-    # Base "Rewrite URLs" configuration
-    ProxyHTMLLinks a href
-    ProxyHTMLLinks area href
-    ProxyHTMLLinks link href
-    ProxyHTMLLinks img src longdesc usemap
-    ProxyHTMLLinks object classid codebase data usemap
-    ProxyHTMLLinks q cite
-    ProxyHTMLLinks blockquote cite
-    ProxyHTMLLinks ins cite
-    ProxyHTMLLinks del cite
-    ProxyHTMLLinks form action
-    ProxyHTMLLinks input src usemap
-    ProxyHTMLLinks head profile
-    ProxyHTMLLinks base href
-    ProxyHTMLLinks script src for
-
-    # To support scripting events (with ProxyHTMLExtended On)
-    ProxyHTMLEvents onclick ondblclick onmousedown onmouseup \
-        onmouseover onmousemove onmouseout onkeypress \
-        onkeydown onkeyup onfocus onblur onload \
-        onunload onsubmit onreset onselect onchange
-
-    # Limit all Proxy connections to authenticated sessions by default
-    PerlAccessHandler OpenILS::WWW::AccessHandler
-
-    # Strip out Evergreen cookies before sending to remote server
-    RequestHeader edit Cookie "^(.*?)ses=.*?(?:$|;)(.*)$" $1$2
-    RequestHeader edit Cookie "^(.*?)eg_loggedin=.*?(?:$|;)(.*)$" $1$2
-</LocationMatch>
-
-<Location /proxy/example/>
-    # Proxy example.net
-    ProxyPass http://www.example.net/
-    ProxyPassReverse http://www.example.net/
-    ProxyPassReverseCookieDomain example.net example.com
-    ProxyPassReverseCookiePath / /proxy/example/
-
-    ProxyHTMLEnable On
-    ProxyHTMLURLMap http://www.example.net/ /proxy/example/
-    ProxyHTMLURLMap / /proxy/mail/
-    ProxyHTMLCharsetOut *
-
-    # Limit to BR1 and BR3 users
-    PerlSetVar OILSAccessHandlerHomeOU "BR1,BR3"
-</Location>
-----
-
-As mentioned above, this can be used for multiple reasons. In addition to
-websites such as online databases for patron use, you may wish to proxy software
-for staff or patron use to make it appear on your catalog domain, or perhaps to
-keep from needing to open extra ports in a firewall.
diff --git a/docs-antora/modules/admin/pages/apache_rewrite_tricks.adoc b/docs-antora/modules/admin/pages/apache_rewrite_tricks.adoc
deleted file mode 100644
index 5008cb3308..0000000000
--- a/docs-antora/modules/admin/pages/apache_rewrite_tricks.adoc
+++ /dev/null
@@ -1,148 +0,0 @@
-[#apache_rewrite_tricks]
-= Apache Rewrite Tricks =
-:toc:
-
-It is possible to use Apache's Rewrite Module features to perform a number of
-useful tricks that can make people's lives much easier.
-
-== Short URLs ==
-Making short URLs for common destinations can simplify making printed media as
-well as shortening or simplifying what people need to type. These are also easy
-to add and require minimal maintenance, and generally can be implemented with a
-single-line addition to your eg_vhost.conf file.
-
-[source,conf]
-----
-# My Account - http://host.ext/myaccount -> My Account Page
-RewriteRule ^/myaccount https://%{HTTP_HOST}/eg/opac/myopac/main [R]
-
-# ISBN Search - http://host.ext/search/isbn/<isbn> -> Search Page
-RewriteRule ^/search/isbn/(.*) /eg/opac/results?_special=1&qtype=identifier|isbn&query=$1 [R]
-----
-
-== Domain Based Content with RewriteMaps ==
-One creative use of Rewrite features is domain-based configuration in a single
-eg_vhost.conf file. 
Regardless of how many VirtualHost blocks use the
-configuration, you don't need to duplicate things for minor changes, and can in
-fact use wildcard VirtualHost blocks to serve multiple subdomains.
-
-For the wildcard blocks you will want to use a ServerAlias directive, and for
-SSL VirtualHost blocks ensure you have a wildcard SSL certificate.
-
-[source,conf]
-----
-ServerAlias *.example.com
-----
-
-For actually changing things based on the domain, or subdomain, you can use
-RewriteMaps. Each RewriteMap is generally a lookup table of some kind. In the
-following examples we will generally use text files, though database lookups
-and external programs are also possible.
-
-Note that in the examples below we generally store things in Environment
-Variables. From within Template Toolkit templates you can access environment
-variables with the ENV object.
-
-.Template Toolkit ENV example, link library name/url if set
-[source,html]
-----
-[% IF ENV.eglibname && ENV.egliburl %]<a href="[% ENV.egliburl %]">[% ENV.eglibname %]</a>[% END %]
-----
-
-The first lookup to do is a domain to identifier, allowing us to re-use
-identifiers for multiple domains. In addition we can also supply a default
-identifier, for when the domain isn't present in the lookup table.
-
-.Apache Config
-[source,conf]
-----
-# This internal map allows us to lowercase our hostname, removing case issues in our lookup table
-# If you prefer uppercase you can use "uppercase int:toupper" instead.
-RewriteMap lowercase int:tolower
-# This provides a hostname lookup
-RewriteMap eglibid txt:/openils/conf/libid.txt
-# This stores the identifier in a variable (eglibid) for later use
-# In this case CONS is the default value for when the lookup table has no entry
-RewriteRule . 
- [E=eglibid:${eglibid:${lowercase:%{HTTP_HOST}}|CONS}]
-----
-
-.Contents of libid.txt File
-[source,txt]
-----
-# Comments can be included
-# Multiple TLDs for Branch 1
-branch1.example.com BRANCH1
-branch1.example.net BRANCH1
-# Branches 2 and 3 don't have alternate TLDs
-branch2.example.com BRANCH2
-branch3.example.com BRANCH3
-----
-
-Once we have identifiers, we can look up other information, when appropriate.
-For example, say we want to look up library names and URLs:
-
-.Apache Config
-[source,conf]
-----
-# Library Name Lookup - Note we provide no default in this case.
-RewriteMap eglibname txt:/openils/conf/libname.txt
-RewriteRule . - [E=eglibname:${eglibname:%{ENV:eglibid}}]
-# Library URL Lookup - Also with no default.
-RewriteMap egliburl txt:/openils/conf/liburl.txt
-RewriteRule . - [E=egliburl:${egliburl:%{ENV:eglibid}}]
-----
-
-.Contents of libname.txt File
-[source,txt]
-----
-# Note that we cannot have spaces in the "value", so &#160; is used instead. &nbsp; is also an option.
-BRANCH1 Branch&#160;One
-BRANCH2 Branch&#160;Two
-BRANCH3 Branch&#160;Three
-CONS Example&#160;Consortium&#160;Name
-----
-
-.Contents of liburl.txt File
-[source,txt]
-----
-BRANCH1 http://branch1.example.org
-BRANCH2 http://branch2.example.org
-BRANCH3 http://branch3.example.org
-CONS http://example.org
-----
-
-Or, perhaps set the "physical location" variable for default search/display library:
-
-.Apache Config
-[source,conf]
-----
-# Lookup "physical location" IDs
-RewriteMap eglibphysloc txt:/openils/conf/libphysloc.txt
-# Note: physical_loc is a variable used in the TTOPAC and should not be re-named
-RewriteRule . 
- [E=physical_loc:${eglibphysloc:%{ENV:eglibid}}]
-----
-
-.Contents of libphysloc.txt File
-[source,txt]
-----
-BRANCH1 4
-BRANCH2 5
-BRANCH3 6
-CONS 1
-----
-
-Going further, you could also replace files to be downloaded, such as images or
-stylesheets, on the fly:
-
-.Apache Config
-[source,conf]
-----
-# Check if a file exists based on eglibid and the requested file name
-# Say, BRANCH1/opac/images/main_logo.png
-RewriteCond %{DOCUMENT_ROOT}/%{ENV:eglibid}%{REQUEST_URI} -f
-# Serve up the eglibid version of the file instead
-RewriteRule (.*) /%{ENV:eglibid}$1
-----
-
-Note that template files themselves cannot be replaced in that manner.
-
diff --git a/docs-antora/modules/admin/pages/audio_alerts.adoc b/docs-antora/modules/admin/pages/audio_alerts.adoc
deleted file mode 100644
index 1d02c9e410..0000000000
--- a/docs-antora/modules/admin/pages/audio_alerts.adoc
+++ /dev/null
@@ -1,35 +0,0 @@
-== Managing audio alerts ==
-
-=== Globally silencing sounds ===
-indexterm:[audio alerts,silencing]
-indexterm:[nosound.wav]
-
-The file `nosound.wav` can be used
-to globally disable audio alerts for a specific event on an Evergreen system.
-
-For example, to silence the alert that sounds after a successful patron search:
-
-[source, bash]
-------------------------------------------------------------------------------
-mkdir -p /openils/var/web/audio/notifications/success/patron/
-cd /openils/var/web/audio/notifications/success/patron/
-ln -s ../../nosound.wav by_search.wav
------------------------------------------------------------------------------- 
-
-
-=== Self-check interface ===
-indexterm:[audio alerts,self check interface]
-indexterm:[self check interface,audio alerts]
-indexterm:[audio_config.tt2]
-
-Sounds may play at certain events in the self check interface. These
-events are defined in the `templates/circ/selfcheck/audio_config.tt2`
-template. 
To use the default sounds, you could run the following command
-from your Evergreen server as the *root* user (assuming that
-`/openils/` is your install prefix):
-
-[source, bash]
-------------------------------------------------------------------------------
-cp -r /openils/var/web/xul/server/skin/media/audio /openils/var/web/.
------------------------------------------------------------------------------- 
-
diff --git a/docs-antora/modules/admin/pages/authentication_proxy.adoc b/docs-antora/modules/admin/pages/authentication_proxy.adoc
deleted file mode 100644
index 9cdaaeee7e..0000000000
--- a/docs-antora/modules/admin/pages/authentication_proxy.adoc
+++ /dev/null
@@ -1,97 +0,0 @@
-= Authentication Proxy =
-:toc:
-
-indexterm:[authentication, proxy]
-
-indexterm:[authentication, LDAP]
-
-To support integration of Evergreen with organizational authentication systems, and to reduce the proliferation of user names and passwords, Evergreen offers a service called open-ils.auth_proxy. If you enable the service, open-ils.auth_proxy supports different authentication mechanisms that implement the authenticate method. You can define a chain of these authentication mechanisms to be tried in order within the *_<authenticators>_* element of the _opensrf.xml_ configuration file, with the option of falling back to the native mode that uses Evergreen’s internal method of password authentication.
-
-This service only provides authentication. There is no support for automatic provisioning of accounts. To authenticate using any authentication system, the user account must first be defined in the Evergreen database. Users are authenticated based on their Evergreen username, which must match their ID on the authentication system.
-
-In order to activate Authentication Proxy, the Evergreen system administrator will need to complete the following steps:
-
-. Edit *_opensrf.xml_*.
-.. Set the *_open-ils.auth_proxy_* app settings *_enabled_* tag to *_true_*.
-.. 
Add the *_authenticator_* to the list of authenticators or edit the existing example authenticator:
-+
-[source,xml]
-----
-<authenticator>
-    <name>ldap</name>
-    <module>OpenILS::Application::AuthProxy::LDAP_Auth</module>
-    <hostname>name.domain.com</hostname>
-    <basedn>ou=people,dc=domain,dc=com</basedn>
-    <authid>cn=username,ou=specials,dc=domain,dc=com</authid>
-    <id_attr>uid</id_attr>
-    <password>my_ldap_password_for_authid_user</password>
-    <login_types>
-        <type>staff</type>
-        <type>opac</type>
-    </login_types>
-    <org_units>
-        <unit>103</unit>
-        <unit>104</unit>
-    </org_units>
-</authenticator>
-----
-+
-* *_name_* : Used to identify each authenticator.
-* *_module_* : Reference to the Perl module used by Evergreen to process the request.
-* *_hostname_* : Hostname of the authentication server.
-* *_basedn_* : Location of the data on your authentication server used to authenticate users.
-* *_authid_* : Administrator ID information used to connect to the authentication server.
-* *_id_attr_* : Field name in the authenticator matching the username in the Evergreen database.
-* *_password_* : Administrator password used to connect to the authentication server. Password for the *_authid_*.
-* *_login_types_* : Specifies which types of logins will use this authenticator. This might be useful if staff use a different LDAP directory than general users.
-* *_org_units_* : Specifies which org units will use the authenticator. This is useful in a consortium environment where libraries use separate authentication systems.
-+
-. Restart Evergreen and Apache to activate the configuration changes.
-
-[TIP]
-====================================================================
-If using proxy authentication with library employees who will use
-the _Change Operator_ feature in the client software, add
-"Temporary" as a *_login_types_*.
-====================================================================
-
-
-== Using arbitrary LDAP usernames ==
-
-Authentication Proxy supports LDAP-based login with a username that is
-different from your Evergreen username.
-
-.Use case
-****
-
-This feature may be useful for libraries that use an LDAP server for
-single sign-on (SSO). 
Let's say you are a post-secondary library using
-student or employee numbers as Evergreen usernames, but you want people
-to be able to log in to Evergreen with their SSO credentials, which may
-be different from their student/employee number. To support this,
-Authentication Proxy can be configured to accept your SSO username on login,
-use it to look up your student/employee number on the LDAP server, and
-log you in as the appropriate Evergreen user.
-
-****
-
-To enable this feature, in the Authentication Proxy configuration for your LDAP server in
-`opensrf.xml`, set `bind_attr` to the LDAP field containing your LDAP
-username, and `id_attr` to the LDAP field containing your student or
-employee number (or whatever other value is used as your Evergreen
-username). If `bind_attr` is not set, Evergreen will assume that your
-LDAP username and Evergreen username are the same.
-
-Now, let's say your LDAP server is only an authoritative auth provider
-for Library A. Nothing prevents the server from reporting that your
-student number is 000000, even if that Evergreen username is already in
-use by another patron at Library B. We want to ensure that Authentication Proxy
-does not use Library A's LDAP server to log you in as the Library B
-patron. For this reason, a new `restrict_by_home_ou` setting has been
-added to Authentication Proxy config. When enabled, this setting restricts LDAP
-authentication to users belonging to a library served by that LDAP
-server (i.e. the user's home library must match the LDAP server's
-`org_units` setting in `opensrf.xml`). Use of this setting is strongly
-recommended. 
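Putting these pieces together, an LDAP authenticator using this feature might look like the following `opensrf.xml` fragment. This is a sketch: the hostname, basedn, and the `sAMAccountName` attribute are illustrative stand-ins for whatever LDAP attribute actually holds your SSO username.

```xml
<authenticator>
    <name>ldap_sso</name>
    <module>OpenILS::Application::AuthProxy::LDAP_Auth</module>
    <hostname>ldap.example.edu</hostname>
    <basedn>ou=people,dc=example,dc=edu</basedn>
    <!-- Users log in with the SSO username stored in this LDAP field... -->
    <bind_attr>sAMAccountName</bind_attr>
    <!-- ...and this field holds the Evergreen username (student/employee number) -->
    <id_attr>uid</id_attr>
    <!-- Only log in users whose home library is in org_units below -->
    <restrict_by_home_ou>true</restrict_by_home_ou>
    <org_units>
        <unit>103</unit>
    </org_units>
</authenticator>
```

With this configuration, a login attempt binds against LDAP using the SSO username, then reads `uid` from the matched entry to find the Evergreen account, refusing the login if that account's home library is not served by org unit 103.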
-
diff --git a/docs-antora/modules/admin/pages/authorities.adoc b/docs-antora/modules/admin/pages/authorities.adoc
deleted file mode 100644
index ad32f08f1d..0000000000
--- a/docs-antora/modules/admin/pages/authorities.adoc
+++ /dev/null
@@ -1,146 +0,0 @@
-= Authorities =
-:toc:
-
-== Authority Control Sets ==
-
-
-The tags and subfields that display in authority records in Evergreen are
-prescribed by control sets. The Library of Congress control set is the default
-control set in Evergreen. You can create customized
-control sets for authority records. Also, you can define thesauri and authority
-fields for these control sets.
-
-Patrons and staff will be able to browse authorities in the OPAC. The following
-fields are browsable by default: author, series, subject, title, and topic. You
-will be able to add custom browse axes in addition to these default fields.
-
-You can specify the MARC tags and subfields that an authority record should
-contain. The Library of Congress control set exists in the staff client by
-default. The control sets feature enables librarians to add or customize new
-control sets.
-
-To access existing control sets, click *Administration* -> *Server Administration* ->
-*Authority Control Sets*.
-
-image::media/Authority_Server_Admin_Menu.png[Server administration authority actions]
-
-=== Add a Control Set ===
-
-. Click *Administration* -> *Server Administration* -> *Authority Control Sets*.
-. Click *New Control Set*.
-. Add a *Name* to the control set. Enter any number of characters.
-. Add a *Description* of the control set. Enter any number of characters.
-. Click *Save*.
-
-image::media/Authority_Control_Sets1.jpg[Authority_Control_Sets1]
-
-== Thesauri ==
-
-A thesaurus describes the semantic rules that govern the meaning of words in a
-MARC record. The thesaurus code, which indicates the specific thesaurus that
-should control a MARC record, is encoded in a fixed field using the mnemonic
-Subj in the authority record. 
Eleven thesauri associated with the Library of -Congress control set exist by default in the staff client. - -To access an existing thesaurus, click *Administration* -> *Server Administration* -> -*Authority Control Sets*, and choose the hyperlinked thesaurus that you -want to access, or click *Administration* -> *Server Administration* -> *Authority Thesauri*. - - -=== Add a Thesaurus === - -. Click *Administration* -> *Server Administration* -> *Authority Control Sets*, -and choose the hyperlinked thesaurus that you want to access, or click *Admin* --> *Server Administration* -> *Authority Thesauri*. -. Click *New Thesaurus*. -. Add a *Thesaurus Code*. Enter any single, upper case character. -This character will be entered in the fixed fields of the MARC record. -. Add a *Name* to the thesaurus. Enter any number of characters. -. Add a *Description* of the thesaurus. Enter any number of characters. - -image::media/Authority_Control_Sets2.jpg[Authority_Control_Sets2] - -== Authority Fields == - - -Authority fields indicate the tags and subfields that should be entered in the -authority record. Authority fields also enable you to specify the type of data -that should be entered in a tag. For example, in an authority record governed -by a Library of Congress control set, the 100 tag would contain a "Heading - -Personal Name." Authority fields also enable you to create the corresponding -tag in the bibliographic record that would contain the same data. - -=== Create an Authority Field === - -. Click *Administration* -> *Server Administration* -> *Authority Control Sets*. -. Click *Authority Fields*. The number in parentheses indicates the number of -authority fields that have been created for the control set. -. Click *New Authority Field*. -. Add a *Name* to the authority field. Enter any number of characters. -. Add a *Description* to describe the type of data that should be entered in -this tag. Enter any number of characters. -. 
Select a *Main Entry* if you are linking the tag(s) to another entry. -. Add a *Tag* in the authority record. -. Add a subfield in the authority record. Multiple subfields should be entered -without commas or spaces. -. Add a *Non-filing indicator* (either 1 or 2) to denote which indicator -contains non-filing information. Leave empty if not applicable. - -. Click *Save*. -+ -image::media/Authority_Control_Sets_Fields_Edit.png[Authority Fields edit form] -+ -. Create the corresponding tag in the bibliographic record that should contain -this information. Click the *None* link in the *Controlled Bib Fields* column. -. Click *New Control Set Bib Field*. -. Add the corresponding tag in the bibliographic record. -. Click *Save*. - -image::media/Authority_Control_Sets4.jpg[Authority_Control_Sets4] - - - -== Browse Axes == - -Authority records can be browsed, by default, along five axes: author, series, -subject, title, and topic. Use the *Browse Axes* feature to create additional -axes. - - -=== Create a new Browse Axis === - -. Click *Administration* -> *Server Administration* -> *Authority Browse Axes* -. Click *New Browse Axis*. -. Add a *code*. Do not enter any spaces. -. Add a *name* to the axis that will appear in the OPAC. Enter any number of -characters. -. Add a *description* of the axis. Enter any number of characters. -. Add a *sorter attribute*. The sorter attribute indicates the order in which -the results will be displayed. -+ -image::media/Authority_Control_Sets5.jpg[Authority_Control_Sets5] -. Assign the axis to an authority so that users can find the authority record -when browsing authorities. Click *Administration* -> *Server Administration* -> -*Authority Control Sets*. -. Choose the control set to which you will add the axis. Click *Authority -Fields*. -+ -image::media/Authority_Control_Sets_Fields.png[Authority fields link] - -. Click the link in the *Axes* column of the tag of your choice. -. Click *New Browse Axis-Authority Field Map*. -. 
Select an *Axis* from the drop-down menu.
-. Click *Save*.
-
-image::media/Authority_Control_Sets6.jpg[Authority_Control_Sets6]
-
-
-*Permissions to use this Feature*
-
-
-To use authority control sets, you will need the following permissions:
-
-* CREATE_AUTHORITY_CONTROL_SET
-* UPDATE_AUTHORITY_CONTROL_SET
-* DELETE_AUTHORITY_CONTROL_SET
-
diff --git a/docs-antora/modules/admin/pages/auto_suggest_search.adoc b/docs-antora/modules/admin/pages/auto_suggest_search.adoc
deleted file mode 100644
index 23d7bf58c1..0000000000
--- a/docs-antora/modules/admin/pages/auto_suggest_search.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-= Auto Suggest in Catalog Search =
-:toc:
-
-The auto suggest feature offers suggestions for completing search terms as the user enters a search query. Ten suggestions are the default, but the number of suggestions is configurable at
-the database level. Scroll through suggestions with your mouse, or use the arrow keys to scroll through the suggestions. Select a suggestion to view records that are linked to
-this suggestion. This feature is not turned on by default. You must turn it on in the Administration module.
-
-
-== Enabling this Feature ==
-
-. To enable this feature, click *Administration* -> *Server Administration* -> *Global Flags*.
-. Scroll down to item 10, OPAC.
-. Double-click anywhere in the row to edit the fields.
-. Check the box adjacent to *Enabled* to turn on the feature.
-. The *Value* field is optional. If you checked *Enabled* in step 4, and you leave this field empty, then Evergreen will only suggest searches for which there are any corresponding MARC records.
-+
-NOTE: If you checked *Enabled* in step 4, and you enter the string, *opac_visible*, into this field, then Evergreen will suggest searches for which
-there are matching MARC records with copies within your search scope. For example, it will suggest MARC records with copies at your branch.
-+
-. 
Click *Save.* - -image::media/Auto_Suggest_in_Catalog_Search2.jpg[Auto_Suggest_in_Catalog_Search2] - -== Using this Feature == - -. Enter search terms into the basic search field. Evergreen will automatically suggest search terms. -. Select a suggestion to view records that are linked to this suggestion. - -image::media/Auto_Suggest_in_Catalog_Search1.jpg[Auto_Suggest_in_Catalog_Search1] - diff --git a/docs-antora/modules/admin/pages/autorenewals.adoc b/docs-antora/modules/admin/pages/autorenewals.adoc deleted file mode 100644 index 0222a7c25f..0000000000 --- a/docs-antora/modules/admin/pages/autorenewals.adoc +++ /dev/null @@ -1,45 +0,0 @@ -= Autorenewals in Evergreen = -:toc: - -== Introduction == - -Circulation policies in Evergreen can now be configured to automatically renew items checked out on patron accounts. Circulations will be renewed automatically and patrons will not need to log in to their OPAC accounts or ask library staff to renew materials. - -Autorenewals are set in the Circulation Duration Rules, which allows this feature to be applied to selected circulation policies. Effectively, this makes autorenewals configurable by patron group, organizational unit or library, and circulation modifier. - -== Configure Autorenewals == - -Autorenewals are configured in *Administration -> Server Administration -> Circulation Duration Rules*. - -Enter the number of automatic renewals allowed in the new field called _max_auto_renewals_. The field called _max_renewals_ will still set the maximum number of manual renewals, whether staff or patron initiated. Typically, the _max_renewals_ value will be greater than _max_auto_renewals_, so that even if no more autorenewals are allowed, a patron may still renew via the OPAC. 
-
-image::media/autorenew_circdur.PNG[Autorenewals in Circulation Duration Rules]
-
-The Circulation Duration Rule can then be applied to specific circulation policies (*Administration -> Local Administration -> Circulation Policies*) to implement autorenewals in Evergreen.
-
-== Autorenewal Notices and Action Triggers ==
-
-Two new action triggers have been added to Evergreen for use with autorenewals. They can be found and configured in *Administration -> Local Administration -> Notifications/Action Triggers*.
-
-* Autorenew
-- Uses the checkout.due hook to automatically renew circulations before they are due.
-- Autorenewals will not occur if the item has holds, if the circulation has reached the maximum number of autorenewals allowed, or if the patron has been blocked from renewing items.
-
-* AutorenewNotify
-- Email notification to inform patrons when their materials are automatically renewed, or when they are not automatically renewed because they meet one of the criteria listed above.
-- This notice can also be configured as an SMS notification.
-- This notice does not change or interact with the Courtesy Notice (Pre-due Notice) that is also available in Evergreen. Libraries should evaluate whether they want to use both Courtesy Notices and Autorenewal notices.
-
-Sample of successful autorenewal notification:
-
-image::media/autorenew_renewnotice.PNG[Notification of Successful Autorenewal]
-
-Sample of blocked autorenewal notification:
-
-image::media/autorenew_norenewnotice.PNG[Notification of Blocked Autorenewal]
-
-== Autorenewals in Patron Accounts ==
-
-A new column called _AutoRenewalsRemaining_ indicates how many autorenewals are available for a transaction. 
- -image::media/autorenew_itemsout.PNG[Autorenewals Remaining in Patron Items Out] diff --git a/docs-antora/modules/admin/pages/backups.adoc b/docs-antora/modules/admin/pages/backups.adoc deleted file mode 100644 index 6ab02a0e97..0000000000 --- a/docs-antora/modules/admin/pages/backups.adoc +++ /dev/null @@ -1,202 +0,0 @@ -= Backing up your Evergreen System = -:toc: - -== Database backups == - -Although it might seem pessimistic, spending some of your limited time preparing for disaster is one of -the best investments you can make for the long-term health of your Evergreen system. If one of your -servers crashes and burns, you want to be confident that you can get a working system back in place -- -whether it is your database server that suffers, or an Evergreen application server. - -At a minimum, you need to be able to recover your system's data from your PostgreSQL database server: -patron information, circulation transactions, bibliographic records, and the like. If all else fails, -you can at least restore that data to a stock Evergreen system to enable your staff and patrons to find -and circulate materials while you work on restoring your local customizations such as branding, colors, -or additional functionality. This section describes how to back up your data so that you or a colleague -can help you recover from various disaster scenarios. - -=== Creating logical database backups === - -The simplest method to back up your PostgreSQL data is to use the `pg_dump` utility to create a logical -backup of your database. Logical backups have the advantage of taking up minimal space, as the indexes -derived from the data are not part of the backup. For example, an Evergreen database with 2.25 million -records and 3 years of transactions that takes over 120 GB on disk creates just a 7.0 GB compressed -backup file. 
The drawback to this method is that you can only recover the data at the exact point in time
-at which the backup began; any updates, additions, or deletions of your data since the backup began will
-not be captured. In addition, when you restore a logical backup, the database server has to recreate all
-of the indexes--so it can take several hours to restore a logical backup of that 2.25 million record
-Evergreen database.
-
-As the effort and server space required for logical database backups are minimal, your first step towards
-preparing for disaster should be to automate regular logical database backups. You should also ensure
-that the backups are stored in a different physical location, so that if a flood or other disaster strikes
-your primary server room, you will not lose your logical backup at the same time.
-
-To create a logical dump of your PostgreSQL database:
-
-. Issue the command to back up your database: `pg_dump -Fc <database name> > <backup filename>`. If you
-are not running the command as the postgres user on the database server itself, you may need to include
-options such as `-U <user name>` and `-h <hostname>` to connect to the database server. You can use a
-newer version of PostgreSQL to run `pg_dump` against an older version of PostgreSQL if your client
-and server operating systems differ. The `-Fc` option specifies the "custom" format: a compressed format
-that gives you a great deal of flexibility at restore time (for example, restoring only one table from
-the database instead of the entire schema).
-. If you created the logical backup on the database server itself, copy it to a server located in a
-different physical location.
-
-You should establish a routine of nightly logical backups of your database, with older logical backups
-being automatically deleted after a given interval.
-
-=== Restoring from logical database backups ===
-
-To increase your confidence in the safety of your data, you should regularly test your ability to
-restore from a logical backup. 
Restoring a logical backup that you created using the custom format -requires the use of the `pg_restore` tool as follows: - -. On the server on which you plan to restore the logical backup, ensure that you have installed -PostgreSQL and the corresponding server package prerequisites. The `Makefile.install` prerequisite -installer that came with your version of Evergreen contains an installation target that should -satisfy these requirements. Refer to the installation documentation for more details. -. As the `postgres` user, create a new database using the `createdb` command into which you will -restore the data. Base the new database on the _template0_ template database to enable the -combination of UTF8 encoding and C locale options, and specify the character type and collation -type as "C" using the `--lc-ctype` and `--lc-collate` parameters. For example, to create a new -database called "testrestore": `createdb --template=template0 --lc-ctype=C --lc-collate=C testrestore` -. As the `postgres` user, restore the logical backup into your newly created database using -the `pg_restore` command. You can use the `-j` parameter to use more CPU cores at a time to make -your recovery operation faster. If your target database is hosted on a different server, you can -use the `-U <username>` and `-h <hostname>` options to connect to that server. For example, -to restore the logical backup from a file named evergreen_20121212.dump into the "testrestore" -database on a system with 2 CPU cores: `pg_restore -j 2 -d testrestore evergreen_20121212.dump` - -=== Creating physical database backups with support for point-in-time recovery === - -While logical database backups require very little space, they also have the disadvantage of -taking a great deal of time to restore for anything other than the smallest of Evergreen systems.
-
Physical database backups are little more than a copy of the database file system, meaning that -the space required for each physical backup will match the space used by your production database. -However, physical backups offer the great advantage of almost instantaneous recovery, because the -indexes already exist and simply need to be validated when you begin database recovery. Your -backup server should match the configuration of your master server as closely as possible including -the version of the operating system and PostgreSQL. - -Like logical backups, physical backups also represent a snapshot of the data at the point in time -at which you began the backup. However, if you combine physical backups with write-ahead-log (WAL) -segment archiving, you can restore a version of your database that represents any point in time -between the time the backup began and the time at which the last WAL segment was archived, a -feature referred to as point-in-time recovery (PITR). PITR enables you to undo the damage that an -accidentally or deliberately harmful UPDATE or DELETE statement could inflict on your production -data, so while the recovery process can be complex, it provides fine-grained insurance for the -integrity of your data when you run upgrade scripts against your database, deploy new custom -functionality, or make global changes to your data. - -To set up WAL archiving for your production Evergreen database, you need to modify your PostgreSQL -configuration (typically located on Debian and Ubuntu servers in -`/etc/postgresql/<version>/postgresql.conf`): - -. Change the value of `archive_mode` to `on` -. Set the value of `archive_command` to a command that accepts the parameters `%f` (representing the -file name of the WAL segment) and `%p` (representing the complete path name for the WAL segment, -including the file name). You should copy the WAL segments to a remote file system that can be read -by the same server on which you plan to create your physical backups.
For example, if `/data/wal` -represents a remote file system to which your database server can write, a possible value of -`archive_command` could be: `test ! -f /data/wal/%f && cp %p /data/wal/%f`, which effectively tests -to see if the destination file already exists, and if it does not, copies the WAL segment to that -location. This command can be and often is much more complex (for example, using `scp` or `rsync` -to transfer the file to the remote destination rather than relying on a network share), but you -can start with something simple. - -Once you have modified your PostgreSQL configuration, you need to restart the PostgreSQL server -before the configuration changes will take hold: - -. Stop your OpenSRF services. -. Restart your PostgreSQL server. -. Start your OpenSRF services and restart your Apache HTTPD server. - -To create a physical backup of your production Evergreen database: - -. From your backup server, issue the -`pg_basebackup -x -D <destination-directory> -U <username> -h <hostname> <database-name>` -command to create a physical backup of the database on your backup server. - -You should establish a process for creating regular physical backups at periodic intervals, -bearing in mind that the longer the interval between physical backups, the more WAL segments -the backup database will have to replay at recovery time to get back to the most recent changes -to the database. For example, to be able to relatively quickly restore the state of your database -to any point in time over the past four weeks, you might take physical backups at weekly intervals, -keeping the last four physical backups and all of the corresponding WAL segments. - -=== Creating a replicated database === - -If you have a separate server that you can use to run a replica of your database, consider -replicating your database to that server.
In the event that your primary database server suffers a -hardware failure, having a database replica gives you the ability to fail over to your database -replica with very little downtime and little or no data loss. You can also improve the performance of -your overall system by directing some read-only operations, such as reporting, to the database replica. -In this section, we describe how to replicate your database using PostgreSQL's streaming replication -support. - -You need to prepare your master PostgreSQL database server to support streaming replicas with several -configuration changes. The PostgreSQL configuration file is typically located on Debian and Ubuntu -servers at `/etc/postgresql/<version>/postgresql.conf`. The PostgreSQL host-based authentication -(`pg_hba.conf`) configuration file is typically located on Debian and Ubuntu servers at -`/etc/postgresql/<version>/pg_hba.conf`. Perform the following steps on your master database server: - -. Turn on streaming replication support. In `postgresql.conf` on your master database server, -change `max_wal_senders` from the default value of 0 to the number of streaming replicas that you need -to support. Note that these connections count as physical connections for the sake of the -`max_connections` parameter, so you might need to increase that value at the same time. -. Enable your streaming replica to endure brief network outages without having to rely on the -archived WAL segments to catch up to the master. In `postgresql.conf` on your production database server, -change `wal_keep_segments` to a value such as 32 or 64. -. Increase the maximum number of log file segments between automatic WAL checkpoints. In `postgresql.conf` -on your production database server, change `checkpoint_segments` from its default of 3 to a value such as -16 or 32. This improves the performance of your database at the cost of additional disk space. -. Create a database user for the specific purpose of replication.
As the postgres user on the production -database server, issue the following commands, where replicant represents the name of the new user: -+ -[source,sql] -createuser replicant -psql -d <database-name> -ALTER ROLE replicant WITH REPLICATION; -+ -. Enable your replica database to connect to your master database server as a streaming replica. In -`pg_hba.conf` on your master database server, add a line to enable the database user replicant to connect -to the master database server from IP address 192.168.0.164: -+ -[source,perl] -host replication replicant 192.168.0.164/32 md5 -+ -. To enable the changes to take effect, restart your PostgreSQL database server. - -To avoid downtime, you can prepare your master database server for streaming replication at any maintenance -interval; then weeks or months later, when your replica server environment is available, you can begin -streaming replication. Once you are ready to set up the streaming replica, perform the following steps on -your replica server: - -. Ensure that the version of PostgreSQL on your replica server matches the version running on your production -server. A difference in the minor version (for example, 9.1.3 versus 9.1.5) will not prevent streaming -replication from working, but an exact match is recommended. -. Create a physical backup of the master database server. -. Add a `recovery.conf` file to your replica database configuration directory. This file contains the -information required to begin recovery once you start the replica database: -+ -[source,perl] -# turn on standby mode, disabling writes to the database -standby_mode = 'on' -# assumes WAL segments are available at network share /data/wal -restore_command = 'cp /data/wal/%f %p' -# connect to the master database to begin streaming replication -primary_conninfo = 'host=kochab.cs.uoguelph.ca user=replicant password=<password>' -+ -. Start the PostgreSQL database server on your replica server. It should connect to the master.
If the -physical backup did not take too long and you had a high enough value for `wal_keep_segments` set on your -master server, the replica should begin streaming replication. Otherwise, it will replay WAL segments -until it catches up enough to begin streaming replication. -. Ensure that the streaming replication is working. Check the PostgreSQL logs on your replica server and -master server for any errors. Connect to the replica database as a regular database user and check for -recent changes that have been made to your master server. - -Congratulations, you now have a streaming replica database that reflects the latest changes to your Evergreen -data! Combined with a routine of regular logical and physical database backups and WAL segment archiving -stored on a remote server, you have a significant insurance policy for your system's data in the event that -disaster does strike. - diff --git a/docs-antora/modules/admin/pages/booking-admin.adoc b/docs-antora/modules/admin/pages/booking-admin.adoc deleted file mode 100644 index 993bc9a5cf..0000000000 --- a/docs-antora/modules/admin/pages/booking-admin.adoc +++ /dev/null @@ -1,190 +0,0 @@ -= Booking Module Administration = -:toc: - -== Creating Bookable Non-Bibliographic Resources == - -Staff with the required permissions (Circulator and above) can create bookable non-bibliographic resources such as laptops, projectors, and meeting rooms. - -The following pieces make up a non-bibliographic resource: - -* Resource Type -* Resource Attribute -* Resource Attribute Values -* Resource -* Resource Attribute Map - -You need to create resource types and resource attributes (features of the resource types), and add booking items (resources) to individual resource types. Each resource attribute may have multiple values. You need to link the applicable features (resource attributes and values) to individual items (resources) through the Resource Attribute Map.
Before you create resources (booking items), you need to have a resource type and associated resource attributes and values, if any, for them. - -=== Create New Resource Type === - -1) Select Administration -> Booking Administration -> Resource Types. - -image::media/booking-create-resourcetype_webclient-1.png[] - -2) A list of current resource types will appear. Use Back and Next buttons to browse the whole list. - -image::media/booking-create-resourcetype-2.png[] - -[NOTE] -You may also see cataloged items in the list. Those items have been marked bookable or booked before. - - -3) To create a new resource type, click New Resource Type in the top right corner. - -image::media/booking-create-resourcetype-3.png[] - -4) A box will appear in which you create your new type of resource. - -image::media/booking-create-bookable-1.png[] - -* Resource Type Name - Give your resource a name. -* Fine Interval - How often will fines be charged? This period can be input in several ways: - -[NOTE] -==================================================================== -** second(s), minute(s), hour(s), day(s), week(s), month(s), year(s) -** sec(s), min(s) -** s, m, h -** 00:00:30, 00:01:00, 01:00:00 -==================================================================== - -* Fine Amount - The amount that will be charged at each Fine Interval. -* Owning Library - The home library of the resource. -* Catalog Item - (Function not currently available.) -* Transferable - This allows the item to be transferred between libraries. -* Inter-booking and Inter-circulation Interval - The amount of time required by your library between the return of a resource and a new reservation for the resource. This interval uses the same input conventions as the Fine Interval. -* Max Fine Amount - The amount at which fines will stop generating. - -5) Click Save when you have entered the needed information.
- -image::media/booking-create-resourcetype-4.png[] - -6) The new resource type will appear in the list. - -image::media/booking-create-resourcetype-5.png[] - -=== Create New Resource Attribute === - -1) Select Administration -> Booking Administration -> Resource Attributes. - -2) Click New Resource Attribute in the top right corner. - -3) A box will appear in which you can add the attributes of the resource. Attributes are categories of descriptive information that are provided to the staff member when the booking request is made. For example, an attribute of a projector may be the type of projector. Other attributes might be the number of seats available in a room, or the computing platform of a laptop. - -image::media/booking-create-bookable-2.png[] - -* Resource Attribute Name - Give your attribute a name. -* Owning Library - The home library of the resource. -* Resource Type - Type the first letter of the resource type's name to display a list, then choose the Resource Type to which the attribute is applicable. -* Is Required - (Function not currently available.) - -4) Click Save when the necessary information has been entered. - -5) The added attribute will appear in the list. - -[NOTE] -One resource type may have multiple attributes. You may repeat the above procedure to add more. - -=== Create New Resource Attribute Value === - -1) One resource attribute may have multiple values. To add a new attribute value, select Administration -> Booking Administration -> Resource Attribute Values. - -2) Click New Resource Attribute Value in the top right corner. - -3) A box will appear in which you assign a value to a particular attribute. Values can be numbers, words, or a combination of them, that describe the particular aspects of the resource that have been defined as Attributes. As all values appear on the same list for selection, values should be as unique as possible. For example, a laptop may have a computing platform that is either PC or Mac.
- -image::media/booking-create-bookable-3.png[] - -* Owning Library - The home library of the resource. -* Resource Attribute - The attribute you wish to assign the value to. -* Valid Value - Enter the value for your attribute. - -4) Click Save when the required information has been added. - -5) The attribute value will appear in the list. Each attribute should have at least two values attached to it; repeat this process for all applicable attribute values. - -=== Create New Resource === - -1) Add items to a resource type. Click Administration -> Booking Administration -> Resources. - -2) Click New Resource in the top right corner. - -3) A box will appear. Add information for the resource. - -image::media/booking-create-bookable-4.png[] - -* Owning Library - The home library of the resource. -* Resource Type - Type the first letter of the resource type's name to display a list, then select the resource type for your item. -* Barcode - Barcode for the resource. -* Overbook - This allows a single item to be reserved, picked up, and returned by multiple patrons during overlapping or identical time periods. -* Is Deposit Required - (Function not currently available.) -* Deposit Amount - (Function not currently available.) -* User Fee - (Function not currently available.) - -4) Click Save when the required information has been added. - -5) The resource will appear in the list. - -[NOTE] -One resource type may have multiple resources attached. - -=== Map Resource Attributes and Values to Resources === - -1) Use Resource Attribute Maps to bring together the resources and their attributes and values. Select Administration -> Booking Administration -> Resource Attribute Maps. - -2) Click New Resource Attribute Map in the top right corner. - -3) A box will appear in which you will map your attributes and values to your resources. - -image::media/booking-create-bookable-5.png[] - -* Resource - Enter the barcode of your resource.
-
* Resource Attribute - Select an attribute that belongs to the Resource Type. -* Attribute Value - Select a value that belongs to your chosen attribute and describes your resource. If your attribute and value do not belong together you will be unable to save. - -4) Click Save once you have entered the required information. - -[NOTE] -A resource may have multiple attributes and values. Repeat the above steps to map all. - -5) The resource attribute map will appear in the list. - -Once all attributes have been mapped your resource will be part of a hierarchy similar to the example below. - -image::media/booking-create-bookable-6.png[] - - -== Editing Non-Bibliographic Resources == - -Staff with the required permissions can edit aspects of existing non-bibliographic resources. For example, a resource type can be edited in the event that the fine amount for a laptop changes from $2.00 to $5.00. - -=== Editing Resource Types === - -1) Bring up your list of resource types. Select Administration -> Booking Administration -> Resource Types. - -2) A list of current resource types will appear. - -3) Double click anywhere on the line of the resource type you would like to edit. - -4) The resource type box will appear. Make your changes and click Save. - -5) Following the same procedure you may edit Resource Attributes, Attribute Values, Resources and Attribute Maps by selecting them on Administration -> Booking Administration. - - - - -== Deleting Non-bibliographic Resources == - -1) To delete a booking resource, go to Administration -> Booking Administration -> Resources. - -2) Select the checkbox in front of the resource you want to delete. Click Delete Selected. The resource will disappear from the list. - -Following the same procedure you may delete Resource Attribute Maps. - -You may also delete Resource Attribute Values, Resource Attributes and Resource Types.
But you have to delete them in the reverse order of creation to make sure each entry is not in use when you try to delete it. - -This is the deletion order: Resource Attribute Map/Resources -> Resource Attribute Values -> Resource Attributes -> Resource Types. - - - - diff --git a/docs-antora/modules/admin/pages/circing_uncataloged_materials.adoc b/docs-antora/modules/admin/pages/circing_uncataloged_materials.adoc deleted file mode 100644 index 8389a83876..0000000000 --- a/docs-antora/modules/admin/pages/circing_uncataloged_materials.adoc +++ /dev/null @@ -1,73 +0,0 @@ -== Circulating uncataloged materials == - -=== Introduction === - -This section discusses settings for circulating items that are not cataloged. -Evergreen offers two ways to circulate an item that is not in the catalog: - -* Pre-cataloged items (also known as on-the-fly items) have a barcode, as -well as some basic metadata which staff members enter at the time of checkout. -These are represented in Evergreen with an item record which has to be manually -deleted or transferred when it is no longer needed. - -* Non-cataloged items (also known as ephemeral items) do not have barcodes, -have no metadata, and are not represented with an item record. No fines -accrue on these materials, but Evergreen does collect statistics on these -circulations. - -=== Pre-cataloged item settings === - -indexterm:[on-the-fly circulation] -indexterm:[pre-cataloged items,routing to a different library] - -By default, when a pre-cataloged item is created, Evergreen sets the _Circ Library_ -field to the library where it was checked out. You may change this so that the -circ library is set to a different library. This can be helpful in cases where the -cataloger who fixes pre-cataloged items is at another library, and you'd like all -pre-cataloged items to be routed to that cataloger's library when they are returned. - -To change this setting: - -. 
Go to Administration > Local Administration > Library Settings Editor. -. Choose _Pre-cat Item Circ Lib_. -. Click _Edit_. -. Select the appropriate context. For example, if all pre-cataloged items in your -system should have the same circ library, you should choose your system as the -context. -. Type in the shortname of the library that should be in the circ lib field. Make -sure to type this correctly, or Evergreen won't be able to create pre-cataloged -items. - -NOTE: Evergreen always sets the owning library of pre-cataloged items to be the -consortium. - -=== Non-cataloged item settings === - -indexterm:[ephemeral items] - -In Evergreen, libraries may elect to create their own local non-cataloged item -types. For example, you may choose to circulate non-cataloged paperbacks or magazine -back-issues, but not wish to catalog them. - -==== Adding a new non-cataloged type ==== - -. Go to Administration > Local Administration > Non-Cataloged Types Editor. -. Under _Create a new non-cataloged type_, start filling out the appropriate - information. -. Choose an appropriate duration. This period of time will be used to calculate - a due date that is displayed to the patron on the patron's receipt and _My Account_ - view in the public catalog. The item will be automatically removed from the - _My Account_ view the day after the due date. -. The _Circulate In-House?_ checkbox is only for your records. This checkbox does - not affect how these materials circulate. -. Click the _Create_ button when you are done. - -image::media/noncataloged_type_add.png[] - -==== Deleting a non-cataloged type ==== - -. Go to Administration > Local Administration > Non-Cataloged Types Editor. -. Click the _Delete_ button next to the type you wish to delete. Note that - if any non-cataloged items of this type have ever been entered, you will - not be able to delete it. 
- diff --git a/docs-antora/modules/admin/pages/circulation_limit_groups.adoc b/docs-antora/modules/admin/pages/circulation_limit_groups.adoc deleted file mode 100644 index e3dda15318..0000000000 --- a/docs-antora/modules/admin/pages/circulation_limit_groups.adoc +++ /dev/null @@ -1,46 +0,0 @@ -= Circulation Limit Sets = -:toc: - -== Maximum Checkout by Shelving Location == - -This feature enables you to specify the maximum number of checkouts of items by -shelving location and is an addition to the circulation limit sets. Circulation -limit sets refine circulation policies by limiting the number of items that -users can check out. Circulation limit sets are linked by name to circulation -policies. - -To limit checkouts by shelving location: - -. Click *Administration -> Local Administration -> Circulation Limit Sets*. -. Click *New* to create a new circulation limit set. -. In the *Owning Library* field, select the library that can create and edit -this limit set. -. Enter a *Name* for the circulation set. You will select the *Name* to link -the circulation limit set to a circulation policy. -. Enter the number of *Items Out* that a user can take from this shelving location. -. Enter the *Min Depth*, or the minimum depth in the org tree that Evergreen -will consider as valid circulation libraries for counting items out. The min -depth is based on org unit type depths. For example, if you want the items in -all of the circulating libraries in your consortium to be eligible for -restriction by this limit set when it is applied to a circulation policy, then -enter a zero (0) in this field. -. Check the box adjacent to *Global Flag* if you want all of the org units in -your consortium to be restricted by this limit set when it is applied to a -circulation policy. Otherwise, Evergreen will only apply the limit to the direct -ancestors and descendants of the owning library. -. Enter a brief *Description* of the circulation limit set. -. Click *Save*. 
- -image::media/Maximum_Checkout_by_Copy_Location1.jpg[Maximum_Checkout_by_Copy_Location1] - -To link the circulation limit set to a circulation policy: - -. Click *Administration* -> *Local Administration* -> *Circulation Policies* -. Select an existing circulation policy, or create a new one. -. Scroll down to the *Linked Limit Sets*. -. Select the *Name* of the limit set that you want to add to the circulation -policy. -. Click *Add*. -. Click *Save*. - -image::media/Maximum_Checkout_by_Copy_Location2.jpg[Maximum_Checkout_by_Copy_Location2] diff --git a/docs-antora/modules/admin/pages/closed_dates.adoc b/docs-antora/modules/admin/pages/closed_dates.adoc deleted file mode 100644 index bceb70591a..0000000000 --- a/docs-antora/modules/admin/pages/closed_dates.adoc +++ /dev/null @@ -1,48 +0,0 @@ -= Set closed dates using the Closed Dates Editor = -:toc: - -indexterm:[Closed Dates] - -These dates are in addition to your regular weekly closed days. Both regular closed days and those entered in the Closed Dates Editor affect due dates and fines: - -* *Due dates.* Due dates that would fall on closed days are automatically pushed forward to the next open day. Likewise, if an item is checked out at 8pm, for example, and would normally be due on a day when the library closes before 8pm, Evergreen pushes the due date forward to the next open day. -* *Overdue fines.* Overdue fines may not be charged on days when the library is closed. This fine behavior depends on how the _Charge fines on overdue circulations when closed_ setting is configured in the Library Settings Editor. - -Closed dates do not affect the processing delays for Action/Triggers. For example, if your library has a trigger event that marks items as lost after 30 days, that 30 day period will include both open and closed dates. - -== Adding a closure == - -. Select _Administration > Local Administration_. -. Select _Closed Dates Editor_. -. Select type of closure: typically Single Day or Multiple Day. -. 
Click the Calendar gadget to select the All Day date or starting and ending - dates. -. Enter a Reason for closure (optional). -. Click *Apply to all of my libraries* if your organizational unit has children - units that will also be closed. This will add closed date entries to all of those - child libraries. -+ -[NOTE] -By default, creating a closed date in a parent organizational unit does _not_ also -close the child unit. For example, adding a system-level closure will not also -close all of that system's branches, unless you check the *Apply to all of my libraries* -box. -+ -. Click *Save*. - -image::media/closed_dates.png[] - -Now that your organizational structure is established, you can begin -configuring permissions for the staff users of your Evergreen system. - -== Detailed closure == - -If your closed dates include a portion of a business day, you should create a detailed closing. - -. Select _Administration -> Local Administration_. -. Select _Closed Dates Editor_. -. Select _Add Detailed Closing_. -. Enter applicable dates, times, and a descriptive reason for the closing. -. Click Save. -. Check the Apply to all of my libraries box if your library is a multi-branch system and the closing applies to all of your branches. - diff --git a/docs-antora/modules/admin/pages/cn_prefixes_and_suffixes.adoc b/docs-antora/modules/admin/pages/cn_prefixes_and_suffixes.adoc deleted file mode 100644 index 4d5ae6290a..0000000000 --- a/docs-antora/modules/admin/pages/cn_prefixes_and_suffixes.adoc +++ /dev/null @@ -1,43 +0,0 @@ -= Call Number Prefixes and Suffixes = -:toc: - -You can configure call number prefixes and suffixes in the Admin module. This feature ensures more precise cataloging because each cataloger will have access to an identical drop down menu of call number prefixes and suffixes that are used at their library. In addition, it may streamline cataloging workflow.
Catalogers can use a drop down menu to enter call number prefixes and suffixes rather than entering them manually. You can also run reports on call number prefixes and suffixes that would facilitate collection development and maintenance. - - -== Configure call number prefixes == - -Call number prefixes are codes that precede a call number. - -To configure call number prefixes: - -1. Select *Administration -> Server Administration -> Call Number Prefixes*. -2. Click *New Prefix*. -3. Enter the *call number label* that will appear on the item. -4. Select the *owning library* from the drop down menu. Staff at this library, and its descendant org units, with the appropriate permissions, will be able to apply this call number prefix. -5. Click *Save*. - - - -image::media/Call_Number_Prefixes_and_Suffixes_2_21.jpg[Call_Number_Prefixes_and_Suffixes_2_21] - - - -== Configure call number suffixes == - -Call number suffixes are codes that succeed a call number. - -To configure call number suffixes: - -1. Select *Administration -> Server Administration -> Call Number Suffixes*. -2. Click *New Suffix*. -3. Enter the *call number label* that will appear on the item. -4. Select the *owning library* from the drop down menu. Staff at this library, and its descendant org units, with the appropriate permissions, will be able to apply this call number suffix. -5. Click *Save*. - - -image::media/Call_Number_Prefixes_and_Suffixes_2_22.jpg[Call_Number_Prefixes_and_Suffixes_2_22] - - -== Apply Call Number Prefixes and Suffixes == - -You can apply call number prefixes and suffixes to items from a pre-configured list in the Holdings Editor. diff --git a/docs-antora/modules/admin/pages/copy_locations.adoc b/docs-antora/modules/admin/pages/copy_locations.adoc deleted file mode 100644 index bed58bb841..0000000000 --- a/docs-antora/modules/admin/pages/copy_locations.adoc +++ /dev/null @@ -1,109 +0,0 @@ -= Administering shelving locations = -:toc: - -== Creating new shelving locations == - -. 
Click _Administration_. -. Click _Local Administration_. -. Click _Shelving Locations Editor_. -. Type the name of the shelving location. -. In _OPAC Visible_, choose whether you would like items in this shelving location - to appear in the catalog. -. In _Hold Verify_, choose whether staff must confirm that an item in this shelving - location may be used to fill a hold before the hold is captured. -. In _Checkin Alert_, choose whether you would like a routing alert to appear - when an item in this location is checked in. This is intended for special - locations, such as 'Display', that may require special handling, or that - temporarily contain items that are not normally in that location. -+ -NOTE: By default, these alerts will only display when an item is checked in, _not_ -when it is used to record an in-house use. -+ -To also display these alerts when an item in your location is scanned for in-house -use, go to Administration > Local Administration > Library Settings Editor and -set _Display shelving location check in alert for in-house-use_ to True. -+ -. If you would like a prefix or suffix to be added to the call numbers of every - volume in this location, enter it. -. If you would like, add a URL to the _URL_ field. When a URL is entered in - this field, the associated shelving location will display as a link in the Public - Catalog summary display. This link can be useful for retrieving maps or other - directions to the shelving location to aid users in finding material. -. If you would like to override any item-level circulation/hold policies to - make sure that items in your new location can't circulate or be holdable, - choose _No_ in the appropriate field. If you choose _Yes_, Evergreen will - use the typical circulation and hold policies to determine circulation - abilities. - -== Deleting shelving locations == - -You may only delete a shelving location if: - -. it doesn't contain any items, or -. it only contains deleted items. - -Evergreen preserves shelving locations in the database, so no statistical information -is lost when a shelving location is deleted.
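If a deletion fails because the location still appears to be in use, it can help to check directly in the database. The query below counts undeleted items attached to a shelving location; the table and column names follow Evergreen's `asset` schema, but verify them against your version, and replace the `<location-id>` placeholder with the ID of the location in question:

```sql
-- Count undeleted items still attached to a shelving location.
SELECT COUNT(*)
  FROM asset.copy
 WHERE location = <location-id>
   AND NOT deleted;
```

A result of 0 means the location contains no undeleted items and should be safe to remove.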
- -== Modifying shelving location order == - -. Go to _Administration_. -. Go to _Local Administration_. -. Click _Shelving Location Order_. -. Drag and drop the locations until you are satisfied with their order. -. Click _Apply changes_. - - -== Shelving location groups == - -.Use case -**** -Mayberry Public Library provides a scope allowing users to search for all -children's materials in their library. The library's children's scope -incorporates several shelving locations used at the library, including Picture -Books, Children's Fiction, Children's Non-Fiction, Easy Readers, and Children's -DVDs. The library also builds a similar scope for YA materials that incorporates -several shelving locations. -**** - -This feature allows staff to create and name sets of shelving locations to use as -a search filter in the catalog. OPAC-visible groups will display within the -library selector in the Public Catalog. When a user selects a group -and performs a search, the set of results will be limited to records that have -items in one of the shelving locations within the group. Groups can live at any -level of the library hierarchy and may include shelving locations from any parent -org unit or child org unit. - -NOTE: To work with Shelving Location Groups, you will need the ADMIN_COPY_LOCATION_GROUP -permission. - -=== Create a Shelving Location Group === - -. Click Administration -> Local Administration -> Shelving Location Groups. -. At the top of the screen is a drop down menu that displays the org unit tree. - Select the unit within the org tree to which you want to add a shelving location group. - The shelving locations associated with the org unit appear in the Shelving Locations column. -. In the column called _Location Groups_, click _New_. -. Choose how you want the shelving location group to display to patrons in the catalog's - org unit tree in the OPAC. 
By default, when you add a new shelving location group, the - group displays in the org unit tree beneath any branches or sub-libraries of its - parental org unit. If you check the box adjacent to Display above orgs, then the - group will appear above the branches or sub-libraries of its parental org unit. -. To make the shelving location group visible to users searching the public catalog, check - the box adjacent to Is OPAC visible? -. Enter a _Name_ for the shelving location group. -. Click Save. The name of the Shelving Location Group appears in the Location Groups column. -. Select the shelving locations that you want to add to the group, and click Add. The shelving - locations will populate the middle column, Group Entries. -. The shelving location group is now visible in the org unit tree in the catalog. Search - the catalog to retrieve results from any of the shelving locations that you added to - the shelving location group. - -=== Order Shelving Location Groups === - -If you create more than one shelving location group, then you can order the groups in the -org unit tree. - -. Click Administration -> Local Administration -> Shelving Location Groups. -. Three icons appear next to each location group. Click on the icons to drag the shelving - location groups into the order in which you would like them to appear in the catalog. -. Search the catalog to view the new order of the shelving location groups. - diff --git a/docs-antora/modules/admin/pages/copy_statuses.adoc b/docs-antora/modules/admin/pages/copy_statuses.adoc deleted file mode 100644 index 915ea67926..0000000000 --- a/docs-antora/modules/admin/pages/copy_statuses.adoc +++ /dev/null @@ -1,93 +0,0 @@ -= Item Status = -:toc: - -indexterm:[copy status] - -To navigate to the item status editor from the staff client menu, select -*Administration* -> *Server Administration* -> *Item Statuses*. - -The Item Status Editor is used to add, edit, and delete statuses of items in -your system.
- -For each status, you can set the following properties: - -* Holdable - If checked, users can place holds on items in this status, -provided there are no other flags or rules preventing holds. If unchecked, -users cannot place holds on items in this status. -* OPAC Visible - If checked, items in this status will be visible in the -public catalog. If unchecked, items in this status will not be visible in the -public catalog, but they will be visible when using the catalog in the staff -client. -* Sets item active - If checked, moving an item that does not yet have an -active date to this status will set the active date. If the item already has -an active date, then no changes will be made to the active date. If unchecked, -this status will never set the item's active date. -* Is Available - If checked, items with this status will appear in catalog -searches where "limit to available" is selected as a search filter. Also, -items with this status will check out without status warnings. -By default, the "Available" and "Reshelving" statuses have the "Is Available" -flag set. The flag may be applied to local/custom statuses via the item status -admin interface. - -Evergreen comes pre-loaded with a number of item statuses. 
- -.Stock item statuses and default settings -[options="header"] -|============================================== -|ID|Name|Holdable|OPAC Visible|Sets copy active -|0|Available|true|true|true -|1|Checked out|true|true|true -|2|Bindery|false|false|false -|3|Lost|false|false|false -|4|Missing|false|false|false -|5|In process|true|true|false -|6|In transit|true|true|false -|7|Reshelving|true|true|true -|8|On holds shelf|true|true|true -|9|On order|true|true|false -|10|ILL|false|false|true -|11|Cataloging|false|false|false -|12|Reserves|false|true|true -|13|Discard/Weed|false|false|false -|14|Damaged|false|false|false -|15|On reservation shelf|false|false|true -|16|Long Overdue|false|false|false -|17|Lost and Paid|false|false|false -|============================================== - -== Adding Item Statuses == - -. In the _New Status_ field, enter the name of the new status you wish to add. -. Click _Add_. -. Locate your new status and check the _Holdable_ check box if you wish to allow -users to place holds on items in this status. Check _OPAC Visible_ if you wish -for this status to appear in the public catalog. Check _Sets copy active_ if you -wish for this status to set the active date for new items. -. Click _Save Changes_ at the bottom of the screen to save changes to the new -status. - -image::media/copy_status_add.png[Adding item statuses] - -== Deleting Item Statuses == - -. Highlight the statuses you wish to delete. Ctrl-click to select more than one -status. -. Click _Delete Selected_. -. Click _OK_ to verify. - -image::media/copy_status_delete.png[Deleting item statuses] - -[NOTE] -You will not be able to delete statuses if items currently exist with that -status. - -== Editing Item Statuses == - -. Double click on a status name to change its name. Enter the new name. - -. To change whether a status is holdable, visible in the OPAC, or sets the -item's active date, check or uncheck the relevant checkbox. - -. 
Once you have finished editing the statuses, remember to click Save Changes. - -image::media/copy_status_edit.png[Editing item statuses] diff --git a/docs-antora/modules/admin/pages/copy_tags_admin.adoc b/docs-antora/modules/admin/pages/copy_tags_admin.adoc deleted file mode 100644 index 79697b0a9e..0000000000 --- a/docs-antora/modules/admin/pages/copy_tags_admin.adoc +++ /dev/null @@ -1,70 +0,0 @@ -= Item Tags (Digital Bookplates) = -:toc: - -indexterm:[copy tags] - -Item Tags allow staff to apply custom, pre-defined labels or tags to items. Item tags are visible in the public catalog and are searchable in both the staff client and public catalog based on configuration. This feature was designed to be used for Digital Bookplates to attach donation or memorial information to items, but may be used for broader purposes to tag items. - - -== Administration == - -New Permissions: - -* ADMIN_COPY_TAG_TYPES: required to create a new tag type under *Server Administration->Item Tag Types* -* ADMIN_COPY_TAG: required to create a new tag under *Local Administration->Item Tags* - -NOTE: The existing permission UPDATE_COPY is required to assign a tag to an item - - -New Library Settings: - -* OPAC: Enable Digital Bookplate Search: when set to _True_ for a given org unit, the digital bookplate search option will be available in the catalog. - - -== Creating Item Tags == -There are two components to this feature: Item Tag Types and Item Tags. - -Item Tag Types are used to define the type of tag, such as “Bookplates” or “Local History Notes”, as well as the organizational unit scope for use of the tag type. - -Item Tags are associated with an Item Tag Type and are used to configure the list of tags that can be applied to items, such as a list of memorial or donation labels, that are applicable to a particular organizational unit. - -=== Create Item Tag Types === - -. Go to *Administration->Server Administration->Item Tag Types*. -.
In the upper left hand corner, click *New Record*. A dialog box will appear. Assign the following to create a new Item Tag Type: -.. *Code*: a code to identify the item tag type. -.. *Label*: a label that will appear in drop down menus to identify the item tag type. -.. *Owner*: the organizational unit that can see and use the item tag type. -. Click *Save* and the new Item Tag Type will appear in the list. Next create the associated Item Tags. - -image::media/copytags1.PNG[Create Item Tag Types] - -image::media/copytags2.PNG[Item Tag Types Grid View] - -=== Create Item Tags === - -. Go to *Administration->Local Administration->Item Tags*. -. In the upper left hand corner, click *New Record*. A dialog box will appear. Assign the following to create a new Item Tag: -.. *Item Tag Type*: select the Item Tag Type with which you want to associate the new Item Tag. -.. *Label*: assign a label to the new item tag. -.. *Value*: assign a value to the new item tag. This will display in the catalog. -.. *Staff Note*: a note may be added to guide staff on when to apply the item tag. -.. *Is OPAC Visible?*: If an item tag is OPAC Visible, it can be searched for and viewed in the OPAC and the staff catalog. If an item tag is not OPAC Visible, it can only be searched for and viewed in the staff catalog. -.. *Owner*: select the organization unit at which this tag can be seen and used. -. Click *Save* and the new Item Tag will appear in the list. - -image::media/copytags3.PNG[Create Item Tags] - -image::media/copytags4.PNG[Item Tags Grid View] - - -== Managing Item Tags == - -=== Editing Tags === - -Existing item tags can be edited by selecting a tag and clicking *Actions->Edit Record* or right-clicking on a tag and selecting *Edit Record*. The dialog box will appear and you can modify the item tag. Click *Save* to save any changes. Changes will be propagated to any items that the tag has been attached to.
- -=== Deleting Tags === - -Existing item tags can be deleted by selecting a tag and clicking *Actions->Delete Record* or right-clicking on a tag and selecting *Delete Record*. Deleting a tag will delete the tag from any items it was attached to in the catalog. - diff --git a/docs-antora/modules/admin/pages/desk_payments.adoc b/docs-antora/modules/admin/pages/desk_payments.adoc deleted file mode 100644 index 25b861af2d..0000000000 --- a/docs-antora/modules/admin/pages/desk_payments.adoc +++ /dev/null @@ -1,37 +0,0 @@ -= Cash Reports = -:toc: - -Cash reports are useful for quickly getting information about money that -your library has collected from patrons. This can be helpful in a few -different scenarios, such as: - -. Reconciling a cash drawer at the end of the day. -. Seeing how popular a specific payment type is (perhaps when evaluating -a food-for-fines program). - -To use the cash reports, - -. Under the _Administration_ menu, choose _Local Administration_. -. Click _Cash reports_. -. Select the time period and library you are interested in. This -interface defaults to showing payments accepted during the current day. -. Click _Submit_. - -[TIP] -==== -You can click on the names of columns to sort the reports. -==== - -[TIP] -==== -You need the _VIEW_TRANSACTION_ permission to view these reports. -==== - -[NOTE] -==== -These payments are divided into two different types: _Desk payments_ -- -in which a staff member simply accepted a credit card, check, or cash -payment -- and _User payments_ -- in which a staff member had to make a -specific decision about whether to accept a payment of goods or work; or -forgave or granted credit to a particular patron. 
-==== diff --git a/docs-antora/modules/admin/pages/ebook_api.adoc b/docs-antora/modules/admin/pages/ebook_api.adoc deleted file mode 100644 index adf79e8cf6..0000000000 --- a/docs-antora/modules/admin/pages/ebook_api.adoc +++ /dev/null @@ -1,123 +0,0 @@ -== Ebook API integration == - -Evergreen supports integration with third-party APIs provided by OverDrive and -OneClickdigital. - -When ebook API integration is enabled, the following features are supported: - - * Bibliographic records from these vendors that appear in your -public catalog will include vendor holdings and availability information. - * Patrons can check out and place holds on OverDrive and OneClickdigital ebook -titles from within the public catalog. - * When a user is logged in, the public catalog dashboard and My Account -interface will include information about that user's checkouts and holds for -supported vendors. - -WARNING: The ability to check out and place holds on ebook titles is an experimental -feature in 3.0. It is not recommended for production use without careful -testing. - -For API integration to work, you need to request API access from the -vendor and configure your Evergreen system according to the instructions -below. You also need to configure the new `open-ils.ebook_api` service. - -This feature assumes that you are importing MARC records supplied by the -vendor into your Evergreen system, using Vandelay or some other MARC -import method. This feature does not search the vendor's online -collections or automatically import vendor records into your system; it -merely augments records that are already in Evergreen. - -A future Evergreen release will add the ability for users to check out -titles, place holds, etc., directly via the public catalog. - -=== Ebook API service configuration === -This feature uses the new `open-ils.ebook_api` OpenSRF service. This -service must be configured in your `opensrf.xml` and `opensrf_core.xml` -config files for ebook API integration to work. 
See -`opensrf.xml.example` and `opensrf_core.xml.example` for guidance. - -=== OverDrive API integration === -Before enabling OverDrive API integration, you will need to request API -access from OverDrive. OverDrive will provide the values to be used for -the following new org unit settings: - - * *OverDrive Basic Token*: The basic token used for API client - authentication. To generate your basic token, combine your client - key and client secret provided by OverDrive into a single string - ("key:secret"), and then base64-encode that string. On Linux, you - can use the following command: `echo -n "key:secret" | base64 -` - * *OverDrive Account ID*: The account ID (a.k.a. library ID) for your - OverDrive API account. - * *OverDrive Website ID*: The website ID for your OverDrive API - account. - * *OverDrive Authorization Name*: The authorization name (a.k.a. - library name) designated by OverDrive for your library. If your - OverDrive subscription includes multiple Evergreen libraries, you - will need to add a separate value for this setting for each - participating library. - * *OverDrive Password Required*: If your library's OverDrive - subscription requires the patron's PIN (password) to be provided - during patron authentication, set this setting to "true." If you do - not require the patron's PIN for OverDrive authentication, set this - setting to "false." (If set to "true," the password entered by a - patron when logging into the public catalog will be cached in plain text in - memcached.) - * *OverDrive Discovery API Base URI* and *OverDrive Circulation API - Base URI*: By default, Evergreen uses OverDrive's production API, so - you should not need to set a value for these settings. If you want - to use OverDrive's integration environment, you will need to add the - appropriate base URIs for the discovery and circulation APIs. See - OverDrive's developer documentation for details. 
- * *OverDrive Granted Authorization Redirect URI*: Evergreen does not - currently support granted authorization with OverDrive, so this - setting is not currently in use. - -For more information, consult the -https://developer.overdrive.com/docs/getting-started[OverDrive API -documentation]. - -To enable OverDrive API integration, adjust the following public catalog settings -in `config.tt2`: - - * `ebook_api.enabled`: set to "true". - * `ebook_api.overdrive.enabled`: set to "true". - * `ebook_api.overdrive.base_uris`: list of regular expressions - matching OverDrive URLs found in the 856$9 field of older OverDrive - MARC records. As of fall 2016, OverDrive's URL format has changed, - and the record identifier is now found in the 037$a field of their - MARC records, with "OverDrive" in 037$b. Evergreen will check the - 037 field for OverDrive record identifiers; if your system includes - older-style OverDrive records with the record identifier embedded in - the 856 URL, you need to specify URL patterns with this setting. - -=== OneClickdigital API integration === -Before enabling OneClickdigital API integration, you will need to -request API access from OneClickdigital. OneClickdigital will provide -the values to be used for the following new org unit settings: - - * *OneClickdigital Library ID*: The identifier assigned to your - library by OneClickdigital. - * *OneClickdigital Basic Token*: Your client authentication token, - supplied by OneClickdigital when you request access to their API. - -For more information, consult the -http://developer.oneclickdigital.us/[OneClickdigital API documentation]. - -To enable OneClickdigital API integration, adjust the following public catalog -settings in `config.tt2`: - - * `ebook_api.enabled`: set to "true". - * `ebook_api.oneclickdigital.enabled`: set to "true". - * `ebook_api.oneclickdigital.base_uris`: list of regular expressions - matching OneClickdigital URLs found in the 859$9 field of your MARC - records. 
Evergreen uses the patterns specified here to extract - record identifiers for OneClickdigital titles. - -=== Additional configuration === -Evergreen communicates with third-party vendor APIs using the new -`OpenILS::Utils::HTTPClient` module. This module is configured using -settings in `opensrf.xml`. The default settings should work for most -environments by default, but you may need to specify a custom location -for the CA certificates installed on your server. You can also disable -SSL certificate verification on HTTPClient requests altogether, but -doing so is emphatically discouraged. diff --git a/docs-antora/modules/admin/pages/ebook_api_service.adoc b/docs-antora/modules/admin/pages/ebook_api_service.adoc deleted file mode 100644 index 6b5546f613..0000000000 --- a/docs-antora/modules/admin/pages/ebook_api_service.adoc +++ /dev/null @@ -1,11 +0,0 @@ -= ebook_api service = - -The `open-ils.ebook_api` service looks up title and -patron information from specified ebook vendor APIs. - -The Evergreen catalog accesses data from this service -through OpenSRF JS bindings. - -The `OpenILS::Utils::HTTPClient` module is required -for this service. - diff --git a/docs-antora/modules/admin/pages/emergency_closing_handler.adoc b/docs-antora/modules/admin/pages/emergency_closing_handler.adoc deleted file mode 100644 index 7901f1eea2..0000000000 --- a/docs-antora/modules/admin/pages/emergency_closing_handler.adoc +++ /dev/null @@ -1,82 +0,0 @@ -= Emergency Closing Handler = -:toc: - -== Introduction == - -The *Closed Dates Editor* now includes an Emergency Closing feature that allows libraries to shift due dates and expiry dates to the next open day. Overdue fines will be automatically voided for the day(s) the library is marked closed. Once an Emergency Closing is processed, it is permanent and cannot be rolled back. 
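The date-shifting rule described above — due dates and expiry dates that fall on a closed day move to the next open day — can be sketched as follows. This is an illustrative model only, not Evergreen's implementation; representing a closing as a set of dates is an assumption for the example:

```python
from datetime import date, timedelta

def next_open_day(day, closed_days):
    """Advance a date until it no longer falls on a closed day."""
    while day in closed_days:
        day += timedelta(days=1)
    return day

# Example: a two-day emergency closing
closed = {date(2020, 3, 15), date(2020, 3, 16)}

# A due date inside the closing shifts to the next open day;
# one outside the closing is untouched.
print(next_open_day(date(2020, 3, 15), closed))  # 2020-03-17
print(next_open_day(date(2020, 3, 20), closed))  # 2020-03-20
```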
- -== Administration == - -=== Permissions === - -To create an Emergency Closing, the EMERGENCY_CLOSING permission needs to be granted to the user for all locations to be affected by an emergency closing. - -== Create an emergency closing == - -The Emergency Closing feature is located within the *Closed Dates Editor* screen, which can be accessed via *Administration -> Local Administration -> Closed Dates Editor*. - -Within the closed dates editor screen, scheduled closed dates are listed and can be scoped by specific org unit and date. The date filter in the upper right-hand corner will show upcoming library closings on or after the selected date in the filter. - -image::media/ECHClosedDatesEditorAddClosing.png[Add Closing] - -Select *Add closing* to begin the emergency closing process. A pop-up will appear with fields to fill out. - -image::media/ECHLibraryClosingConstruction.png[Create Closing for One Full Day] - -*Library* - Using the dropdown window, select the org unit which will be closing. - -*Apply to all of my libraries* - When selected, this checkbox will apply the emergency closing date to the selected org unit and any associated child org unit(s). - -*Closing Type* - The following Closing Type options are available in a drop down window: -* One full day -* Multiple days -* Detailed closing - -The _Multiple days_ and _Detailed closing_ options will display different date options (e.g. start and end dates) in the next field if selected. - -image::media/ECHLibraryClosingMultipleDays.png[Create Closing for Multiple Days] - -image::media/ECHLibraryClosingDetailed.png[Create Detailed Closing] - -*Date* - Select which day or days the library will be closed. - -[NOTE] -======================== -*NOTE* The Closed Dates editor is now date-aware. 
If a selected closed date is either in the past, or nearer in time than the end of the longest configured circulation period, staff will see a notification that says "Possible Emergency Closing" in both the dialog box and in the bottom right-hand corner. -======================== - -*Reason* - Label the reason for library closing accordingly, e.g. 3/15 Snow Day - -=== Emergency Closing Handler === - -When a date is chosen that is nearer in time than the end of the longest configured circulation period or in the past, then a *Possible Emergency Closing* message will appear in the pop-up and in the bottom right-hand corner of the screen. Below the Possible Emergency Closing message, two checkboxes appear: *Emergency* and *Process Immediately*. - -[NOTE] -========================= -*NOTE* The *Emergency* checkbox must still be manually selected in order to actually set the closing as an Emergency Closing. -========================= - -By selecting the *Emergency* checkbox, the system will void any overdue fines incurred for that emergency closed day or days and push back any of the following dates to the next open day as determined by the library’s settings: -* item due dates -* shelf expire times -* booking start times - -image::media/ECHClosingSnowDay.png[Create Emergency Closing] - -When selecting the *Process Immediately* checkbox, Evergreen will enact the Emergency Closing changes immediately once the Emergency Closed Date information is saved. If Process Immediately is not selected at the time of creation, staff will need to go back and edit the closing later, or the Emergency processing will not occur. - -Upon clicking *OK*, a progress bar will appear on-screen. After completion, the Closed Dates Editor screen will update, and under the Emergency Closing Processing Summary column, the number of affected/processed Circulations, Holds, and Reservations will be listed. 
- -image::media/ECHLibraryClosingDone.png[Emergency Closing Processing Complete] - -=== Editing Closing to process Emergency Closing === - -If *Process immediately* is not selected during an Emergency Closing event creation, staff will need to edit the existing Emergency Closing event and process the affected items. - -In the Closed Dates Editor screen, select the existing Emergency Closing event listed. Then, go to *Actions -> Edit closing*. - -image::media/ECHEditClosing.png[Edit Closing] - -A pop-up display will appear in the same format as creating a Closed Dates event, with the Emergency checkbox checked and Process Immediately un-checked at the bottom. Select the *Process immediately* checkbox, and then click *OK*. A progress bar will appear on-screen, the Emergency Closing processing will occur, and the Closed Dates Editor display will update. - -image::media/ECHEditClosingModal.png[Edit Closing Pop-Up] diff --git a/docs-antora/modules/admin/pages/floating_groups.adoc b/docs-antora/modules/admin/pages/floating_groups.adoc deleted file mode 100644 index 6072fb7d4c..0000000000 --- a/docs-antora/modules/admin/pages/floating_groups.adoc +++ /dev/null @@ -1,120 +0,0 @@ -= Floating Groups = -:toc: - -Before floating groups, an item either floated or it did not. If it floated, it floated everywhere, with no restrictions. - -With floating groups, where an item will float is defined by the group to which it has been assigned. - -== Floating Groups == - -Each floating group comes with a name and a manual flag, plus zero or more group members. The name is used solely for selection and display purposes. - -The manual flag dictates whether or not the "Manual Floating Active" checkin modifier needs to be active for an item to float. This allows for greater control over when items float. It also prevents automated checkins via SIP2 from triggering floats.
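As a rough illustration of how the member fields described in the next sections (org unit, stop depth, max depth, exclude) interact, here is a simplified model in Python. The sample org tree, field names, and helper functions are assumptions for illustration only — this is not Evergreen's actual implementation:

```python
# Simplified model of floating group rules (illustrative, not Evergreen code).
# Sample org tree: CONS (depth 0) -> SYS1/SYS2 (depth 1) -> branches (depth 2).
PARENT = {"CONS": None, "SYS1": "CONS", "SYS2": "CONS",
          "BR1": "SYS1", "BR2": "SYS1", "BR3": "SYS2"}

def ancestors(org):
    chain = []
    org = PARENT[org]
    while org is not None:
        chain.append(org)
        org = PARENT[org]
    return chain

def depth(org):
    return len(ancestors(org))

def lca_depth(a, b):
    """Depth of the highest point traversed between two org units."""
    line = [a] + ancestors(a)
    for org in [b] + ancestors(b):
        if org in line:
            return depth(org)
    return 0

def member_applies(m, circ_lib, checkin_lib):
    # The checkin library must fall inside the member's org unit subtree...
    if checkin_lib != m["org"] and m["org"] not in ancestors(checkin_lib):
        return False
    # ...cut off at max depth, if one is set...
    if m["max_depth"] is not None and depth(checkin_lib) > m["max_depth"]:
        return False
    # ...and the path from circ library to checkin library may not rise
    # above the stop depth.
    return lca_depth(circ_lib, checkin_lib) >= m["stop_depth"]

def item_floats(members, circ_lib, checkin_lib):
    applicable = [m for m in members if member_applies(m, circ_lib, checkin_lib)]
    if any(m["exclude"] for m in applicable):
        return False  # excludes always take priority
    return bool(applicable)

# "Float Within System": stop depth 1 keeps the item inside its own system.
group = [{"org": "CONS", "stop_depth": 1, "max_depth": None, "exclude": False}]
print(item_floats(group, "BR1", "BR2"))  # True: stays within SYS1
print(item_floats(group, "BR1", "BR3"))  # False: path rises to CONS (depth 0)
```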
- -=== Floating Group Members === - -Each member of a floating group references an org unit and has a stop depth, an optional max depth, and an exclude flag. - -=== Org Unit === - -The org unit and all descendants are included, unless max depth is set, in which case the tree is cut off at the max depth. - -=== Stop Depth === - -The stop depth is the highest point from the current item circ library to the checkin library for the item that will be traversed. If the item has to go higher than the stop depth on the tree the member rule in question is ignored. - -=== Max Depth === - -As mentioned with the org unit, the max depth is the furthest down on the tree from the org unit that gets included. This is based on the entire tree, not just off of the org unit. So in the default tree a max depth of 1 will stop at the system level no matter if org unit is set to CONS or SYS1. - -=== Exclude === - -Exclude, if set, causes floating to not happen for the member. Excludes always take priority, so you can remove an org unit from floating without having to worry about other rules overriding it. - -== Examples == - -=== Float Everywhere === - -This is a default floating rule to emulate the previous floating behavior for new installs and upgrades. - -One member: - -* Org Unit: CONS -* Stop Depth: 0 -* Max Depth: Unset -* Exclude: Off - -=== Float Within System === - -This would permit an item to float anywhere within a system, but would return to the system if it was returned elsewhere. - -One member: - -* Org Unit: CONS -* Stop Depth: 1 -* Max Depth: Unset -* Exclude: Off - -=== Float To All Branches === - -This would permit an item to float to any branch, but not to sublibraries or bookmobiles. - -One member: - -* Org Unit: CONS -* Stop Depth: 0 -* Max Depth: 2 -* Exclude: Off - -=== Float To All Branches Within System === - -This would permit an item to float to any branch in a system, but not to sublibraries or bookmobiles, and returning to the system if returned elsewhere. 
- -One member: - -* Org Unit: CONS -* Stop Depth: 1 -* Max Depth: 2 -* Exclude: Off - -=== Float Between BR1 and BR3 === - -This would permit an item to float between BR1 and BR3 specifically, excluding sublibraries and bookmobiles. - -It would consist of two members, identical other than the org unit: - -* Org Unit: BR1 / BR3 -* Stop Depth: 0 -* Max Depth: 2 -* Exclude: Off - -=== Float Everywhere Except BM1 === - -This would allow an item to float anywhere except for BM1. It accomplishes this with two members. - -The first includes all org units, just like Float Everywhere: - -* Org Unit: CONS -* Stop Depth: 0 -* Max Depth: Unset -* Exclude: Off - -The second excludes BM1: - -* Org Unit: BM1 -* Stop Depth: 0 -* Max Depth: Unset -* Exclude: On - -That works because excludes are applied first. - -=== Float into, but not out of, BR2 === - -This would allow an item to float into BR2, but once there it would never leave. Allowing items to float to, but not from, a single library is an unusual requirement, but it is possible. It takes advantage of the fact that the rules say where an item can float *to*; outside of the stop depth, they do not care where the item is floating *from*. - -One member: - -* Org Unit: BR2 -* Stop Depth: 0 -* Max Depth: Unset -* Exclude: Off diff --git a/docs-antora/modules/admin/pages/hold_driven_recalls.adoc b/docs-antora/modules/admin/pages/hold_driven_recalls.adoc deleted file mode 100644 index 7de6254d92..0000000000 --- a/docs-antora/modules/admin/pages/hold_driven_recalls.adoc +++ /dev/null @@ -1,50 +0,0 @@ -= Hold-driven recalls = -:toc: - -indexterm:[hold-driven recalls] -indexterm:[circulation, recalls, hold-driven] - -In academic libraries, it is common for groups like faculty and graduate -students to have extended loan periods (for example, 120 days), while -others have more common loan periods such as 3 weeks.
In these environments, -it is desirable to have a hold placed on an item that has been loaned out -for an extended period to trigger a 'recall', which: - - . Truncates the loan period - . Sets the remaining available renewals to 0 - . 'Optionally': Changes the fines associated with overdues for the new due - date - . 'Optionally': Notifies the current patron of the recall, including the - new due date and fine level - -== Enabling hold-driven recalls == - -By default, holds do not trigger recalls. To enable hold-driven recalls -of circulating items, library settings must be changed as follows: - - . Click *Administration* -> *Local Administration* -> *Library Settings Editor.* - . Set the *Recalls: Circulation duration that triggers a recall - (recall threshold)* setting. The recall threshold is specified as an - interval (for example, "21 days"); any items with a loan duration of - less than this interval are not considered for a recall. - . Set the *Recalls: Truncated loan period (return interval)* setting. - The return interval is specified as an interval (for example, "7 days"). - The due date on the recalled item is changed to be the greater of either - the recall threshold or the return interval. - . 'Optionally': Set the *Recalls: An array of fine amount, fine interval, - and maximum fine* setting. If set, this applies the specified fine rules - to the current circulation period for the recalled item. - -When a hold is placed and no available items are found by the hold targeter, -the recall logic checks to see if the recall threshold and return interval -settings are set; if so, then the hold targeter checks the currently -checked-out items to determine if any of the currently circulating items at -the designated pickup library have a loan duration longer than the recall -threshold. If so, then the eligible item with the due date nearest to the -current date is recalled.
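The targeting logic in the preceding paragraph — only loans longer than the recall threshold are eligible, and among those the one with the due date nearest to today is recalled — can be sketched roughly like this. This is an illustrative sketch only; the dictionary shape is a stand-in, not Evergreen's data model:

```python
from datetime import date, timedelta

def choose_recall_target(circs, recall_threshold, today):
    """Pick the circulation to recall: only loans with a duration longer
    than the recall threshold are eligible, and among those the one whose
    due date is nearest to today wins."""
    eligible = [c for c in circs if c["duration"] > recall_threshold]
    if not eligible:
        return None
    return min(eligible, key=lambda c: abs(c["due"] - today))

circs = [
    {"patron": "A", "duration": timedelta(days=120), "due": date(2020, 12, 1)},
    {"patron": "B", "duration": timedelta(days=21),  "due": date(2020, 9, 10)},  # not over threshold
    {"patron": "C", "duration": timedelta(days=120), "due": date(2020, 10, 1)},
]
target = choose_recall_target(circs, timedelta(days=21), date(2020, 9, 5))
print(target["patron"])  # C: eligible, and due date nearest to today
```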
- -== Editing the item recall notification email template == - -The template for the item recall notification email is contained in the -'Item Recall Email Notice' template, found under *Administration* -> *Local -Administration* -> *Notifications / Action Triggers*. diff --git a/docs-antora/modules/admin/pages/hold_targeter_service.adoc b/docs-antora/modules/admin/pages/hold_targeter_service.adoc deleted file mode 100644 index 783375401f..0000000000 --- a/docs-antora/modules/admin/pages/hold_targeter_service.adoc +++ /dev/null @@ -1,4 +0,0 @@ -= hold-targeter service = - -The `open-ils.hold-targeter` service is used to target holds. - diff --git a/docs-antora/modules/admin/pages/hours.adoc b/docs-antora/modules/admin/pages/hours.adoc deleted file mode 100644 index 5aefbf27bc..0000000000 --- a/docs-antora/modules/admin/pages/hours.adoc +++ /dev/null @@ -1,9 +0,0 @@ -=== Setting regular library hours === - -You may do this in _Administration_ > _Server Administration_ > _Organizational -Units_. - -The *Hours of Operation* tab is where you enter regular, weekly hours. Holiday -and other closures are set in the *Closed Dates Editor*. Hours of operation and -closed dates impact due dates and fine accrual. - diff --git a/docs-antora/modules/admin/pages/infrastructure_auth_browse.adoc b/docs-antora/modules/admin/pages/infrastructure_auth_browse.adoc deleted file mode 100644 index b89eed9b92..0000000000 --- a/docs-antora/modules/admin/pages/infrastructure_auth_browse.adoc +++ /dev/null @@ -1,37 +0,0 @@ -= Infrastructure Changes to Authority Browse = -:toc: - -As part of a larger development and consulting project to improve how authority records are used in public catalog browse, improvements have been made to how authority records are indexed in Evergreen. This will not result in any direct changes to the public catalog, but will create infrastructure for improvements to the browse list. 
Specifically, a configuration table will be used to specify how browse entries from authority records should be generated. This new table will supplement the existing authority control set configuration tables but will not replace them.

== Backend functionality ==

The new configuration table, authority.heading_field, specifies how headings can be extracted from MARC21 authority records. The general mechanism is similar to how config.metabib_field specifies how bibliographic records should be indexed: the XML representation of the MARC21 authority record is first passed through a stylesheet specified by the authority.heading_field definition, then XPath expressions are used to extract the heading for generating browse entries for the authority.simple_heading and metabib.browse_entry tables.

The initial set of definitions supplied for authority.heading_field uses the MARCXML to MADS 2.1 stylesheet; this helps ensure that heading strings extracted from authority records will match headings extracted from bibliographic records using the MODS stylesheet.

== Staff User Interface ==

An interface for configuring authority headings is available in Server Administration in the web-based staff client, under the name "Authority Headings Fields".

When navigated to, the interface looks like this:

image::media/auth_browse_infra1.png[]

Individual heading field definitions can be edited like this:

image::media/auth_browse_infra2.png[]

The available fields are:

* Heading type: this can be personal_name, corporate_name, meeting_name, uniform_title, named_event, chronology_term, topical_term, geographic_name, genre_form_term, or medium_of_performance_term.
* Heading purpose: this can be main, related, or variant, corresponding to authority record 1XX, 5XX, or 4XX fields respectively.
* Heading field label: Label for use by administrators.
* Heading XSLT Format: This core
* Heading XPath: Main XPath expression for selecting a part of the authority record to extract a heading from.
* Heading Component XPath: XPath expression for selecting parts of a heading string from the elements selected by Heading XPath.
* Related/Variant Type XPath: Expression used, for variant and related headings, for identifying the specific purpose of the heading (e.g., broader term, narrower term, etc.).
* Thesaurus XPath: Expression used for extracting the thesaurus that controls the heading.
* Thesaurus Override XPath: Expression used for identifying the thesaurus that controls a related heading.
* Joiner string: String used to stitch together components of the heading into a single display string. If not set, " -- " is used.

Unless you have non-standard authority records, it is recommended that changes to the authority heading field definitions be minimized.

diff --git a/docs-antora/modules/admin/pages/librarysettings.adoc b/docs-antora/modules/admin/pages/librarysettings.adoc
deleted file mode 100644
index 8a1bb2e5d6..0000000000
--- a/docs-antora/modules/admin/pages/librarysettings.adoc
+++ /dev/null
@@ -1,512 +0,0 @@
= Library Settings Editor =
:toc:

== Introduction ==
(((Library Settings Editor)))

With the *Library Settings Editor* one can optionally customize
Evergreen's behavior for a particular library or library system. For
descriptions of the available settings, see the xref:#settings_overview[Settings Overview] table below.

== Editing Library Settings ==

1. To open the *Library Settings Editor*, select *Admin* -> *Local
Administration* -> *Library Settings Editor*.
2. Settings that affect the same function or module are grouped
together. You may browse the list or search for the entry you want to
edit. Type your search term in the filter box.
You may clear or
re-apply the filter by clicking *Clear Filter* or *Filter*.
+
image::media/lse-1.png[Filtering the Library Settings Editor List]
+
3. To edit an entry, click *Edit* on its line.
4. Read the instructions in the pop-up window and make the change. Click
*Update Setting* to save the change, or click *Delete Setting* if you wish
to delete it.
+
image::media/lse-2.png[Editing a Library Setting]
+
5. Click *History* to view the previous values, if any, of a setting.
You can revert to an old value by clicking *revert*.
+
image::media/lse-3.png[Library Setting History]

NOTE: Please note that different settings may require different data
formats, which are listed in the xref:#settings_overview[Settings Overview] table. Refer to the xref:#data_types[Data Types] table at the
bottom of this page for more information.

== Exporting/Importing Library Settings ==
((("Exporting", "Library Settings Editor")))
((("Importing", "Library Settings Editor")))

1. To export library settings, click the *Export* button on the
*Library Settings Editor* screen. Click *Copy* in the pop-up window.
The settings displayed on the screen are copied to the clipboard.
Paste the contents into a text editor, such as Notepad, and save the file on
your computer.
+
image::media/lse-4.png[Exporting Library Settings]
+
2. To import library settings, click the *Import* button on the *Library
Settings Editor* screen. Open your previously saved file and copy the
contents. Click *Paste* in the pop-up window, then click *Submit*.
+
image::media/lse-5.png[Importing Library Settings]

[#settings_overview]
== Settings Overview ==

The settings are grouped into separate tables based on the functions
and modules they affect, in the same
sequence as you see them in the staff client. Each table describes the
available settings in the group and shows which can be changed on a
per-library basis.
At the bottom is the table with a list of - xref:#data_types[data types] with details about acceptable settings -values. - -((("Acquisitions", "Library Settings Editor"))) - -[[lse-acq]] -.Acquisitions -[options="header"] -|======== -|Setting|Description|Data type|Notes -|Allow funds to be rolled over without bringing money along|Allow funds to be rolled over without bringing the money along. This makes money left in the old fund disappear, modeling its return to some outside entity.|True/False| -|Allows patrons to create automatic holds from purchase requests.|Allows patrons to create automatic holds from purchase requests.|True/False| -|Default circulation modifier|Sets the default circulation modifier for use in acquisitions.|Text| -|Default copy location|Sets the default item location(shelving location) for use in acquisitions.|Selection list| -|Fund Spending Limit for Block|When the amount remaining in the fund, including spent money and encumbrances, goes below this percentage, attempts to spend from the fund will be blocked.|Number| -|Fund Spending Limit for Warning|When the amount remaining in the fund, including spent money and encumbrances, goes below this percentage, attempts to spend from the fund will result in a warning to the staff.|Number| -|Rollover Distribution Formulae Funds|During fiscal rollover, update distribution formulae to use new funds|True/False| -|Set copy creator as receiver|When receiving an item in acquisitions, set the item "creator" to be the staff that received the item|True/False| -|Temporary barcode prefix|Temporary barcode prefix added to temporary item records.|Text| -|Temporary call number prefix|Temporary call number prefix|Text| -|Upload Activate PO|Activate the purchase order by default during ACQ file upload|True/False| -|Upload Create PO|Create a purchase order by default during ACQ file upload|True/False| -|Upload Default Insufficient Quality Fall-Thru Profile|Default low-quality fall through profile used during ACQ 
file upload|Selection List|Match Only Merge and Full Overlay are the selections. -|Upload Default Match Set|Default match set to use during ACQ file upload|Selection List|Can be set to authority test or biblio -|Upload Default Merge Profile|Default merge profile to use during ACQ file upload|Selection List|Match Only Merge and Full Overlay are the selections. -|Upload Default Min. Quality Ratio|Default minimum quality ratio used during ACQ file upload|Number| -|Upload Default Provider|Default provider to use during ACQ file upload|Selection List|This list is populated by your Providers. -|Upload Import Non Matching by Default|Import non-matching records by default during ACQ file upload|True/False| -|Upload Load Items for Imported Records by Default|Load items for imported records by default during ACQ file upload|True/False| -|Upload Merge on Best Match by Default|Merge records on best match by default during ACQ file upload|True/False| -|Upload Merge on Exact Match by Default|Merge records on exact match by default during ACQ file upload|True/False| -|Upload Merge on Single Match by Default|Merge records on single match by default during ACQ file upload|True/False| -|======== - -((("Booking", "Library Settings Editor"))) -((("Cataloging", "Library Settings Editor"))) - -[[lse-cataloging]] -.Booking and Cataloging -[options="header"] -|====================== -|Setting|Description|Data type|Notes -|Allow email notify|Permit email notification when a reservation is ready for pick-up.|True/false| -|Elbow room|Elbow room specifies how far in the future you must make a reservation on an item if that item will have to transit to reach its pick-up location. 
It secondarily defines how soon a reservation on a given item must start before the check-in process will opportunistically capture it for the reservation shelf.|Duration| -|Default Classification Scheme|Defines the default classification scheme for new call numbers: 1 = Generic; 2 = Dewey; 3 = LC|Number|It has effect on call number sorting. -|Default copy status (fast add)|Default status when an item is created using the "Fast Item Add" interface.|Selection list|Default: In process -|Default copy status (normal)|Default status when an item is created using the normal volume/copy creator interface.|Selection list| -|Defines the control number identifier used in 003 and 035 fields||Text| -|Delete bib if all items are deleted via Acquisitions line item cancellation.||True/False| -|Delete volume with last copy|Automatically delete a volume when the last linked item is deleted.|True/False|Default TRUE -|Maximum Parallel Z39.50 Batch Searches|The maximum number of Z39.50 searches that can be in-flight at any given time when performing batch Z39.50 searches|Number| -|Maximum Z39.50 Batch Search Results|The maximum number of search results to retrieve and queue for each record + Z39 source during batch Z39.50 searches|Number| -|Spine and pocket label font family|Set the preferred font family for spine and pocket labels. You can specify a list of fonts, separated by commas, in order of preference; the system will use the first font it finds with a matching name. For example, "Arial, Helvetica, serif".|Text| -|Spine and pocket label font size|Set the default font size for spine and pocket labels|Number| -|Spine and pocket label font weight|Set the preferred font weight for spine and pocket labels. You can specify "normal", "bold", "bolder", or "lighter".|Text| -|Spine label left margin|Set the left margin for spine labels in number of characters.|Number| -|Spine label line width|Set the default line width for spine labels in number of characters. 
This specifies the boundary at which lines must be wrapped.|Number| -|Spine label maximum lines|Set the default maximum number of lines for spine labels.|Number| -|====================== - -((("Circulation", "Library Settings Editor"))) - -[[lse-circulation]] -.Circulation -[options="header"] -|=========== -|Setting|Description|Data type|Notes -|Allow others to use patron account (privacy waiver)|Add a note to a user account indicating that specified people are allowed to place holds, pick up holds, check out items, or view borrowing history for that user account.|True/False| -|Auto-extend grace periods|When enabled grace periods will auto-extend. By default this will be only when they are a full day or more and end on a closed date, though other options can alter this.|True/False| -|Auto-extending grace periods extend for all closed dates|It works when the above setting "Auto-Extend Grace Periods" is set to TRUE. If enabled, when the grace period falls on a closed date(s), it will be extended past all closed dates that intersect, but within the hard-coded limits (your library's grace period).|True/False| -|Auto-extending grace periods include trailing closed dates|It works when the above setting "Auto-Extend Grace Periods" is set to TRUE. If enabled, grace periods will include closed dates that directly follow the last day of the grace period. A backdated check-in with effective date on the closed dates will assume the item is returned after hours on the last day of the grace period.|True/False|Useful when libraries' book drop equipped with AMH. 
-|Block hold request if hold recipient privileges have expired||True/False| -|Cap max fine at item price|This prevents the system from charging more than the item price in overdue fines|True/False| -|Charge fines on overdue circulations when closed|When set to True, fines will be charged during scheduled closings and normal weekly closed days.|True/False| -|Checkout fills related hold|When a patron checks out an item and they have no holds that directly target the item, the system will attempt to find a hold for the patron that could be fulfilled by the checked out item and fulfills it. On the Staff Client you may notice that when a patron checks out an item under a title on which he/she has a hold, the hold will be treated as filled though the item has not been assigned to the patron's hold.|True/false| -|Checkout fills related hold on valid copy only|When filling related holds on checkout only match on items that are valid for opportunistic capture for the hold. Without this set a Title or Volume hold could match when the item is not holdable. With this set only holdable items will match.|True/False| -|Checkout auto renew age|When an item has been checked out for at least this amount of time, an attempt to check out the item to the patron that it is already checked out to will simply renew the circulation. 
If the checkout attempt is done within this time frame, Evergreen will prompt for choosing Renewing or Check-in then Checkout the item.|Duration| -|Display copy alert for in-house-use|Setting to true for an organization will cause an alert to appear with the copy's alert message, if it has one, when recording in-house-use for the copy.|True/False| -|Display copy location check in alert for in-house-use|Setting to true for an organization will cause an alert to display a message indicating that the item needs to be routed to its location if the location has check in alert set to true.|True/False| -|Do not change fines/fees on zero-balance LOST transaction|When an item has been marked lost and all fines/fees have been completely paid on the transaction, do not void or reinstate any fines/fees EVEN IF "Void lost item billing when returned" and/or "Void processing fee on lost item return" are enabled|True/False| -|Do not include outstanding Claims Returned circulations in lump sum tallies in Patron Display.|In the Patron Display interface, the number of total active circulations for a given patron is presented in the Summary sidebar and underneath the Items Out navigation button. This setting will prevent Claims Returned circulations from counting toward these tallies.|True/False| -|Hold shelf status delay|The purpose is to provide an interval of time after an item goes into the on-holds-shelf status before it appears to patrons that it is actually on the holds shelf. This gives staff time to process the item before it shows as ready-for-pick-up.|Duration| -|Include Lost circulations in lump sum tallies in Patron Display.|In the Patron Display interface, the number of total active circulations for a given patron is presented in the Summary sidebar and underneath the Items Out navigation button. 
This setting will include Lost circulations as counting toward these tallies.|True/False| -|Invalid patron address penalty|When set, if a patron address is set to invalid, a penalty is applied.|True/False| -|Item status for missing pieces|This is the Item Status to use for items that have been marked or scanned as having Missing Pieces. In the absence of this setting, the Damaged status is used.|Selection list| -|Load patron from Checkout|When scanning barcodes into Checkout auto-detect if a new patron barcode is scanned and auto-load the new patron.|True/False| -|Long-Overdue Check-In Interval Uses Last Activity Date|Use the long-overdue last-activity date instead of the due_date to determine whether the item has been checked out too long to perform long-overdue check-in processing. If set, the system will first check the last payment time, followed by the last billing time, followed by the due date. See also "Long-Overdue Max Return Interval"|True/False| -|Long-Overdue Items Usable on Checkin|Long-overdue items are usable on checkin instead of going "home" first|True/False| -|Long-Overdue Max Return Interval|Long-overdue check-in processing (voiding fees, re-instating overdues, etc.) will not take place for items that have been overdue for (or have last activity older than) this amount of time|Duration| -|Lost check-in generates new overdues|Enabling this setting causes retroactive creation of not-yet-existing overdue fines on lost item check-in, up to the point of check-in time (or max fines is reached). This is different than "restore overdue on lost", because it only creates new overdue fines. 
Use both settings together to get the full complement of overdue fines for a lost item|True/False| -|Lost items usable on checkin|Lost items are usable on checkin instead of going 'home' first|True/false| -|Max patron claims returned count|When this count is exceeded, a staff override is required to mark the item as claims returned.|Number| -|Maximum visible age of User Trigger Events in Staff Interfaces|If this is unset, staff can view User Trigger Events regardless of age. When this is set to an interval, it represents the age of the oldest possible User Trigger Event that can be viewed.|Duration| -|Minimum transit checkin interval|In-Transit items checked in this close to the transit start time will be prevented from checking in|Duration| -|Number of Retrievable Recent Patrons|Number of most recently accessed patrons that can be re-retrieved in the staff client. A value of 0 or less disables the feature. Defaults to 1.|Number| -|Patron merge address delete|Delete address(es) of subordinate user(s) in a patron merge.|True/False| -|Patron merge barcode delete|Delete barcode(s) of subordinate user(s) in a patron merge|True/False| -|Patron merge deactivate card|Mark barcode(s) of subordinate user(s) in a patron merge as inactive.|True/False| -|Patron Registration: Cloned patrons get address copy|If True, in the Patron editor, addresses are copied from the cloned user. If False, addresses are linked from the cloned user which can only be edited from the cloned user record.|True/False| -|Printing: custom JavaScript file|Full URL path to a JavaScript File to be loaded when printing. Should implement a print_custom function for DOM manipulation. 
Can change the value of the do_print variable to false to cancel printing.|Text| -|Require matching email address for password reset requests||True/False| -|Restore Overdues on Long-Overdue Item Return||True/False| -|Restore overdues on lost item return|If true when a lost item is checked in overdue fines are charged (up to the maximum fines amount)|True/False| -|Specify search depth for the duplicate patron check in the patron editor|When using the patron registration page, the duplicate patron check will use the configured depth to scope the search for duplicate patrons.|Number| -|Suppress hold transits group|To create a group of libraries to suppress Hold Transits among them. All libraries in the group should use the same unique value. Leave it empty if transits should not be suppressed.|Text| -|Suppress non-hold transits group|To create a group of libraries to suppress Non-Hold Transits among them. All libraries in the group should use the same unique value. Leave it empty if Non-Hold Transits should not be suppressed.|Text| -|Suppress popup-dialogs during check-in.|When set to True, no pop-up window for exceptions on check-in. But the accompanying sound will be played.|True/False| -|Target copies for a hold even if copy's circ lib is closed|If this setting is true at a given org unit or one of its ancestors, the hold targeter will target items from this org unit even if the org unit is closed (according to the Org Unit's closed dates.).|True/False|Set the value to True if you want to target items for holds at closed circulating libraries. Set the value to False, or leave it unset, if you do not want to enable this feature. 
-|Target copies for a hold even if copy's circ lib is closed IF the circ lib is the hold's pickup lib|If this setting is true at a given org unit or one of its ancestors, the hold targeter will target items from this org unit even if the org unit is closed (according to the Org Unit's closed dates) IF AND ONLY IF the item's circ lib is the same as the hold's pickup lib.|True/False| Set the value to True if you want to target items for holds at closed circulating libraries when the circulating library of the item and the pickup library of the hold are the same. Set the value to False, or leave it unset, if you do not want to enable this feature. -|Truncate fines to max fine amount||True/False|Default:TRUE -|Use Lost and Paid copy status|Use Lost and Paid copy status when lost or long overdue billing is paid|True/False| -|Void Long-Overdue Item Billing When Returned||True/False| -|Void Processing Fee on Long-Overdue Item Return||True/False| -|Void longoverdue item billing when claims returned||True/False| -|Void longoverdue item processing fee when claims returned||True/False| -|Void lost item billing when claims returned||True/False| -|Void lost item billing when returned|If true,when a lost item is checked in the item replacement bill (item price) is voided.|True/False| -|Void lost item processing fee when claims returned|When an item is marked claims returned that was marked Lost, the item processing fee will be voided.|True/False| -|Void lost max interval|Items that have been overdue this long will not result in lost charges being voided when returned, and the overdue fines will not be restored, either. Only applies if *Circ: Void lost item billing* or *Circ: Void processing fee on lost item* are true.|Duration| -|Void processing fee on lost item return|Void processing fee when lost item returned|True/False| -|Warn when patron account is about to expire|If set, the staff client displays a warning this number of days before the expiry of a patron account. 
Value is in number of days.|Duration|
|===========

((("Credit Card Processing", "Library Settings Editor")))

[[lse-credit-cards]]
.Credit Card Processing
[options="header"]
|======================
|Setting|Description|Data type|Notes
|AuthorizeNet login|Authorize.Net Username|Text|Obtain from Authorize.Net at http://www.authorize.net
|AuthorizeNet password|Authorize.Net Password|Text|Obtain from Authorize.Net
|AuthorizeNet server|Required if using a developer/test account with Authorize.Net.|Text|Enter the server name from Authorize.Net. This is for use with a test or developer account. If using live, leave blank.
|AuthorizeNet test mode|Places Authorize.Net transactions in Test Mode|True/False|
|Enable AuthorizeNet payments|This actually enables use of Authorize.Net|True/False|
|Enable PayPal payments|This will enable use of PayPal payments through the staff client.|True/False|
|Enable PayflowPro payments|This will enable the use of PayPal's Payflow Pro. This is not the same as PayPal.|True/False|
|Enable Stripe payments|This will enable the use of Stripe credit card processing.|True/False|https://stripe.com
|Name default credit processor|This might be "AuthorizeNet", "PayPal", "PayflowPro", or "Stripe".|Text|This sets the company that you will use to process the credit cards.
|PayPal login|Enter the PayPal login Username|Text|Obtain from PayPal
|PayPal password|Enter the PayPal password.|Text|Obtain from PayPal.
|PayPal signature|HASH Signature for PayPal|Text|Enter the HASH obtained from PayPal.
|PayPal test mode|Places the PayPal credit card payments in test mode.|True/False|This sends the transactions to PayPal's development.paypal.com server for testing only.
|PayflowPro login/merchant ID|Enter the PayflowPro Merchant ID|Text|Obtain from Payflow Pro Partner.
|PayflowPro partner|Enter the Partner ID from your Payflow Partner|Text|This is obtained from your Payflow Pro partner. This can be "PayPal" or "VeriSign", sometimes others.
-|PayflowPro password|Password for PayflowPro|Text|Obtain from Payflow Pro Partner -|PayflowPro test mode|Place Payflow Pro in test mode.|True/False|Do not really process transactions, but stay in test mode - uses pilot-payflowpro.paypal.com instead of the usual host. -|PayflowPro vendor|Currently the same as the Payflow Pro login.|Text|Obtain from Payflow Pro partner. -|Stripe publishable key|Publishable API Key from stripe.|Text| -|Stripe secret key|Secret API key from stripe.|Text| -|====================== - -((("Finances", "Library Settings Editor"))) - -[[lse-finances]] -.Finances -[options="header"] -|======== -|Setting|Description|Data type|Notes -|Allow credit card payments|If enabled, patrons will be able to pay fines accrued at this location via credit card.|True/False| -|Charge item price when marked damaged|If true Evergreen bills item price to the last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing.| True/false| -|Charge lost on zero|If set to True, default item price will be charged when an item is marked lost even though the price in item record is 0.00 (same as no price). If False, only processing fee, if used, will be charged.|True/false| -|Charge processing fee for damaged items|Optional processing fee billed to last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing.|Number(Dollar)| Disabled when set to 0 -|Default item price|Replacement charge for lost items if price is unset in the *Copy Editor*. Does not apply if item price is set to $0|Number(dollars)| -|Disable Patron Credit|Do not allow patrons to accrue credit or pay fines/fees with accrued credit|True/False| -|Leave transaction open when long overdue balance equals zero|Leave transaction open when long-overdue balance equals zero. 
This leaves the long-overdue copy on the patron record when it is paid|True/False| -|Leave transaction open when lost balance equals zero|Leave transaction open when lost balance equals zero. This leaves the lost item on the patron record when it is paid|True/False| -|Long-Overdue Materials Processing Fee|The amount charged in addition to item price when an item is marked Long-Overdue|Number|Currency -|Lost materials processing fee|The amount charged in addition to item price when an item is marked lost.| Number|Currency -|Maximum Item Price|When charging for lost items, limit the charge to this as a maximum.|Number|Currency -|Minimum Item Price|When charging for lost items, charge this amount as a minimum.|Number|Currency -|Negative Balance Interval (DEFAULT)|Amount of time after which no negative balances (refunds) are allowed on circulation bills. The "Prohibit negative balance on bills" setting must also be set to "true".|Duration| -|Negative Balance Interval for Lost|Amount of time after which no negative balances (refunds) are allowed on bills for lost/long overdue materials. The "Prohibit negative balance on bills for lost materials" setting must also be set to "true".|Duration| -|Negative Balance Interval for Overdues|Amount of time after which no negative balances (refunds) are allowed on bills for overdue materials. The "Prohibit negative balance on bills for overdue materials" setting must also be set to "true".|Duration| -|Prohibit negative balance on bills (Default)|Default setting to prevent negative balances (refunds) on circulation related bills. Set to "true" to prohibit negative balances at all times or, when used in conjunction with an interval setting, to prohibit negative balances after a set period of time.|True/False| -|Prohibit negative balance on bills for lost materials|Prevent negative balances (refunds) on bills for lost/long overdue materials. 
Set to "true" to prohibit negative balances at all times or, when used in conjunction with an interval setting, to prohibit negative balances after an interval of time.|True/False| -|Prohibit negative balance on bills for overdue materials|Prevent negative balances (refunds) on bills for lost/long overdue materials. Set to "true" to prohibit negative balances at all times or, when used in conjunction with an interval setting, to prohibit negative balances after an interval of time.|True/False| -|Void Overdue Fines When Items are Marked Long-Overdue|If true overdue fines are voided when an item is marked Long-Overdue|True/False| -|Void overdue fines when items are marked lost|If true overdue fines are voided when an item is marked lost|True/False| -|======== - -((("GUI", "Library Settings Editor"))) -((("Graphic User Interface", "Library Settings Editor"))) -((("Patron Registration Settings", "Library Settings Editor"))) - -[[lse-gui]] -.GUI: Graphic User Interface -[options="header",separator="!"] -!=========================== -!Setting!Description!Data type!Notes -!Alert on empty bib records!Alert staff when the last item for a record is being deleted.!True/False! -!Button bar!If TRUE, the staff client button bar appears by default on all workstations registered to your library; staff can override this setting at each login.!True/False! -!Cap results in Patron Search at this number.!The maximum number of results returned per search. If 100 is set up here, any search will return 100 records at most.!Number! -!Default Country for New Addresses in Patron Editor!This is the default Country for new addresses in the patron editor.!Text! -!Default hotkeyset!Default Hotkeyset for clients (filename without the .keyset). Examples: Default, Minimal, and None!Text!Individual workstations' default overrides this setting. -!Default ident type for patron registration!This is the default Ident Type for new users in the patron editor.!Selection list! 
-!Default showing suggested patron registration fields!Instead of All fields, show just suggested fields in patron registration by default.!True/False! -!Disable the ability to save list column configurations locally.!GUI: Disable the ability to save list column configurations locally. If set, columns may still be manipulated, however, the changes do not persist. Also, existing local configurations are ignored if this setting is true.!True/False! -!Enable Experimental Angular Staff Catalog!Adds an entry to the Web client's search menu so that staff can experiment with the new Angular Staff Catalog.!True/False! -!Example for Day_phone field on patron registration!The example on validation on the Day_phone field in patron registration.!Text! -!Example for Email field on patron registration!The example on validation on the Email field in patron registration.!Text! -!Example for Evening-phone on patron registration!The example on validation on the Evening-phone field in patron registration.!Text! -!Example for Other-phone on patron registration!The example on validation on the Other-phone field in patron registration.!Text! -!Example for phone fields on patron registration!The example on validation on phone fields in patron registration. Applies to all phone fields without their own setting.!Text! -!Example for Postal Code field on patron registration!The example on validation on the Postal Code field in patron registration.!Text! -!Format Dates with this pattern.!Format Dates with this pattern (examples: "yyyy-MM-dd" for "2010-04-26, "MMM d, yyyy" for "Apr 26, 2010"). Formats are effective in display (not editing) area.!Text! -!Format Times with this pattern.!Format Times with this pattern '(examples: "h:m:s.SSS a z" for "2:07:20.666 PM Eastern Daylight Time", "HH:mm" for "14:07")'. Formats are effective in display (not editing) area.!Text! 
-!GUI: Hide these fields within the Item Attribute Editor.!Sets which fields in the Item Attribute Editor to hide in the staff client.!Text!This is useful to hide attributes that are not used. -!Horizontal layout for Volume/Copy Creator/Editor.!The main entry point for this interface is in Holdings Maintenance, Actions for Selected Rows, Edit Item Attributes / Call Numbers / Replace Barcodes. This setting changes the top and bottom panes (if FALSE) for that interface into left and right panes (if TRUE).!True/False! -!Idle timeout!If you want staff client windows to be minimized after a certain amount of system idle time, set this to the number of seconds of idle time that you want to allow before minimizing (requires staff client restart).!Number! -!Items Out Claims Returned display setting!Value is a numeric code, describing which list the circulation should appear while checked out and whether the circulation should continue to appear in the bottom list, when checked in with outstanding fines. 1 = top list, bottom list. 2 = bottom list, bottom list. 5 = top list, do not display. 6 = bottom list, do not display.!Number! -!Items Out Long-Overdue display setting!Value is a numeric code, describing which list the circulation should appear while checked out and whether the circulation should continue to appear in the bottom list, when checked in with outstanding fines. 1 = top list, bottom list. 2 = bottom list, bottom list. 5 = top list, do not display. 6 = bottom list, do not display.!Number! -!Items Out Lost display setting!Value is a numeric code, describing which list the circulation should appear while checked out and whether the circulation should continue to appear in the bottom list, when checked in with outstanding fines. 1 = top list, bottom list. 2 = bottom list, bottom list. 5 = top list, do not display. 6 = bottom list, do not display.!Number! 
-!Max user activity entries to retrieve (staff client)!Sets the maximum number of recent user activity entries to retrieve for display in the staff client.!Number! -!Maximum previous checkouts displayed!The maximum number of previous circulations the staff client will display when investigating item details!Number! -!Patron circulation summary is horizontal!!True/False! -!Record in-house use: # of uses threshold for Are You Sure? dialog.!In the Record In-House Use interface, a submission attempt will warn if the # of uses field exceeds the value of this setting.!Number! -!Record In-House Use: Maximum # of uses allowed per entry.!The # of uses entry in the Record In-House Use interface may not exceed the value of this setting.!Number! -!Regex for barcodes on patron registration!The Regular Expression for validation on barcodes in patron registration.!Regular Expression! -!Regex for Day_phone field on patron registration!The Regular Expression for validation on the Day_phone field in patron registration. Note: The first capture group will be used for the "last 4 digits of phone number as patron password" feature, if enabled. Ex: "[2-9]\d{2}-\d{3}-(\d{4})( x\d+)?" will ignore the extension on a NANP number.!Regular expression! -!Regex for Email field on patron registration!The Regular Expression for validation on the Email field in patron registration.!Regular expression! -!Regex for Evening-phone on patron registration!The Regular Expression for validation on the Evening-phone field in patron registration.!Regular expression! -!Regex for Other-phone on patron registration!The Regular Expression for validation on the Other-phone field in patron registration.!Regular expression! -!Regex for phone fields on patron registration!The Regular Expression for validation on phone fields in patron registration.
Applies to all phone fields without their own setting.!Regular expression!`^(?:(?:\+?1\s*(?:[.-]\s*)?)?(?:\(\s*([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9])\s*\)|([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9]))\s*(?:[.-]\s*)?)?([2-9]1[02-9]|[2-9][02-9]1|[2-9][02-9]{2})\s*(?:[.-]\s*)?([0-9]{4})(?:\s*(?:#|x\.?|ext\.?|extension)\s*(\d+))?$` is a US phone number -!Regex for Postal Code field on patron registration!The Regular Expression for validation on the Postal Code field in patron registration.!Regular expression! -!Require at least one address for Patron Registration!Enforces a requirement for having at least one address for a patron during registration. If set to False, you need to delete the empty address before saving the record. If set to True, deletion is not allowed.!True/False! -!Require XXXXX field on patron registration!The XXXXX field will be required on the patron registration screen.!True/False!XXXXX can be Country, State, Day-phone, Evening-phone, Other-phone, DOB, Email, or Prefix. -!Require staff initials for entry/edit of patron standing penalties and messages.!Appends staff initials and edit date into patron standing penalties and messages.!True/False! -!Require staff initials for entry/edit of patron notes.!Appends staff initials and edit date into patron note content.!True/False! -!Require staff initials for entry/edit of copy notes.!Appends staff initials and edit date into copy note content.!True/False! -!Show billing tab first when bills are present!If true, accounts for patrons with bills will open to the billing tab instead of check out!True/False! -!Show XXXXX field on patron registration!The XXXXX field will be shown on the patron registration screen. Showing a field makes it appear with required fields even when not required. If the field is required, this setting is ignored.!True/False! -!Suggest XXXXX field on patron registration!The XXXXX field will be suggested on the patron registration screen.
Suggesting a field makes it appear when suggested fields are shown. If the field is shown or required, this setting is ignored.!True/False! -!Juvenile account requires parent/guardian!When this setting is set to true, a value will be required in the patron editor when the juvenile flag is active.!True/False! -!Toggle off the patron summary sidebar after first view.!When true, the patron summary sidebar will collapse after a new patron sub-interface is selected.!True/False! -!URL for remote directory containing list column settings.!The format and naming convention for the files found in this directory match those in the local settings directory for a given workstation. An administrator could create the desired settings locally and then copy all the tree_columns_for_* files to the remote directory.!Text! -!Uncheck bills by default in the patron billing interface!Uncheck bills by default in the patron billing interface, and focus on the Uncheck All button instead of the Payment Received field.!True/False! -!Unified Volume/Item Creator/Editor!If True, combines the Volume/Copy Creator and Item Attribute Editor in some instances.!True/False! -!Work Log: maximum actions logged!Maximum entries for the "Most Recent Staff Actions" section of the Work Log interface.!Number! -!Work Log: maximum patrons logged!Maximum entries for the "Most Recently Affected Patrons..." section of the Work Log interface.!Number! -!=========================== - -((("Global", "Library Settings Editor"))) - -[[lse-global]] -.Global -[options="header"] -|====== -|Setting|Description|Data type|Notes -|Allow multiple username changes|If enabled (and Lock Usernames is not set), patrons will be allowed to change their username when it does not look like a barcode. Otherwise, username changing in the OPAC will only be allowed when the patron's username looks like a barcode.|True/False|Default TRUE.
-|Global default locale||Number| -|Lock Usernames|If enabled, username changing via the OPAC will be disabled.|True/False|Default FALSE -|Password format|Defines the acceptable format for OPAC account passwords|Regular expression|Default requires that passwords "be at least 7 characters in length, contain at least one letter (a-z/A-Z), and contain at least one number." -|Patron barcode format|Defines the acceptable format for patron barcodes|Regular expression| -|Patron username format|Regular expression defining the patron username format, used for patron registration and self-service username changing only|Regular expression| -|====== - -((("Holds", "Library Settings Editor"))) - -[[lse-holds]] -.Holds -[options="header"] -|===== -|Setting|Description|Data type|Notes -|Behind desk pickup supported|If a branch supports both a public holds shelf and behind-the-desk pickups, set this value to true. This gives the patron the option to enable behind-the-desk pickups for their holds by selecting the Hold is behind Circ Desk flag in the patron record.|True/False| -|Best-hold selection sort order|Defines the sort order of holds when selecting a hold to fill using a given copy at capture time|Selection list| -|Block renewal of items needed for holds|When an item could fulfill a hold, do not allow the current patron to renew|True/False| -|Cancelled holds display age|Show all cancelled holds that were cancelled within this amount of time|Duration| -|Cancelled holds display count|How many cancelled holds to show in patron holds interfaces|Number| -|Clear shelf copy status|Any copies that have not been put into reshelving, in-transit, or on-holds-shelf (for a new hold) during the clear shelf process will be put into this status.
This is basically a purgatory status for copies waiting to be pulled from the shelf and processed by hand|Selection list| -|Default estimated wait|When predicting the amount of time a patron will be waiting for a hold to be fulfilled, this is the default estimated length of time to assume an item will be checked out.|Duration| -|Default hold shelf expire interval|Hold Shelf Expiry Time is calculated and inserted into the hold record based on this interval when capturing a hold.|Duration| -|Expire alert interval|Time before a hold expires at which to send an email notifying the patron|Duration| -|Expire interval|Amount of time until an unfulfilled hold expires|Duration| -|FIFO|Force holds to a more strict First-In, First-Out capture. Default is SAVE-GAS, which gives priority to holds with a pickup location the same as the checkin library.|True/False|Applies only to multi-branch libraries. -|Hard boundary||Number| -|Hard stalling interval||Duration| -|Has local copy alert|If there is an available item at the requesting library that could fulfill a hold during hold placement time, alert the patron.|True/False| -|Has local copy block|If there is an available item at the requesting library that could fulfill a hold during hold placement time, do not allow the hold to be placed.|True/False| -|Max foreign-circulation time|Time an item can spend circulating away from its circ lib before returning there to fill a hold|Duration|For multi-branch libraries. -|Maximum library target attempts|When this value is set and greater than 0, the system will only attempt to find an item at each possible branch the configured number of times|Number|For multi-branch libraries. -|Minimum estimated wait|When predicting the amount of time a patron will be waiting for a hold to be fulfilled, this is the minimum estimated length of time to assume an item will be checked out.|Duration| -|Org unit target weight|Org Units can be organized into hold target groups based on a weight.
Potential items from org units with the same weight are chosen at random.|Number| -|Reset request time on un-cancel|When a hold is uncancelled, reset the request time to push it to the end of the queue|True/False| -|Skip for hold targeting|When true, don't target any items at this org unit for holds|True/False| -|Soft boundary|Holds will not be filled by items outside this boundary if there are holdable items within it.|Number| -|Soft stalling interval|For this amount of time, holds will not be opportunistically captured at non-pickup branches.|Duration|For multiple-branch libraries. -|Use Active Date for age protection|When calculating age protection rules, use the Active Date instead of the Creation Date.|True/False|Default TRUE -|Use weight-based hold targeting|Use library weight-based hold targeting|True/False| -|===== - -((("Library", "Library Settings Editor"))) - -[[lse-library]] -.Library -[options="header"] -|======= -|Setting|Description|Data type|Notes -|Change reshelving status interval|Amount of time to wait before changing an item from “Reshelving” status to “available”|Duration|The default is at midnight each night for items with "Reshelving" status for over 24 hours. -|Claim never checked out: mark copy as missing|When a circ is marked as claims-never-checked-out, mark the item as missing|True/False| -|Claim return copy status|Claims returned copies are put into this status. Default is to leave the copy in the Checked Out status|Selection list| -|Courier code|Courier Code for the library. Available in transit slip templates as the %courier_code% macro.|Text| -|Juvenile age threshold|Upper cut-off age for patrons to be considered juvenile, calculated from date of birth in patron accounts|Duration (years)| -|Library information URL (such as "http://example.com/about.html")|URL for information on this library, such as contact information, hours of operation, and directions.
Use a complete URL, such as "http://example.com/hours.html".|Text| -|Mark item damaged voids overdues|When an item is marked damaged, overdue fines on the most recent circulation are voided.|True/False| -|Pre-cat item circ lib|Override the default circ lib of "here" with a pre-configured circ lib for pre-cat items. The value should be the "shortname" (aka policy name) of the org unit|Text| -|Telephony: Arbitrary line(s) to include in each notice callfile|This overrides lines from opensrf.xml. Line(s) must be valid for your target server and platform (e.g. Asterisk 1.4).|Text| -|======= - -((("OPAC", "Library Settings Editor"))) - -[[lse-opac]] -.OPAC -[options="header"] -|==== -|Setting|Description|Data type|Notes -|Allow Patron Self-Registration|Allow patrons to self-register, creating pending user accounts|True/False| -|Allow pending addresses|If true, patrons can edit their addresses in the OPAC. Changes must be approved by staff|True/False| -|Auto-Override Permitted Hold Blocks (Patrons)|This will allow patrons with the permission "HOLD_ITEM_CHECKED_OUT.override" to automatically override permitted holds.|True/False|When a patron places a hold in the OPAC that fails, and the patron has the permission to override the failed hold, this automatically overrides the failed hold rather than requiring the patron to manually override the hold. Default is False. -|Jump to details on 1 hit (OPAC)|When a search yields only 1 result, jump directly to the record details page. This setting only affects the public OPAC|True/False| -|Jump to details on 1 hit (staff client)|When a search yields only 1 result, jump directly to the record details page. This setting only affects the PAC within the staff client|True/False| -|OPAC: Number of staff client saved searches to display on left side of results and record details pages|If unset, the OPAC (only when wrapped in the staff client!)
will default to showing you your ten most recent searches on the left side of the results and record details pages. If you actually don't want to see this feature at all, set this value to zero at the top of your organizational tree.|Number| -|OPAC: Org Unit is not a hold pickup library|If set, this org unit will not be offered to the patron as an option for a hold pickup location. This setting has no effect on searching or hold targeting.|True/False| -|Org unit hiding depth|This will hide certain org units in the public OPAC if the Original Location (URL param "ol") for the OPAC inherits this setting. This setting specifies an org unit depth that, together with the OPAC Original Location, determines which section of the Org Hierarchy should be visible in the OPAC. For example, a stock Evergreen installation will have a 3-tier hierarchy (Consortium/System/Branch), where System has a depth of 1 and Branch has a depth of 2. If this setting contains a depth of 1 in such an installation, then every library in the System to which the Original Location belongs will be visible, and everything else will be hidden. A depth of 0 will effectively make every org visible. The embedded OPAC in the staff client ignores this setting.|Number| -|Paging shortcut links for OPAC Browse|The characters in this string, in order, will be used as shortcut links for quick paging in the OPAC browse interface. Any sequence surrounded by asterisks will be taken as a whole label, not split into individual labels at the character level, but only the first character will serve as the basis of the search.|Text| -|Patron Self-Reg. Display Timeout|Number of seconds to wait before reloading the patron self-registration interface to clear sensitive data|Duration| -|Patron Self-Reg. Expire Interval|If set, this is the amount of time a pending user account will be allowed to sit in the database.
After this time, the pending user information will be purged|Duration| -|Payment history age limit|The OPAC should not display payments by patrons that are older than any interval defined here.|Duration| -|Tag Circulated Items in Results|When a user is both logged in and has opted in to circulation history tracking, turning on this setting will cause previously (or currently) circulated items to be highlighted in search results.|True/False|Default TRUE -|Use fully compressed serial holdings|Show fully compressed serial holdings for all libraries at and below the current context unit|True/False| -|Warn patrons when adding to a temporary book list|Present a warning dialogue when a patron adds a book to the temporary book list.|True/False| -|==== - -((("Offline", "Library Settings Editor"))) -((("Program", "Library Settings Editor"))) - -[[lse-offline]] -.Offline and Program -[options="header"] -|=================== -|Setting|Description|Data type|Notes -|Skip offline checkin if newer item Status Changed Time.|Skip offline checkin transaction (raise exception when processing) if item Status Changed Time is newer than the recorded transaction time. WARNING: The Reshelving to Available status rollover will trigger this.|True/False| -|Skip offline checkout if newer item Status Changed Time.|Skip offline checkout transaction (raise exception when processing) if item Status Changed Time is newer than the recorded transaction time. WARNING: The Reshelving to Available status rollover will trigger this.|True/False| -|Skip offline renewal if newer item Status Changed Time.|Skip offline renewal transaction (raise exception when processing) if item Status Changed Time is newer than the recorded transaction time.
WARNING: The Reshelving to Available status rollover will trigger this.|True/False| -|Disable automatic print attempt type list|Disable automatic print attempts from staff client interfaces for the receipt types in this list. Possible values: "Checkout", "Bill Pay", "Hold Slip", "Transit Slip", and "Hold/Transit Slip". This is different from the Auto-Print checkbox in the pertinent interfaces in that it disables automatic print attempts altogether, rather than encouraging silent printing by suppressing the print dialogue. The Auto-Print checkbox in these interfaces has no effect on the behavior of this setting. In the case of the Hold, Transit, and Hold/Transit slips, this also suppresses the alert dialogues that precede the print dialogue (the ones that offer Print and Do Not Print as options).|Text| -|Retain empty bib records|Retain a bib record even when all attached copies are deleted|True/False| -|Sending email address for patron notices|This email address is for automatically generated patron notices (e.g. email overdues, email holds notification).
It is good practice to set up a generic account, like info@nameofyourlibrary.org, so that one person’s individual email inbox doesn’t get cluttered with emails that were not delivered.|Text| -|=================== - -((("Receipt Templates", "Library Settings Editor"))) -((("SMS Settings", "Library Settings Editor"))) -((("Text Messaging", "Library Settings Editor"))) - -[[lse-receipt]] -.Receipt Templates and SMS Text Message -[options="header"] -|====================================== -|Setting|Description|Data type|Notes -|Content of alert_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(alert_text)%|Text| -|Content of event_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(event_text)%|Text| -|Content of footer_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(footer_text)%|Text| -|Content of header_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(header_text)%|Text| -|Content of notice_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(notice_text)%|Text| -|Disable auth requirement for texting call numbers.|Disable authentication requirement for sending call number information via SMS from the OPAC.|True/False| -|Enable features that send SMS text messages.|Current features that use SMS include hold-ready-for-pickup notifications and a "Send Text" action for call numbers in the OPAC. If this setting is not enabled, the SMS options will not be offered to the user. 
Unless you are carefully silo-ing patrons and their use of the OPAC, the context org for this setting should be the top org in the org hierarchy; otherwise, patrons can trample their user settings when jumping between orgs.|True/False| -|====================================== - -((("Security", "Library Settings Editor"))) - -[[lse-security]] -.Security -[options="header"] -|======== -|Setting|Description|Data type|Notes -|Default level of patrons' internet access|Enter numbers 1 (Filtered), 2 (Unfiltered), or 3 (No Access)|Number| -|Maximum concurrently active self-serve password reset requests|Prevent the creation of new self-serve password reset requests until the number of active requests drops back below this number.|Number| -|Maximum concurrently active self-serve password reset requests per user|When a user has more than this number of concurrently active self-serve password reset requests for their account, prevent the user from creating any new self-serve password reset requests until the number of active requests for the user drops back below this number.|Number| -|OPAC Inactivity Timeout (in seconds)|Number of seconds of inactivity before OPAC accounts are automatically logged out.|Number| -|Obscure the Date of Birth field|When true, the Date of Birth column in patron lists will default to Not Visible, and in the Patron Summary sidebar the value will be obscured unless the field label is clicked.|True/False| -|Offline: Patron usernames allowed|During offline circulations, allow patrons to identify themselves with usernames in addition to barcodes.
For this setting to work, a barcode format must also be defined|True/False| -|Patron opt-in boundary|This determines the depth above which patrons must be opted in and below which patrons are assumed to be opted in.|Text| -|Patron opt-in default|This is the default depth at which a patron is opted in; it is calculated as an org unit relative to the current workstation.|Text| -|Patron: password from phone #|If true, the last 4 digits of the patron's phone number are used as the password for new accounts (the password must still be changed at first OPAC login)|True/False| -|Persistent login duration|How long a persistent login lasts, e.g. '2 weeks'|Duration| -|Self-serve password reset request time-to-live|Length of time (in seconds) a self-serve password reset request should remain active.|Duration| -|Staff login inactivity timeout (in seconds)|Number of seconds of inactivity before the staff client prompts for login and password.|Number| -|======== - -((("Self Check", "Library Settings Editor"))) - -[[lse-selfcheck]] -.Self Check and Others -[options="header"] -|===================== -|Setting|Description|Data type|Notes -|Audio Alerts|Use audio alerts for selfcheck events.|True/False| -|Block copy checkout status|List of copy status IDs that will block checkout even if the generic COPY_NOT_AVAILABLE event is overridden.|Number|Look up copy status IDs from Server Admin. -|Patron login timeout (in seconds)|Number of seconds of inactivity before the patron is logged out of the selfcheck interface.|Duration| -|Pop-up alert for errors|If true, checkout/renewal errors will cause a pop-up window in addition to the on-screen message.|True/False| -|Require Patron Password|If true, patrons will be required to enter their password in addition to their username/barcode to log into the selfcheck interface.|True/False|This replaced "Require patron password" -|Require patron password||True/False|This was replaced by "Require Patron Password" and is currently invalid.
-|Selfcheck override events list|List of checkout/renewal events that the selfcheck interface should automatically override instead of alerting and stopping the transaction.|Text| -|Workstation Required|All selfcheck stations must use a workstation.|True/False| -|Default display grouping for serials distributions presented in the OPAC.|Default display grouping for serials distributions presented in the OPAC. This can be "enum" or "chron".|Text| -|Previous issuance copy location|When a serial issuance is received, copies (units) of the previous issuance will be automatically moved into the configured shelving location.|Selection List| -|Maximum redirect lookups|For URLs returning 3XX redirects, this is the maximum number of redirects we will follow before giving up.|Number| -|Maximum wait time (in seconds) for a URL lookup|If we exceed the wait time, the URL is marked as a "timeout" and the system moves on to the next URL.|Duration| -|Number of URLs to test in parallel|URLs are tested in batches. This number defines the size of each batch, and it directly relates to the number of back-end processes performing URL verification.|Number| -|Number of seconds to wait between URL test attempts|Throttling mechanism for batch URL verification runs.
Each running process will wait this number of seconds after a URL test before performing the next.|Duration| -|===================== - -((("Vandelay", "Library Settings Editor"))) - -[[lse-vandelay]] -.Vandelay -[options="header"] -|======== -|Setting|Description|Data type|Notes -|Default Record Match Set|Sets the Default Record Match set |Selection List|Populated by the Vandelay Record Match Sets -|Vandelay Default Barcode Prefix|Apply this prefix to any auto-generated item barcode|Text| -|Vandelay Default Call Number Prefix|Apply this prefix to any auto-generated item call numbers.|Text| -|Vandelay Default Circulation Modifier|Default circulation modifier value for imported items|Selection List|Populated by your Circulation Modifiers. -|Vandelay Default Copy Location|Default copy location value for imported items|Selection List|Populated from Shelving Locations -|Vandelay Generate Default Barcodes|Auto-generate default item barcodes when no item barcode is present|True/False| -|Vandelay Generate Default Call Numbers|Auto-generate default item call numbers when no item call number is present|True/False|These are pulled from the MARC Record. -|======== - -[#data_types] -=== Data Types === - -((("Data Types", "Library Settings Editor"))) - -Acceptable formats for each setting type are listed below. Quotation -marks are never required when updating settings in the staff client. - -.Data Types in the Library Settings Editor -[options="header"] -|============= -|Data type|Formatting -|True/False|Boolean True/False drop down -|Number|Enter a numerical value (decimals allowed in price settings) -|Duration|Enter a number followed by a space and any of the following units: minutes, hours, days, months (30 minutes, 2 days, etc) -|Selection list|Choose from a drop-down list of options (e.g. 
copy status, copy location) -|Text|Free text -|============= diff --git a/docs-antora/modules/admin/pages/lsa-address_alert.adoc b/docs-antora/modules/admin/pages/lsa-address_alert.adoc deleted file mode 100644 index c6e8d9e84c..0000000000 --- a/docs-antora/modules/admin/pages/lsa-address_alert.adoc +++ /dev/null @@ -1,129 +0,0 @@ -= Address Alert = -:toc: - -indexterm:[address alerts] - -The Address Alert module gives administrators the ability to notify staff with a custom message when -addresses with certain patterns are entered in patron records. - -This feature only serves to provide pertinent information to your library system's circulation staff during the registration process. An alert will not prevent the new patron account from being registered and the information will not be permanently associated with the patron account. - -To access the Address Alert module, select *Administration* -> *Local Administration* -> *Address Alerts*. - -[NOTE] -========== -You must have Local Administrator permissions or ADMIN_ADDRESS_ALERT permission to access the Address Alert module. -========== - -== General Usage Examples == - -- Alert staff when an address for a large apartment is entered to prompt them to ask for unit number. -- Alert staff when the address of a hotel or other temporary housing is entered. -- Alert staff when an address for a different country is entered. -- Alert staff when a specific city or zip code is entered if that city or zip code needs to be handled in a special way. If you have a neighboring city that you don't have a reciprocal relationship with, you could notify staff that a fee card is required for this customer. - -== Access Control and Scoping == - -Each address alert is tied to an Org Unit and will only be matched against staff client instances of that Org Unit and its children. - -When viewing the address alerts you will only see the alerts associated with the specific org unit selected in the *"Context Org Unit"* selection box. 
You won't see alerts associated with parent org units, so the list of alerts isn't a list of all alerts that may affect your org unit, only of the ones that you can edit. - -The specific permission that controls access to configuring this feature is ADMIN_ADDRESS_ALERT. Local Administrator level users will already have this permission. It is possible for the Local Administrator to grant this permission to other staff. - -== Adding a new Address Alert == - -How to add an address to the alert list: - -. Log into the Evergreen Staff Client using a Local Administrator account or another account that has been granted the proper permission. -. Click on Administration -> Local Administration -> Address Alerts. -. Click "New Address Alert." -. A form will open with the following fields to fill out: -+ -.New Address Alert Fields -|=== -|*Field* |*Description* -| Owner |Which Org Unit owns this alert. Set this to your system or branch. -| Active |Check-box that controls if the alert is active or not. Inactive alerts are not processed. -| Match All Fields |Check-box that controls whether all the fields need to match to trigger the alert (checked), or only at least one field needs to match (unchecked). -| Alert Message |Message that will be displayed to staff when this alert is triggered. -| Street (1) |Street 1 field regular expression. -| Street (2) |Street 2 field regular expression. -| City |City regular expression. -| County |County regular expression. -| State |State regular expression. -| Country |Country regular expression. -| Postal Code |Postal Code regular expression. -| Address Alert ID |Displays the internal database ID for the alert after the alert has been saved. -| Billing Address |Check-box that specifies that the alert will only match a billing address if checked. -| Mailing Address |Check-box that specifies that the alert will only match a mailing address if checked. -|=== -+ -. Click Save once you have finished.
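The field-matching behavior described above can be sketched in a few lines of standalone code. This is only an illustration under stated assumptions: Evergreen evaluates these patterns as case-insensitive POSIX regular expressions inside PostgreSQL, while this sketch uses Python's `re` module, and the function and field names are invented for the example.

```python
import re

def alert_triggers(alert_patterns, address, match_all=True):
    """Illustrative sketch: does an entered address trip an address alert?

    alert_patterns: dict mapping field name -> regex (blank patterns skipped)
    address: dict mapping field name -> value entered by staff
    match_all: mirrors the "Match All Fields" check-box described above
    """
    results = [
        re.search(pattern, address.get(field, ""), re.IGNORECASE) is not None
        for field, pattern in alert_patterns.items()
        if pattern  # fields left blank on the alert are not matched
    ]
    if not results:
        return False
    return all(results) if match_all else any(results)

# An alert like the "large apartment building" example: both fields must match.
apartment_alert = {"street1": r"1212 Evergreen Lane.*", "city": r"mytown"}
entered = {"street1": "1212 Evergreen Lane Apt 3", "city": "MyTown"}
print(alert_triggers(apartment_alert, entered))  # True: both patterns match
```

With Match All Fields unchecked, a single matching field would be enough to display the alert message.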
 - -== Editing an Address Alert == - -To make changes to an existing alert, double click on the alert in the list. The editing form will appear; make your changes, then click Save or Cancel when you are done. - -If you don't see your alerts, make sure the *"Context Org Unit"* selection box has the correct Org Unit selected. - -== Deleting an Address Alert == - -To delete an alert or many alerts, click the selection check-box for all alerts you would like to delete. Then click the "Delete Selected" button at the top of the screen. - -== Staff View of Address Alerts == - -When an Address Alert is triggered by a matching address, staff will see the address block highlighted with a red dashed line, along with an *"Address Alert"* block which contains the alert message. - -Here is an example of what staff would see. - -image::media/lsa-address_alert_staff_view.png[Address Alert Staff View] - -== Regular Expressions / Wildcards == - -All of the patterns entered to match the various address fields are evaluated as case-insensitive regular expressions by default. - -[NOTE] -========== -Address Alerts use POSIX Regular Expressions included in the PostgreSQL database engine. See the PostgreSQL documentation for full details. -========== - -If you want to do a case-sensitive match, you need to prepend the pattern with "(?c)". - -The simplest regular expression that acts as a wildcard is ".*", which matches any character zero or more times. - -== Examples == - -.Apartment address -Match an apartment address to prompt for the unit number. - -. Choose *Owner* Org Unit. -. Active = Checked -. Match All Fields = Checked -. Alert Message = "This is a large apartment building; please ask the customer for the unit number." -. Street (1) = "1212 Evergreen Lane.*" -. City = "mytown" - -.All addresses on street -Match all addresses on a certain street. Matches ave and avenue because of ending wildcard. - -. Choose *Owner* Org Unit. -. Active = Checked -. Match All Fields = Checked -. 
Alert Message = "This street is in a different county, please set up a reciprocal card." -. Street (1) = ".* Evergreen Ave.*" -. City = "mytown" - -.Match list of cities -Match several different cities with one alert. Could be used if certain cities don't have reciprocal agreements. Note the use of parentheses and the | character to separate the different options. - -. Choose *Owner* Org Unit. -. Active = Checked -. Match All Fields = Checked -. Alert Message = "Customer must purchase a Fee card." -. City = "(Emeryville|San Jose|San Francisco)" - -== Development == - -Links to resources with more information on how and why this feature was developed and where the various source files are located. - -- Launchpad ticket for the feature request and development of address alerts - https://bugs.launchpad.net/evergreen/+bug/898248 diff --git a/docs-antora/modules/admin/pages/lsa-barcode_completion.adoc b/docs-antora/modules/admin/pages/lsa-barcode_completion.adoc deleted file mode 100644 index 2f0e32c635..0000000000 --- a/docs-antora/modules/admin/pages/lsa-barcode_completion.adoc +++ /dev/null @@ -1,248 +0,0 @@ -= Barcode Completion = -:toc: - -indexterm:[Barcode Completion,Lazy Circ] - -The Barcode Completion feature gives users the ability to only enter the -unique part of patron and item barcodes. This can significantly reduce the -amount of typing required for manual barcode input. - -This feature can also be used if there is a difference between what the -barcode scanner outputs and what is stored in the database, as long as the -barcode that is stored has more characters than what the scanner is -outputting. Barcode Completion is additive only; you cannot use it to match a -stored barcode that has fewer characters than what is entered. For example, if -your barcode scanners previously output *a123123b* and now exclude the prefix -and suffix, you could match both formats using Barcode Completion rules.
- -Because this feature adds an extra database search for each enabled rule to -the process of looking up a barcode, it can add extra delays to the check-out -process. Please test in your environment before using in production. - -== Scoping and Permissions == - -*Local Administrator* permission is needed to access the admin interface of the -Barcode Completion feature. - -Each rule requires an owner org unit, which is how scoping of the rules is -handled. Rules are applied for staff users with the same org unit or -descendants of that org unit. - - -== Access Points == - -The admin interface for Barcode Completion is located under *Administration* --> *Local Administration* -> *Barcode Completion*. - -image::media/lsa-barcode_completion_admin.png[Barcode Completion Admin List] - -The barcode completion functionality is available at the following interfaces. - -=== Check Out Step 1: Lookup Patron by Barcode === - -image::media/Barcode_Checkout_Patron_Barcode.png[Patron Barcode Lookup for Checking Out] - -=== Check Out Step 2: Scanning Item Barcodes === - -image::media/Barcode_Checkout_Item_Barcode.png[Item Barcode at Check Out] - -=== Staff Client Place Hold from Catalog === - -image::media/Barcode_OPAC_Staff_Place_Hold.png[Patron Barcode Lookup for Staff Placing Hold] - -=== Check In === - -image::media/Barcode_Check_In.png[Item Barcode at Check In] - -=== Item Status === - -image::media/Barcode_Item_Status.png[Item Barcode at Item Status screen] - - -NOTE: Barcode completion is also available during check out if library -setting "Load patron from Checkout" is set. -(Automatically detects if an actor/user barcode is scanned during -check out, and starts a new check out session using that user.) - -NOTE: Barcode Completion does not work in the - *Search for Patron [by Name]* interface. 
 - - -== Multiple Matches == - -If multiple barcodes are matched, say if you have both "123" and "00000123" -as valid barcodes, you will receive a list of all the barcodes that match all -the rules that you have configured. It doesn't stop after the first rule -that matches, or after the first valid barcode is found. - -image::media/lsa-barcode_completion_multiple.png[Barcode Completion Multiple Matches] - -== Barcode Completion Data Fields == - -The following data fields can be set for each Barcode Completion rule. - -.Barcode Completion Fields -|======= -|*Active* | Check to indicate entry is active. *Required* -|*Owner* | Setting applies to this Org Unit and to all children. *Required* -|*Prefix* | Sequence that appears at the beginning of barcode. -|*Suffix* | Sequence that appears at the end of barcode. -|*Length* | Total length of barcode. -|*Padding* | Character that pads out non-unique characters in the barcode. -|*Padding At End* | Check if the padding starts at the end of the barcode. -|*Applies to Items*| Check if entry applies to item barcodes. -|*Applies to Users*| Check if entry applies to user barcodes. -|======= - - -.Length and Padding - -Length and Padding are related; you cannot use one without the other. If a barcode -has to be a certain length, then it needs to be able to be padded out to that length. -If a barcode has padding, then we need to know the max length that we need to pad out -to. If length is set to blank or zero, or padding is left blank, then they are both -ignored. - - -.Applies to Items/Users -One or both of these options must be checked for the rule to have any effect.
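The way Prefix, Suffix, Length, and Padding combine can be sketched as a small function. This is only an illustration of the behavior described above, under the assumption that padding fills the gap between the affixes and the entered digits; it is not Evergreen's actual implementation, which performs these expansions server-side:

```python
def expand(entered, prefix="", suffix="", length=0, padding="", pad_at_end=False):
    """Sketch of one Barcode Completion rule: add the prefix/suffix, then pad
    the entered portion out to the configured total length (if any)."""
    core = entered
    if length and padding:
        # Room left for the entered portion once the prefix/suffix are added
        target = length - len(prefix) - len(suffix)
        pad = padding * max(target - len(core), 0)
        core = core + pad if pad_at_end else pad + core
    return prefix + core + suffix

print(expand("123", prefix="4545", length=10, padding="0"))               # 4545000123
print(expand("123", suffix="book", length=10, padding="0", pad_at_end=True))  # 123000book
```

The worked examples in the Examples section (prefix, suffix, left/right padding, prefix-and-suffix with no length) all follow this one rule shape.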
 - -image::media/lsa-barcode_completion_fields.png[Barcode Completion Data Fields] - -== Create, Update, Filter, Delete/Disable Rules == - -image::media/lsa-barcode_completion_admin.png[Barcode Completion Admin] - -In the Barcode Completion admin interface at *Administration* -> *Local Administration* --> *Barcode Completion* you can create, update and disable rules. - -=== Create Rules === -To create a new rule, click on the *New* button in the upper right corner. -When you are done editing the new rule, click the *Save* button. If -you want to cancel the new rule creation, click the *Cancel* button. - -=== Update Rules === -To edit a rule, double click on the rule in the main list. - -=== Filter Rules === -It may be useful to filter the rules list if there are a large number of -rules. Click on the *filter* link to bring up the *Filter Results* dialog -box. You can filter on any of the data fields and you can set up multiple -filter rules. Click *Apply* to enable the filter rules; only the rows that match -will be displayed. - -To clear out the filter rules, delete all of the filter rules by clicking the -*X* next to each rule, and then click *Apply*. - -=== Delete/Disable Rules === -It isn't possible to delete a rule from the database via the admin interface. -If a rule is no longer needed, set *Active* to "False" to disable it. To keep -the number of rules down, reuse inactive rules when creating new rules. - -== Examples == - -In all these examples, the unique part of the barcode is *123*. So that is -all that users will need to type to match the full barcode. - -=== Barcode With Prefix and Padding === - -Barcode: *4545000123* - -To match this 10 character barcode by only typing in *123* we need the -following settings. - - * *Active* - Checked - * *Owner* - Set to your org unit. - * *Prefix* - 4545 - This is the prefix that the barcode starts with. - * *Length* - 10 - Total length of the barcode.
 - * *Padding* - 0 - Zeros will be used to pad out non-significant parts of the barcode. - * *Applies to Items* and/or *Applies to Users* - Checked - -The system takes the *123* that you entered and adds the prefix to the beginning -of it. Then it adds zeros between the prefix and your number to pad it out to -10 characters. Then it searches the database for that barcode. - -=== Barcode With Suffix === - -Barcode: *123000book* - -To match this 10 character barcode by only typing in *123* we need the -following settings. - - * *Active* - Checked - * *Owner* - Set to your org unit. - * *Suffix* - book - This is the suffix that the barcode ends with. - * *Length* - 10 - Total length of the barcode. - * *Padding* - 0 - Zeros will be used to pad out non-significant parts of the barcode. - * *Padding at End* - Checked - * *Applies to Items* and/or *Applies to Users* - Checked - -The system takes the *123* that you entered and adds the suffix to the end of it. -Then it adds zeros between your number and the suffix to pad it out to 10 -characters. Then it searches the database for that barcode. - -=== Barcode With Left Padding === - -Barcode: *0000000123* - -To match this 10 character barcode by only typing in *123* we need the -following settings. - - * *Active* - Checked - * *Owner* - Set to your org unit. - * *Length* - 10 - Total length of the barcode. - * *Padding* - 0 - Zeros will be used to pad out non-significant parts of the barcode. - * *Applies to Items* and/or *Applies to Users* - Checked - -The system takes the *123* that you entered, then adds zeros to the left of your -number to pad it out to 10 characters. Then it searches the -database for that barcode. - -=== Barcode With Right Padding === - -Barcode: *1230000000* - -To match this 10 character barcode by only typing in *123* we need the -following settings. - - * *Active* - Checked - * *Owner* - Set to your org unit. - * *Length* - 10 - Total length of the barcode.
 - * *Padding* - 0 - Zeros will be used to pad out non-significant parts of the barcode. - * *Padding at End* - Checked - * *Applies to Items* and/or *Applies to Users* - Checked - -The system takes the *123* that you entered, then adds zeros to the right of your -number to pad it out to 10 characters. Then it searches the -database for that barcode. - -=== Barcode of any Length with Prefix and Suffix === - -Barcode: *a123b* - -To match this 5 character barcode by only typing in *123* we need the -following settings. This use of Barcode Completion doesn't save many -keystrokes, but it does allow you to handle the case where your barcode -scanners at one point were set to output a prefix and suffix which was stored -in the database. Now your barcode scanners no longer include the prefix and suffix. -These settings will simply add the prefix and suffix to any barcode entered and -search for that. - - * *Active* - Checked - * *Owner* - Set to your org unit. - * *Length/Padding* - 0/null - Set the length to 0 and/or leave the padding blank. - * *Prefix* - a - This is the prefix that the barcode starts with. - * *Suffix* - b - This is the suffix that the barcode ends with. - * *Applies to Items* and/or *Applies to Users* - Checked - -The system takes the *123* that you entered, then adds the prefix and suffix -specified. Then it searches the database for that barcode. Because no length -or padding was entered, this rule will add the prefix and suffix to any -barcode that is entered and then search for that valid barcode. - - -== Testing == - -To test this feature, set up the rules that you want, then set up items/users -with barcodes that should match. Then try scanning the short version of -those barcodes in the various supported access points.
 diff --git a/docs-antora/modules/admin/pages/lsa-standing_penalties.adoc b/docs-antora/modules/admin/pages/lsa-standing_penalties.adoc deleted file mode 100644 index 59eb0b8acd..0000000000 --- a/docs-antora/modules/admin/pages/lsa-standing_penalties.adoc +++ /dev/null @@ -1,25 +0,0 @@ -= Standing Penalties = -:toc: - -In versions of Evergreen prior to 2.3, the following penalty types were -available by default. When applied to user accounts, these penalties prevented -users from completing the following actions: - -* *CIRC* - Users cannot check out items -* *HOLD* - Users cannot place holds on items -* *RENEW* - Users cannot renew items - -In version 2.3, two new penalty types are available in Evergreen: - -* *CAPTURE* - This penalty prevents a user's holds from being captured. If the -_HOLD_ penalty has not been applied to a user's account, then the patron can place a -hold, but the targeted item will not appear on a pull list and will not be -captured for a hold if it is checked in. -* *FULFILL* - This penalty prevents a user from checking out an item that is on -hold. If the _HOLD_ and _CAPTURE_ penalties have not been applied to a user's -account, then the user can place a hold on an item, and the item can be captured -for a hold. However, when the patron tries to check out the item, the circulator will -see a pop-up box with the name of the penalty type, _FULFILL_. The circulator -must correct the problem with the account or must override the penalty to check -out the item. - diff --git a/docs-antora/modules/admin/pages/lsa-statcat.adoc b/docs-antora/modules/admin/pages/lsa-statcat.adoc deleted file mode 100644 index eb7f3a8632..0000000000 --- a/docs-antora/modules/admin/pages/lsa-statcat.adoc +++ /dev/null @@ -1,88 +0,0 @@ -= Statistical Categories Editor = -:toc: - -This is where you configure your statistical categories (stat cats). Stat cats are a way to save and report on additional information that doesn't fit elsewhere in Evergreen's default records.
It is possible to have stat cats for copies or patrons. - -1. Click *Administration -> Local Administration -> Statistical Categories Editor.* - -2. To create a new stat cat, enter the name of the category and select either _patron_ or _copy_ from the *Type* dropdown menu. Each category type has a number of options you may set. - -*Copy Statistical Categories* - -Copy stat cats appear in the _Holdings Editor_. You might use copy stat cats to track books you have bought from a specific vendor, or donations. - -An example of the _Create a new statistical category_ controls for copies: - -image::media/lsa-statcat-1.png[Create copy stat cat] - -* _OPAC Visibility_: Should the category be displayed in the OPAC? -* _Required_: Must the category be assigned a value when editing the item attributes? -* _Archive with Circs_: Should the category and its values for the copy be archived with aged circulation data? -* _SIP Field_: Select the SIP field identifier that will contain the category and its value -* _SIP Format_: Specify the SIP format string - -Some sample copy stat cats: - -image::media/lsa-statcat-2.png[Sample copy stat cats] - -To add an entry, select _Add_. Due to a known bug, individual entries for stat cats cannot be edited in the web client. - -Stat cats can be edited or deleted by clicking on _Edit_. - -This is how the copy stat cats appear in the _Holdings Editor_: - -image::media/lsa-statcat-3.png[Stat cats in Holdings Editor] - -You can use the _Filter by Library_ selector to display copy stat cats owned by a particular library: - -image::media/lsa-statcat-3a.png[Stat cat library selector] - -*Patron Statistical Categories* - -Patron stat cats can be used to keep track of information such as a patron's school affiliation, membership in a group like the Friends of the Library, or patron preferences. They appear in the fourth section of the _Patron Registration_ or _Edit Patron_ screen, under the label _Statistical Categories_. 
 - -An example of the _Create a new statistical category_ controls for patrons: - -image::media/lsa-statcat-4.png[Create patron stat cat] - -* _OPAC Visibility_: Should the category be displayed in the OPAC? -* _Required_: Must the category be assigned a value when registering a new patron or editing an existing one? -* _Archive with Circs_: Should the category and its values for the patron be archived with aged circulation data? -* _Allow Free Text_: May the person registering/editing the patron information supply their own value for the category? -* _Show in Summary_: Display the category and its value in the patron summary view? -* _SIP Field_: Select the SIP field identifier that will contain the category and its value -* _SIP Format_: Specify the SIP format string - -[WARNING] -.WARNING -===================================== -If you make a category *required* and also *disallow free text*, make sure that you populate an entry list for the category so that the user may select a value. Failure to do so will result in an unsubmittable patron registration/edit form! -===================================== - -Some sample patron stat cats: - -image::media/lsa-statcat-5.png[Sample patron stat cats] - -To add an entry, click on _Add_ in the category row under the _Add Entry_ column: - -image::media/lsa-statcat-6.png[Add patron category entry] - -Stat cats can be edited or deleted by clicking on _Edit_. - -Due to a known bug, individual entries for stat cats cannot be edited in the web client. - -An *organizational unit* (consortium, library system, branch library, sub-library, etc.) may create their own categories and entries, or supplement categories defined by a higher-level org unit with their own entries. - -An entry can be set as the *default* entry for a category and for an org unit. If an entry is set as the default, it will be automatically selected in the patron edit screen, provided no other value has been previously set for the patron.
Only one default may be set per category for any given org unit. - -Lower-level org unit defaults override defaults set for higher-level org units; but in the absence of a default set for a given org unit, the nearest parent org unit default will be selected. - -Default entries for the focus location org unit are marked with an asterisk in the entry dropdowns. - -This is how patron stat cats appear in the patron registration/edit screen: - -image::media/lsa-statcat-8.png[Patron stat cats in registration screen] - -The yellow highlight denotes a stat cat that is required, and you will not be allowed to save or create a patron unless a value is entered. - -To remove a stat cat value, select the text in the right-hand box and use your keyboard's backspace or delete key. diff --git a/docs-antora/modules/admin/pages/lsa-work_log.adoc b/docs-antora/modules/admin/pages/lsa-work_log.adoc deleted file mode 100644 index 42e179d97f..0000000000 --- a/docs-antora/modules/admin/pages/lsa-work_log.adoc +++ /dev/null @@ -1,20 +0,0 @@ -= Work Log = -:toc: - -indexterm:[Work Log] -indexterm:[staff client, Work Log] -indexterm:[workstation, Work Log] - - -== Expanding the Work Log == - -The Work Log records checkins, checkouts, patron registration, patron editing, renewals, payments, and holds placed from within the patron record for a given login. - -To access the Work Log, go to *Administration* -> *Local Administration* -> *Work Log*. - -There are two separate logs, *Most Recently Logged Staff Actions* and *Most Recently Affected Patrons*. The *Most Recently Logged Staff Actions* log records the transactions in the order they occurred on the workstation. The *Most Recently Affected Patrons* log is a listing of the patrons most recently affected by transactions. - -The Work Log can contain a maximum number of transactions; this number is set via the xref:admin:librarysettings.adoc[Library Settings Editor]. They are in the GUI group of settings.
*Work Log: Maximum Actions Logged* affects the number of transactions listed under the *Most Recently Logged Staff Actions* and *Work Log: Maximum Patrons Logged* limits the number of patrons that are listed in the log. - -image::worklog.png[Work Log] - diff --git a/docs-antora/modules/admin/pages/marc_templates.adoc b/docs-antora/modules/admin/pages/marc_templates.adoc deleted file mode 100644 index cac1fb9210..0000000000 --- a/docs-antora/modules/admin/pages/marc_templates.adoc +++ /dev/null @@ -1,63 +0,0 @@ -= MARC Templates = -:toc: - -MARC Templates make the cataloging process more efficient for catalogers. At this time, MARC Templates have to be -created on the server, rather than in the Web client. - -== Adding MARC Templates == - -. Create a MARC template in the directory _/openils/var/templates/marc/_. It should be in XML format. Here is an - example file `k_book.xml`: -+ -[source,xml] ---------------------------------------------------------------------- -<record xmlns="http://www.loc.gov/MARC21/slim"> - <leader>00620cam a2200205Ka 4500</leader> - <controlfield tag="008">070101s eng d</controlfield> - <!-- datafield entries omitted --> -</record> ---------------------------------------------------------------------- -+ -. Add the template to the marctemplates list in the _open-ils.cat_ section of the Evergreen configuration - file `opensrf.xml`. -. Restart Perl services for changes to take effect with the command - `/openils/bin/osrf_control -l --restart --service=open-ils.cat` diff --git a/docs-antora/modules/admin/pages/multilingual_search.adoc b/docs-antora/modules/admin/pages/multilingual_search.adoc deleted file mode 100644 index 6dea7d67d9..0000000000 --- a/docs-antora/modules/admin/pages/multilingual_search.adoc +++ /dev/null @@ -1,67 +0,0 @@ -= Multilingual Search in Evergreen = -:toc: - -It is now possible to search for items that contain multiple languages in the Evergreen catalog.
This will help facilitate searching for bilingual and multilingual materials, including searching for specific translations and alternative languages, and excluding specific translations from a search. - -To identify the language of materials, Evergreen looks at two different fields in the MARC bibliographic record: - -* 008/35-37: the language code located in characters 35-37 of the 008 tag -* 041$abdefgm: the 041 tag, subfields $abdefgm, which contain additional language codes - -Multilingual searches can be conducted by constructing searches using specific language codes as a filter. To search using specific language codes, use the Record Attribute Definition name _item_lang_ followed by the appropriate MARC Code for Languages. For example, _item_lang(spa)_ will search only for Spanish language materials. - -The language filter can be appended to any search. For example, a title search for _pippi longstocking item_lang(eng,swe)_ will search for English or Swedish language publications of the title. - -image::media/multilingual_search1.png[] - -== Search Syntax == - -To search for materials that contain multiple languages (Boolean AND), the search filters can be constructed in the following ways: - -. Implicit Boolean filtering: _item_lang(eng) item_lang(spa)_ -.. Evergreen assumes a Boolean AND between the search filters -. Explicit Boolean filtering: _item_lang(eng) && item_lang(spa)_ -.. The double ampersands (&&) explicitly tell Evergreen to apply a Boolean AND to the search filters - -To search for materials that contain at least one of the searched languages (Boolean OR), the search filters can be constructed in the following ways: - -. List filtering: _item_lang(eng,spa)_ -.. Listing the language codes, separated by a comma, within the search filter, tells Evergreen to apply a Boolean OR to the search filters -. Explicit Boolean filtering: _item_lang(eng) || item_lang(spa)_ -.. 
The double pipes (||) explicitly tell Evergreen to apply a Boolean OR to the search filters - -To search for materials that contain a specific language and exclude another language from the search results (Boolean NOT), the search filters can be constructed as follows: - -. Boolean filtering: _item_lang(spa) -item_lang(eng)_ -.. The dash (-) explicitly tells Evergreen to apply a Boolean NOT to the English language search filter. Evergreen assumes a Boolean AND between the search filters. - -To exclude multiple languages from search results (Boolean NOT), the search filters can be constructed as follows: - -. Boolean filtering: _-item_lang(eng) -item_lang(spa)_ -.. The dash (-) explicitly tells Evergreen to apply a Boolean NOT to both search filters. Evergreen assumes a Boolean AND between the search filters. - -To conduct a search for materials that do not contain at least one of the languages searched (Boolean “NOT” and “OR”), the search filters can be constructed in the following ways: - -. List filtering: _-item_lang(eng,spa)_ -. Explicit Boolean filtering: _-item_lang(eng) || -item_lang(spa)_ - - -== Advanced Search == - -Within the Advanced Search interface, multiple languages can be selected from the Language filter by holding down the Ctrl key on the keyboard and selecting the desired languages. This will apply a Boolean OR operator to the language filters. - -image::media/multilingual_search2.PNG[] - - -== Adding Subfields to the Index == - -Additional subfields for the 041 tag, such as h, j, k, and n, can be added to the index through the Record Attribute Definitions interface. Any records containing the additional subfields will need to be reingested into the database after making changes to the Record Attribute Definition. - -. Go to *Administration -> Server Administration -> Record Attribute Definitions*. -. Click *Next* to locate the _item_lang_ record attribute definition. -. 
To edit the definition, double click on the item_lang row and the configuration window will appear. -. In the _MARC Subfields_ field, add the subfields you want included in the index. -. Click *Save*. - -image::media/multilingual_search3.PNG[] - diff --git a/docs-antora/modules/admin/pages/patron_address_by_zip_code.adoc b/docs-antora/modules/admin/pages/patron_address_by_zip_code.adoc deleted file mode 100644 index da53c8e79a..0000000000 --- a/docs-antora/modules/admin/pages/patron_address_by_zip_code.adoc +++ /dev/null @@ -1,158 +0,0 @@ -= Patron Address City/State/County Pre-Populate by ZIP Code = -:toc: - -indexterm:[zips.txt, Populate Address by ZIP Code, ZIP code] - -This feature saves staff time and increases accuracy when entering patron address information by -automatically filling in the City, State and County information based on the -ZIP code entered by the staff member. - -*Released:* Evergreen 0.1, available in all versions. - -Please be aware of the following when using this feature. - -* ZIP codes do not always match 1 to 1 with City, State and County. ZIP codes were designed for postal delivery and represent postal delivery zones that may cover more than one city, state or county. -** It is currently only possible to have one match per ZIP code, but you can add an alert to those entries to prompt staff to double check the entered data. -* Only the first 5 digits of the ZIP are used. ZIP+4 is not currently supported. -* The zips.txt data is loaded once at service startup and stored in memory, so changes to the zips.txt data file require that Evergreen be restarted. Specifically, you need to restart the "open-ils.search" OpenSRF service. - - -== Scoping and Permissions == - -There are no staff client permissions associated with this feature since there is no staff client interface. - -This feature affects all users of the system; there is no way to have separate settings per Org Unit. 
 - -== Setup Steps == - -=== Step 1 - Setup Data File === - -The default location and name of the data file is /openils/var/data/zips.txt on your Evergreen server. You can choose a different location if needed. - -The file format of your zips.txt will look like this (delimited by the `|` character): - -ID|*StateAbb*|*City*|*ZIP*|*IsDefault*|StateID|*County*|AreaCode|*AlertMesg* - -The only fields that are used are *StateAbb*, *City*, *ZIP*, *IsDefault*, *County* and *AlertMesg*. - -Most fields can be left blank if the information is not available and that data will not be entered. - -.Data Field Descriptions -. ID - ID field to uniquely identify this row. Not required, can be left blank. -. *StateAbb* - State abbreviation like "MN" or "ND". -. *City* - Name of city. -. *ZIP* - ZIP code, only first 5 digits used. -. *IsDefault* - Must be set to 1 for the row to be used. Easy way to disable/enable a row. -. StateID - Unknown and unused. -. *County* - County name. -. AreaCode - Phone number area code, unused. -. *AlertMesg* - Message to display to staff to alert them of any special circumstances. - -TIP: The Address Alerts feature -- described in the Staff Client Sysadmin manual -- can also be used to alert staff about certain addresses. - -Here is an example of what the data file should look like. - -.Example zips.txt ----- -|MN|Moorhead|56561|1||Clay|| -|MN|Moorhead|56562|1||Clay|| -|MN|Moorhead|56563|1||Clay|| -|MN|Sabin|56580|1||Clay|| -|MN|Ulen|56585|1||Clay|| -|MN|Lake Itasca|56460|1||Clearwater County|| -|MN|Bagley|56621|1||Clearwater|| -|MN|Clearbrook|56634|1||Clearwater|| -|MN|Gonvick|56644|1||Clearwater|| ----- - -=== Step 2 - Enable Feature === - -The next step is to tell the system to use the zips.txt file that you created. This is done by editing /openils/conf/opensrf.xml. Look about halfway into the file and you may very well see a commented section in the file that looks similar to this: - ----- -<!-- -<zips_file>/openils/var/data/zips.txt</zips_file> ---> ----- - -Uncomment the area by removing the `<!--` and `-->` markers. 
Change the file path if you placed your file in a different location. The file should look like this after you are done. - ----- -<zips_file>/openils/var/data/zips.txt</zips_file> ----- - -.Save and Restart -Save your changes to the opensrf.xml file, restart Evergreen and restart Apache. - -NOTE: The specific opensrf services you need to restart are "opensrf.setting" and "open-ils.search". - -=== Step 3 - Test === - -Open up the staff client and try to register a new patron. When you get to the address section, enter a ZIP code that you know is in your zips.txt file. The data from the file that matches your ZIP will auto-fill the city, state and county fields. - -== ZIP Code Data == - -There are several methods you can use to populate your zips.txt with data. - -=== Manual Entry === - -If you only have a few communities that you serve, entering data manually may be the simplest approach. - -=== Geonames.org Data === - -Geonames.org provides free ZIP code to city, state and county information licensed under the Creative Commons Attribution 3.0 License, which means you need to put a link to them on your website. Their data includes primary city, state and county information only. It doesn't include info about which other cities are included in a ZIP code. Visit http://www.geonames.org for more info. - -The following code example shows you how to download and reformat the data into the zips.txt format. You have the option to filter the data to only include certain states also. - -[source,bash] ----- -## How to get a generic Evergreen zips.txt for free -wget http://download.geonames.org/export/zip/US.zip -unzip US.zip -cut -f2,3,5,6 US.txt \ -| perl -ne 'chomp; @f=split(/\t/); print "|" . 
join("|", (@f[2,1,0], "1", "", $f[3], "")), "|\n";' \
-> zips.txt
-
-## Optionally filter the data to only include certain states
-egrep "^\|(ND|MN|WI|SD)\|" zips.txt > zips-mn.txt
----
-
-=== Commercial Data ===
-
-There are many vendors that sell databases that include ZIP code to city, state and county information. A web search will easily find them. Many of the commercial vendors include more information on which ZIP codes cover multiple cities, counties and states, which you could use to populate the alert field.
-
-=== Existing Patron Database ===
-
-Another possibility is to use your current patron database to build your zips.txt. Pull out the unique ZIP, city, state and county rows and use them to form your zips.txt.
-
-.Small Sites
-
-For sites that serve a small geographic area (fewer than 30 ZIP codes), an SQL query like the following will create a zips.txt for you. It outputs the number of matches as the first field and sorts by ZIP code and number of matches. You would need to go through the resulting file and deal with duplicates manually.
-
-[source,bash]
----
-psql egdb26 -A -t -F $'|' \
- -c "SELECT count(substring(post_code from 1 for 5)) as zipcount, state, \
- city, substring(post_code from 1 for 5) as pc, \
- '1', '', county, '', '' FROM actor.usr_address \
- group by pc, city, state, county \
- order by pc, zipcount DESC" > zips.txt
----
-
-.Larger Sites
-For larger sites, Ben Ostrowsky at ESI created a pair of scripts that handle deduplicating the results and adding in county information. Instructions for use are included in the files.
-
-* http://git.esilibrary.com/?p=migration-tools.git;a=blob;f=elect_ZIPs
-* http://git.esilibrary.com/?p=migration-tools.git;a=blob;f=enrich_ZIPs
-
-
-== Development ==
-
-If you need to make changes to how this feature works, such as to add support for other postal code formats, here is a list of the files that you need to look at.
-
-. 
*Zips.pm* - contains code for loading the zips.txt file into memory and replying to search queries. Open-ILS / src / perlmods / lib / OpenILS / Application / Search / Zips.pm -. *register.js* - This is where patron registration logic is located. The code that queries the ZIP search service and fills the address is located here. Open-ILS / web / js / ui / default / actor / user / register.js diff --git a/docs-antora/modules/admin/pages/patron_registration.adoc b/docs-antora/modules/admin/pages/patron_registration.adoc deleted file mode 100644 index 974271e33d..0000000000 --- a/docs-antora/modules/admin/pages/patron_registration.adoc +++ /dev/null @@ -1,63 +0,0 @@ -== Patron registration administration == - -indexterm:[new patron form] -indexterm:[edit patron form] -indexterm:[patron registration form] -indexterm:[forms,new patron] -indexterm:[forms,edit patron] -indexterm:[forms,patron registration] - -=== Email addresses === - -indexterm:[patrons,email addresses] -indexterm:[email] - - -It's possible to set up the patron registration form to -either allow or disallow users to enter multiple email -addresses for a single patron, separated by a comma. - -To do this, go to Administration -> Local Administration --> Library Settings Editor. Search for the setting called -`ui.patron.edit.au.email.regex`. - -If you'd like to allow multiple email addresses, set this -value to `^(?:(?:\b[^@,\s]+@[^@,\s]+\.[^@.,\s]+\b)(?:,\s?(?!$)|$))*$` - -If you'd like to disallow multiple email addresses, set -this value to `^(?:\b[^@,\s]+@[^@,\s]+\.[^@.,\s]+\b)$` - -=== Parent/guardian field === - -indexterm:[patrons,parent/guardian field] -indexterm:[parent] -indexterm:[guardian] -indexterm:[juvenile] - - -In addition to the standard "show" and "suggest" visibility settings, -the guardian field has a library setting called -'ui.patron.edit.guardian_required_for_juv' ("GUI: Juvenile account -requires parent/guardian"). 
When this setting is set to true, a value will be required in the patron editor when the juvenile flag is active.
-
-=== Privacy waiver ===
-
-indexterm:[Allow others to use my account]
-indexterm:[checking out,materials on another patron's account]
-indexterm:[holds,picking up another patron's]
-indexterm:[privacy waiver]
-
-Patrons who wish to authorize other people to use their account can do so via the OPAC. In the Search and History Preferences tab under Account Preferences, a section labeled "Allow others to use my account" allows patrons to enter a name and indicate that the specified person is allowed to place holds, pick up holds, view borrowing history, or check out items on their account. This information is displayed to circulation staff in the patron account summary in the web client. (Staff may also add, edit, and remove entries via the patron editor.)
-
-You can use the library setting called "Allow others to use patron account (privacy waiver)" to enable or disable this feature.
-
diff --git a/docs-antora/modules/admin/pages/patron_self_registration.adoc b/docs-antora/modules/admin/pages/patron_self_registration.adoc
deleted file mode 100644
index 96dc1e3ac5..0000000000
--- a/docs-antora/modules/admin/pages/patron_self_registration.adoc
+++ /dev/null
@@ -1,51 +0,0 @@
-= Patron self-registration administration =
-:toc:
-
-== Library Settings ==
-
-Three Library Settings are specific to patron self-registration:
-
- * OPAC: Allow Patron Self-Registration must be set to `True` to enable use of this feature.
-
- * OPAC: Patron Self-Reg. Expire Interval allows each library to set the amount of time after which pending patron accounts should be deleted.
-
- * OPAC: Patron Self-Reg. Display Timeout allows each library to set the amount of time after which the patron self-registration screen will time out in the OPAC. The default is 5 minutes.
- -Several more Library Settings can be used to determine if a field should be required or hidden in the self-registration form: - - * GUI: Require day_phone field on patron registration - - * GUI: Show day_phone on patron registration - - * GUI: Require dob (date of birth) field on patron registration - - * GUI: Show dob field on patron registration - - * GUI: Require email field on patron registration - - * GUI: Show email field on patron registration - - * GUI: Require State field on patron registration - - * GUI: Show State field on patron registration - - * GUI: Require county field on patron registration - - * GUI: Show county field on patron registration [New Setting] - -Several more Library Settings can be used to verify values in certain fields and provide examples for data format on the registration form: - - * Global: Patron username format - - * GUI: Regex for phone fields on patron registration OR GUI: Regex for day_phone field on patron registration - - * GUI: Regex for email field on patron registration - - * GUI: Regex for post_code field on patron registration - - * GUI: Example for email field on patron registration - - * GUI: Example for post_code field on patron registration - - * GUI: Example for day_phone field on patron registration OR GUI: Example for phone fields on patron registration - diff --git a/docs-antora/modules/admin/pages/permissions.adoc b/docs-antora/modules/admin/pages/permissions.adoc deleted file mode 100644 index aff5dc8bdb..0000000000 --- a/docs-antora/modules/admin/pages/permissions.adoc +++ /dev/null @@ -1,87 +0,0 @@ -= User and Group Permissions = -:toc: - -It is essential to understand how user and group permissions can be used to allow -staff to fulfill their roles while ensuring that they only have access to the -appropriate level. - -Permissions in Evergreen are applied to a specific location and system depth -based on the home library of the user. 
The user will only have that permission -within the scope provided by the Depth field in relation to his/her working -locations. - -Evergreen provides group application permissions in order to restrict which -staff members have the ability to assign elevated permissions to a user, and -which staff members have the ability to edit users in particular groups. - -== Staff Accounts == - -New staff accounts are created in much the same way as patron accounts, using -_Circulation -> Register Patron_ or *Shift+F1*. Select one of the staff -profiles from the _Profile Group_ drop-down menu. - -image::media/permissions_1a.png[Permission Group dropdown in patron account] - -Each new staff account must be assigned a _Working Location_ which determines -its access level in staff client interfaces. - -. To assign a working location, open the newly created staff account using *F1* -(retrieve patron) or *F4* (patron search). -. Select _Other -> User Permission Editor_ -+ -image::media/permissions_1.png[Click User Permission Editor in the Patron's Other menu] -+ -. Place a check in the box next to the desired working location, then scroll to -the bottom of the display and click _Save_. -+ -NOTE: In multi-branch libraries it is possible to assign more than one working -location - -=== Staff Account Permissions === - -To view a detailed list of permissions for a particular Evergreen account go to -_Administration -> User Permission Editor_ in the staff client. - -=== Granting Additional Permissions === - -A _Local System Administrator (LSA)_ may selectively grant _LSA_ permissions to -other staff accounts. In the example below a _Circ +Full Cat_ account is granted -permission to process offline transactions, a function which otherwise requires -an _LSA_ login. - -. Log in as a Local System Administrator. -. 
Select _Administration -> User Permission Editor_ and enter the staff account barcode when prompted
-+
-OR
-+
-Retrieve the staff account first, then select _Other -> User Permission Editor_
-+
-. The User Permission Editor will load (this may take a few seconds). Greyed-out permissions cannot be edited because they are either a) already granted to the account, or b) not available to any staff account, including LSAs.
-+
-image::media/profile-5.png[profile-5]
-+
-1) List of permission names.
-+
-2) If checked, the permission is granted to this account.
-+
-3) Depth limits application to the staff member's library and should be left at the default.
-+
-4) If checked, this staff account will be able to grant the new privilege to other accounts (not recommended).
-+
-. To allow processing of offline transactions, check the Applied column next to _OFFLINE_EXECUTE_.
-+
-image::media/profile-6.png[profile-6]
-+
-. Scroll down and click Save to apply the changes.
-+
-image::media/profile-7.png[profile-7]
-
-
-
diff --git a/docs-antora/modules/admin/pages/phonelist.adoc b/docs-antora/modules/admin/pages/phonelist.adoc
deleted file mode 100644
index 3969d41176..0000000000
--- a/docs-antora/modules/admin/pages/phonelist.adoc
+++ /dev/null
@@ -1,186 +0,0 @@
-= Phonelist.pm Module =
-:toc:
-
-== Introduction ==
-
-PhoneList.pm is a mod_perl module for Apache that works with Evergreen to generate calling lists for patron holds or overdues. It outputs a CSV file that can be fed into an auto-dialer script to call patrons with little or no staff intervention. It is accessed and configured via a special URL, with any parameters passed as a query string on the URL. The parameters are listed in the table below.
-
-.Parameters for the phonelist program:
-|=====================================
-| user | Your Evergreen login. Typically your library's circ account. If you leave this off, you will be prompted to log in.
-| passwd | The password for your Evergreen login. 
If you leave this off, you will be prompted to log in.
-| ws_ou | The ID of the system or branch you want to generate the list for (optional). If your account does not have the appropriate permissions for the location whose ID number you have entered, you will get an error.
-| skipemail | If present, skip patrons with email notification (optional).
-| addcount | Add a count of items on hold (optional). Only makes sense for holds.
-| overdue | Makes a list of patrons with overdues instead of holds. If an additional, numeric parameter is supplied, it will be used as the number of days overdue. If no such extra parameter is supplied, then the default of 14 days is used.
-|=====================================
-
-The URL is
-
-`https://your.evergreen-server.tld/phonelist`
-
-A couple of examples follow:
-
-`https://your.evergreen-server.tld/phonelist?user=circuser&passwd=password&skipemail`
-
-The above example would sign in as user circuser with password of `password` and get a list of patrons with holds to call who do not have email notification turned on. It would run at whatever branch is normally associated with circuser.
-
-`https://your.evergreen-server.tld/phonelist?skipemail`
-
-The above example would do more or less the same, but you would be prompted by your browser for the user name and password.
-
-If your browser or download script supports it, you may also use conventional HTTP authentication parameters.
-
-`https://user:password@your.evergreen-server.tld/phonelist?overdue&ws_ou=2`
-
-The above logs in as `user` with `password` and runs overdues for location ID 2.
-
-The following sections provide more information on getting what you want in your output.
-
-== Adding Parameters ==
-
-If you are not familiar with HTTP/URL query strings, the format is quite simple.
-
-You add parameters to the end of the URL; the first parameter is separated from the URL path by a question mark (`?`) character. 
If the parameter is to be given an extra value, then that value follows the parameter name after an equals sign (`=`). Subsequent parameters are separated from the previous parameter by an ampersand (`&`).
-
-Here is an example with 1 parameter that has no value:
-
-`https://your.evergreen-server.tld/phonelist?skipemail`
-
-An example of 1 argument with a value:
-
-`https://your.evergreen-server.tld/phonelist?overdue=21`
-
-An example of 2 arguments, 1 with a value and 1 without:
-
-`https://your.evergreen-server.tld/phonelist?overdue=21&skipemail`
-
-Any misspelled parameters, or parameters not listed in the table above, will be ignored by the program.
-
-== Output ==
-
-On a successful run, the program will return a CSV file named phone.csv. Depending on your browser and settings, you will be prompted either to open or to save the file. Your browser may also automatically save the file in your Downloads or other designated folder. You should be able to open this CSV file in Excel, LibreOffice Calc, any other spreadsheet program, or a text editor.
-
-If you have mistyped your user name or password, or if you supply a ws_ou parameter with an ID where your user name does not have permission to look up holds or overdue information, then you will get an error returned in your browser.
-
-Should your browser appear to do absolutely nothing at all, this is normal. When there is no information for you to download, the server will return a 200 NO CONTENT message to your browser. Most browsers respond to this message by doing nothing at all. It is possible for there to be no information for you to retrieve if you added the `skipemail` option and all of your notices for that day were sent via email, or if you ran this in the morning and then again in the afternoon and there was no new information to gather.
-
-The program keeps track of the holds and overdues that it has already reported and will skip them on later runs. 
This prevents duplicate calls to the same patron in the same run. It will, however, create a `duplicate` for the same patron if a different item is put on hold for that patron in between two runs.
-
-The specific content of the CSV file will vary depending on whether you are looking at holds or overdues. The specific contents are described in the appropriate sections below.
-
-== Holds ==
-
-The `phonelist` program will return a list of patrons with items on hold by default, so long as you do not use the `overdue` parameter. You may optionally get a count of the items that each patron currently has on hold by adding the `addcount` parameter.
-
-As always, you can add the skipemail parameter to skip patrons with email notifications of their holds; see xref:#skipping_patrons_with_email_notification_of_holds[Skipping patrons with email notification of holds] below.
-
-
-.Columns in the holds CSV file:
-|=====================================
-| Name | Patron's name, first and last.
-| Phone | Patron's phone number.
-| Barcode | Patron's barcode.
-| Count | Number of items on hold, if the `addcount` parameter is used; otherwise this column is not present in the file.
-|=====================================
-
-== Overdues ==
-
-If you add the `overdue` parameter, you can get a list of patrons with overdue items instead of a list of patrons with items on the hold shelf. By default, this will give you a list of patrons with items that are 14 days overdue. If you'd like to specify a different number of days, you can add the number after the parameter with an equals sign:
-
-`https://your.evergreen-server.tld/phonelist?overdue=21&ws_ou=2`
-
-The above will retrieve a list of patrons who have items that are 21 days overdue at the location with ID of 2.
-
-The number of days is an exact lookup. This means that the program will look only at patrons who have items exactly 14 days, or exactly the number of days specified, overdue. 
It does not pull up any that are less than or greater than the number of days specified.
-
-As always, you can add the skipemail parameter to skip patrons with email notifications of their overdues; see xref:#skipping_patrons_with_email_notification_of_holds[Skipping patrons with email notification of holds] below.
-
-.Columns in the overdues CSV file:
-|=================================
-| Name | Patron's name, first and last.
-| Phone | Patron's phone number.
-| Barcode | Patron's barcode.
-| Titles | A colon-separated list of titles that the patron has overdue.
-|=================================
-
-[#skipping_patrons_with_email_notification_of_holds]
-== Skipping patrons with email notification of holds ==
-
-Skipping patrons who have email notification for their holds or overdues is very simple. You just need to add the `skipemail` parameter to the URL query string. Doing so will produce the list without the patrons who have email notification for overdues, or for all of their holds. Please note that if a patron has multiple holds available, and even one of these holds requests a phone-only notification, then that patron will still show on the list. For this option to exclude a patron from the holds list, the patron must request email notification on all of their current holds. In practice, we find that this is usually the case.
-
-== Using the ws_ou parameter ==
-
-Generally, you will not need to use the ws_ou parameter when using the phonelist program. The phonelist will look up the branch where your login account works and use that location when generating the list. However, if you are part of a multi-branch system in a consortium, then the ws_ou parameter will be of interest to you. You can use it to specify which branch, or the whole system, you wish to search when running the program. 
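Since the CSV is meant to be handed off to a dialer script, the colon-separated Titles column usually needs splitting first. Here is a minimal sketch using invented sample data (a real phone.csv comes from the phonelist URL, and its exact quoting may differ from this simplified form):

```shell
# Invented sample of an overdues CSV (normally downloaded from /phonelist).
cat > phone.csv <<'EOF'
John Smith,555-0100,21000001,Title One:Title Two
Jane Doe,555-0199,21000002,Title Three
EOF

# Split the colon-separated Titles column (field 4) and report a count
# of overdue titles per patron.
awk -F',' '{n = split($4, t, ":"); print $1 " (" $2 "): " n " overdue title(s)"}' phone.csv
# → John Smith (555-0100): 2 overdue title(s)
# → Jane Doe (555-0199): 1 overdue title(s)
```

Your dialer integration will dictate the exact format it needs; this only illustrates the shape of the data.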
- -== Automating the download == - -If you'd like to automate the download of these files, you should be -able to do so using any HTTP programming toolkit. Your client must -accept cookies and follow any redirects in order to function. diff --git a/docs-antora/modules/admin/pages/physical_char_wizard_db.adoc b/docs-antora/modules/admin/pages/physical_char_wizard_db.adoc deleted file mode 100644 index c84ddea9e0..0000000000 --- a/docs-antora/modules/admin/pages/physical_char_wizard_db.adoc +++ /dev/null @@ -1,21 +0,0 @@ -= Administering the Physical Characteristics Wizard = -:toc: - -indexterm:[Physical characteristics wizard] -indexterm:[MARC editor,configuring] - -The MARC 007 Field Physical Characteristics Wizard enables catalogers to interact with a -database wizard that leads the user step-by-step through the MARC 007 field positions. -The wizard displays the significance of the current position and provides dropdown lists -of possible values for the various components of the MARC 007 field in a more -user-friendly way. - -The information driving the MARC 007 Field Physical Characteristics Wizard is already a -part of the Evergreen database. This data can be customized by individual sites and / or -updated when the Library of Congress dictates new values or positions in the 007 field. -There are three relevant tables where the information that drives the wizard is stored: - -. *config.marc21_physical_characteristic_type_map* contains the list of materials, or values, for the positions of the 007 field. -. *config.marc21_physical_characteristic_subfield_map* contains rows that list the meaning of the various positions in the 007 field for each Category of Material. -. *config.marc21_physical_characteristic_value_map* lists all of the values possible for all of the positions in the config.marc21_physical_characteristic_subfield_map table. 
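For a quick look at the data driving the wizard, the tables can be queried directly. This is a sketch only: the column names used below (`ptype_key`, `subfield`, `start_pos`, `length`, `label`) are assumptions based on the stock Evergreen schema, so verify them against your version (for example with `\d` in psql) before customizing anything.

```sql
-- List the 007 positions defined for one category of material
-- ('v' is the videorecording category in stock data).
-- Column names are assumed from the stock schema; verify locally.
SELECT subfield, start_pos, length, label
  FROM config.marc21_physical_characteristic_subfield_map
 WHERE ptype_key = 'v'
 ORDER BY start_pos;
```

A similar query against config.marc21_physical_characteristic_value_map, joined through the subfield map, would show the dropdown values offered for each position.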
- diff --git a/docs-antora/modules/admin/pages/popularity_badges_web_client.adoc b/docs-antora/modules/admin/pages/popularity_badges_web_client.adoc deleted file mode 100644 index 4d0174eb27..0000000000 --- a/docs-antora/modules/admin/pages/popularity_badges_web_client.adoc +++ /dev/null @@ -1,120 +0,0 @@ -= Statistical Popularity Badges = -:toc: - -Statistical Popularity Badges allow libraries to set popularity parameters that define popularity badges, which bibliographic records can earn if they meet the set criteria. Popularity badges can be based on factors such as circulation and hold activity, bibliographic record age, or material type. The popularity badges that a record earns are used to adjust catalog search results to display more popular titles (as defined by the badges) first. Within the OPAC there are two new sort options called "Most Popular" and "Popularity Adjusted Relevance" which will allow users to sort records based on the popularity assigned by the popularity badges. - -== Popularity Rating and Calculation == - -Popularity badge parameters define the criteria a bibliographic record must meet to earn the badge, as well as which bibliographic records are eligible to earn the badge. For example, the popularity parameter "Circulations Over Time" can be configured to create a badge that is applied to bibliographic records for DVDs. The badge can be configured to look at circulations within the last 2 years, but assign more weight or popularity to circulations from the last 6 months. - -Multiple popularity badges may be applied to a bibliographic record. For each applicable popularity badge, the record will be rated on a scale of 1-5, where a 5 indicates the most popular. Evergreen will then assign an overall popularity rating to each bibliographic record by averaging all of the popularity badge points earned by the record. 
The popularity rating is stored with the record and will be used to rank the record within search results when the popularity badge is within the scope of the search. The popularity badges are recalculated on a regular and configurable basis by a cron job. Popularity badges can also be recalculated by an administrator directly on the server. - -== Creating Popularity Badges == - -There are two main types of popularity badges: point-in-time popularity (PIT), which looks at the popularity of a record at a specific point in time—such as the number of current circulations or the number of open hold requests; and temporal popularity (TP), which looks at the popularity of a record over a period of time—such as the number of circulations in the past year or the number of hold requests placed in the last six months. - -The following popularity badge parameters are available for configuration: - -* Holds Filled Over Time (TP) -* Holds Requested Over Time (TP) -* Current Hold Count (PIT) -* Circulations Over Time (TP) -* Current Circulation Count (PIT) -* Out/Total Ratio (PIT) -* Holds/Total Ratio (PIT) -* Holds/Holdable Ratio (PIT) -* Percent of Time Circulating (Takes into account all circulations, not specific period of time) -* Bibliographic Record Age (days, newer is better) (TP) -* Publication Age (days, newer is better) (TP) -* On-line Bib has attributes (PIT) -* Bib has attributes and copies (PIT) -* Bib has attributes and copies or URIs (PIT) -* Bib has attributes (PIT) - -To create a new Statistical Popularity Badge: - -. Go to *Administration->Local Administration->Statistical Popularity Badges*. -. Click on *Actions->Add badge*. -. Fill out the following fields as needed to create the badge: -+ -NOTE: only Name, Scope, Weight, Recalculation Interval, Importance Interval, and Discard Value Count are required - - * *Name:* Library assigned name for badge. Each name must be unique. The name will show up in the OPAC record display. 
For example: Most Requested Holds for Books-Last 6 Months. Required field. - - * *Description*: Further information to provide context to staff about the badge. - - * *Scope:* Defines the owning organization unit of the badge. Badges will be applied to search result sorting when the Scope is equal to, or an ancestor, of the search location. For example, a branch specific search will include badges where the Scope is the branch, the system, and the consortium. A consortium level search, will include only badges where the Scope is set to the consortium. Item specific badges will apply only to records that have items owned at or below the Scope. Required field. - - * *Weight:* Can be used to indicate that a particular badge is more important than the other badges that the record might earn. The weight value serves as a multiplier of the badge rating. Required field with a default value of 1. - - * *Age Horizon:* Indicates the time frame during which events should be included for calculating the badge. For example, a popularity badge for Most Circulated Items in the Past Two Years would have an Age Horizon of '2 years'. The Age Horizon should be entered as a number followed by 'day(s)', 'month(s)', 'year(s)', such as '6 months' or '2 years'. Use with temporal popularity (TP) badges only. - - * *Importance Horizon:* Used in conjunction with Age Horizon, this allows more recent events to be considered more important than older events. A value of zero means that all events included by the Age Horizon will be considered of equal importance. With an Age Horizon of 2 years, an Importance Horizon of '6 months' means that events, such as checkouts, that occurred within the past 6 months will be considered more important than the circulations that occurred earlier within the Age Horizon. - - * *Importance Interval:* Can be used to further divide up the timeframe defined by the Importance Horizon. 
For example, if the Importance Interval is '1 month', Evergreen will combine all of the events within that month for adjustment by the Importance Scale (see below). The Importance Interval should be entered as a number followed by 'day(s)', 'week(s)', 'month(s)', 'year(s)', such as '6 months' or '2 years'. Required field.
-
- * *Importance Scale:* The Importance Scale can be used to assign additional importance to events that occurred within the most recent Importance Interval. For example, if the Importance Horizon is '6 months' and the Importance Interval is '1 month', the Importance Scale can be set to '6' to indicate that events that happened within the last month will count 6 times, events that happened 2 months ago will count 5 times, etc. The Importance Scale should be entered as a number.
-
- * *Percentile:* Can be used to assign a badge to only the records that score above a certain percentile. For example, it can be used to indicate that you only want to assign the badge to records in the top 5% of results by setting the field to '95'. To optimize the popularity badges, the percentile should be set between 95 and 99 to assign a badge to the top 5%-1% of records.
-
- * *Attribute Filter:* Can be used to assign a badge to records that contain a specific Record Attribute. Currently this field can be configured by running a report (see note below) to obtain the JSON data that identifies the Record Attribute. The JSON data from the report output can be copied and pasted into this field. A new interface for creating Composite Record Attributes will be implemented with future development of the web client.
- ** To run a report to obtain JSON data for the Attribute Filter, use SVF Record Attribute Coded Value Map as the template Source. For Displayed Fields, add Code, ID, and/or Description from the Source; also display the Definition field from the Composite Definition linked table. 
This field will display the JSON data in the report output. Filter on the Definition from the Composite Definition linked table and set the Operator to 'Is not NULL'.
-
- * *Circ Mod Filter:* Apply the badge only to items with a specific circulation modifier. Applies only to item-related badges, as opposed to "bib record age" badges, for example.
-
- * *Bib Source Filter:* Apply the badge only to bibliographic records with a specific source.
-
- * *Location Group Filter:* Apply the badge only to items that are part of the specified Shelving Location Group. Applies only to item-related badges.
-
- * *Recalculation Interval:* Indicates how often the popularity value of the badge should be recalculated for bibliographic records that have earned the badge. Recalculation is controlled by a cron job. Required field with a default value of 1 month.
-
- * *Fixed Rating:* Can be used to set a fixed popularity value for all records that earn the badge. For example, the Fixed Rating can be set to 5 to indicate that records earning the badge should always be considered extremely popular.
-
- * *Discard Value Count:* Can be used to prevent certain records from earning the badge, to make Percentile more accurate, by discarding titles that are below the value indicated. For example, if the badge looks at the circulation count over the past 6 months, Discard Value Count can be used to eliminate records that had too few circulations to be considered "popular". If you want to discard records that only had 1-3 circulations over the past 6 months, the Discard Value Count can be set to '3'. Required field with a default value of 0.
-
- * *Last Refresh Time:* Displays the last time the badge was recalculated based on the Recalculation Interval.
-
- * *Popularity Parameter:* Types of TP and PIT factors, described above, that can be used to create badges to assign popularity to bibliographic records.
-
-. Click *OK* to save the badge. 
- - -== New Global Flags == - -OPAC Default Sort: can be used to set a default sort option for the catalog. Users can always override the default by manually selecting a different sort option while searching. - -Maximum Popularity Importance Multiplier: used with the Popularity Adjusted Relevance sort option in the OPAC. Provides a scaled adjustment to relevance score based on the popularity rating earned by bibliographic records. See below for more information on how this flag is used. - -== Sorting by Popularity in the OPAC == - -Within the stock OPAC template there is a new option for sorting search results called "Most Popular". Selecting "Most Popular" will first sort the search results based on the popularity rating determined by the popularity badges and will then apply the default "Sort by Relevance". This option will maximize the popularity badges and ensure that the most popular titles appear higher up in the search results. - -There is a second new sort option called "Popularity Adjusted Relevance", which can be used to find a balance between popularity and relevance in search results. For example, it can help ensure that records that are popular, but not necessarily relevant to the search, do not supersede records that are both popular and relevant in the search results. It does this by sorting search results using an adjusted version of Relevance sorting. When sorting by relevance, each bibliographic record is assigned a baseline relevance score between 0 and 1, with 0 being not relevant to the search query and 1 being a perfect match. With "Popularity Adjusted Relevance" the baseline relevance is adjusted by a scaled version of the popularity rating assigned to the bibliographic record. The scaled adjustment is controlled by a Global Flag called "Maximum Popularity Importance Multiplier" (MPIM). 
The MPIM takes the average popularity rating of a bibliographic record (1-5) and creates a scaled adjustment that is applied to the baseline relevance for the record. The adjustment can be between 1.0 and the value set for the MPIM. For example, if the MPIM is set to 1.2, a record with an average popularity badge score of 5 (maximum popularity) would have its relevance multiplied by 1.2, in effect giving it the maximum increase of 20% in relevance. If a record has an average popularity badge score of 2.5, the baseline relevance of the record would be multiplied by 1.1 (because the popularity score scales the adjustment to halfway between 1.0 and the MPIM of 1.2) and the record would receive a 10% increase in relevance. A record with a popularity badge score of 0 would be multiplied by 1.0 (because the popularity score is 0) and would not receive a boost in relevance.
-
-== Popularity Badge Example ==
-
-A popularity badge called "Long Term Holds Requested" has been created which has the following parameters:
-
-Popularity Parameter: Holds Requested Over Time
-Scope: CONS
-Weight: 1 (default)
-Age Horizon: 5 years
-Percentile: 99
-Recalculation Interval: 1 month (default)
-Discard Value Count: 0 (default)
-
-This popularity badge will rate bibliographic records based on the number of holds that have been placed on them over the past 5 years and will only apply the badge to the top 1% of records (99th percentile).
-
-If a keyword search for harry potter is conducted and the sort option "Most Popular" is selected, Evergreen will apply the popularity rankings earned from badges to the search results.
-
-image::media/popbadge1_web_client.PNG[popularity badge search]
-
-Title search: harry potter. Sort by: Most Popular.
-
-image::media/popbadge2_web_client.PNG[popularity badge search results]
-
-The popularity badge also appears in the bibliographic record display in the catalog.
The name of the badge earned by the record and the popularity rating are displayed in the Record Details. - -A popularity badge of 5.0/5.0 has been applied to the most popular bibliographic records where the search term "harry potter" is found in the title. In the image above, the popularity badge has identified records from the Harry Potter series by J.K. Rowling as the most popular titles matching the search and has listed them first in the search results. - -image::media/popbadge3_web_client.PNG[popularity badge bib record display] diff --git a/docs-antora/modules/admin/pages/purge_holds.adoc b/docs-antora/modules/admin/pages/purge_holds.adoc deleted file mode 100644 index bb201cf0d8..0000000000 --- a/docs-antora/modules/admin/pages/purge_holds.adoc +++ /dev/null @@ -1,15 +0,0 @@ -== Purging holds == - -Similar to purging circulations one may wish to purge old (filled or canceled) hold information. This feature adds a database function and -settings for doing so. - -Purged holds are moved to the _action.aged_hold_request_ table with patron identifying information scrubbed, much like circulations are moved -to _action.aged_circulation_. - -The settings allow for a default retention age as well as distinct retention ages for holds filled, holds canceled, and holds canceled by -specific cancel causes. The most specific one wins unless a patron is retaining their hold history. In the latter case the patron's holds -are retained either way. - -Note that the function still needs to be called, which could be set up as a cron job or done more manually, say after statistics collection. -You can use the _purge_holds.srfsh_ script to purge holds from cron. 
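For instance, a crontab entry for the opensrf user could feed that script to srfsh on standard input. This is only a sketch: the schedule and both paths are hypothetical and will depend on your installation.

```
# Hypothetical crontab entry -- adjust paths to match your installation.
# Purges eligible holds nightly at 3:00 a.m.
0 3 * * *   /openils/bin/srfsh < /openils/var/purge_holds.srfsh
```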
-
diff --git a/docs-antora/modules/admin/pages/purge_user_activity.adoc b/docs-antora/modules/admin/pages/purge_user_activity.adoc
deleted file mode 100644
index bd39954229..0000000000
--- a/docs-antora/modules/admin/pages/purge_user_activity.adoc
+++ /dev/null
@@ -1,36 +0,0 @@
-== Purge User Activity ==
-
-User activity types are now set to transient by default for new Evergreen installs. This means only the most recent activity entry per user per activity type is retained in the database.
-
-.Use case
-****
-
-Setting more user activity types to transient collects less patron data, which helps protect patron privacy. Additionally, the _actor.usr_activity_ table gets really big really fast if all event types are non-transient.
-
-****
-
-This change does not affect existing activity types, which were set to non-transient by default. To make an activity type transient, modify the 'Transient' field of the desired type in the staff client under Admin -> Server Administration -> User Activity Types.
-
-Setting an activity type to transient means data for a given user will be cleaned up automatically if and when the user performs the activity in question. However, administrators can also force an activity cleanup via SQL. This is useful for ensuring that all old activity data is deleted and for controlling when the cleanup occurs, which may be useful on very large actor.usr_activity tables.
-
-To force clean all activity types:
-
-[source,sql]
-------------------------------------------------------------
-SELECT actor.purge_usr_activity_by_type(etype.id)
-  FROM config.usr_activity_type etype;
-------------------------------------------------------------
-
-NOTE: This could take hours to run on a very large actor.usr_activity table.
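The same function can also be called for a single activity type. In this sketch the id 1 is hypothetical; real ids can be looked up in the _config.usr_activity_type_ table:

```sql
-- Purge entries for one activity type only (the id 1 here is hypothetical).
SELECT actor.purge_usr_activity_by_type(1);
```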
diff --git a/docs-antora/modules/admin/pages/qstore_service.adoc b/docs-antora/modules/admin/pages/qstore_service.adoc
deleted file mode 100644
index 62829869db..0000000000
--- a/docs-antora/modules/admin/pages/qstore_service.adoc
+++ /dev/null
@@ -1,5 +0,0 @@
-== QStore service ==
-
-The QStore service is used by the user buckets feature in the Web client.
-
diff --git a/docs-antora/modules/admin/pages/receipt_template_editor.adoc b/docs-antora/modules/admin/pages/receipt_template_editor.adoc
deleted file mode 100644
index d88b249249..0000000000
--- a/docs-antora/modules/admin/pages/receipt_template_editor.adoc
+++ /dev/null
@@ -1,246 +0,0 @@
-= Print (Receipt) Templates =
-:toc:
-
-indexterm:[web client, receipt template editor]
-indexterm:[print templates]
-indexterm:[web client, print templates]
-indexterm:[receipt template editor]
-indexterm:[receipt template editor, macros]
-indexterm:[receipt template editor, checkout]
-
-The print templates follow W3C HTML standards (see http://w3schools.com/html/default.asp) and can make use of CSS and https://angularjs.org[Angular JS] to a certain extent.
-
-The Receipt Template Editor can be found at: *Administration -> Workstation -> Print Templates*
-
-The Editor can also be found on the default home page of the staff client.
-
-Receipts come in various types: bills, checkout, items, holds, transits, and payments.
-
-== Receipt Templates ==
-This is a complete list of the receipts currently in use in Evergreen.
-
-[horizontal]
-.List of Receipts
-*Bills, Current*:: Listing of current bills on an account.
-*Bills, Historic*:: Listing of bills that have had payments made on them. This is used on the Bill History Transaction screen.
-*Bills, Payment*:: Patron payment receipt.
-*Checkin*:: List of items that have been entered into the check-in screen.
-*Checkout*:: List of items currently checked out by a patron during the transaction.
-*Hold Transit Slip*:: This is printed when a hold goes in-transit to another library.
-*Hold Shelf Slip*:: This prints when a hold is fulfilled.
-*Holds for Bib Record*:: Prints a list of holds on a Title record.
-*Holds for Patron*:: Prints a list of holds on a patron record.
-*Hold Pull List*:: Prints the Holds Pull List.
-*Hold Shelf List*:: Prints a list of holds that are waiting to be picked up.
-*In-House Use List*:: Prints a list of items entered into In-House Use.
-*Item Status*:: Prints a list of items entered into Item Status.
-*Items Out*:: Prints the list of items a patron has checked out.
-*Patron Address*:: Prints the current patron's address.
-*Patron Note*:: Prints a note on a patron's record.
-*Renew*:: List of items that have been renewed using the Renew Item Screen.
-*Transit List*:: Prints the list of items in-transit from the Transit List.
-*Transit Slip*:: This is printed when an item goes in-transit to another location.
-
-
-== Editing Receipts ==
-
-To edit a Receipt:
-
-. Select *Administration -> Workstation -> Print Templates*.
-
-. Choose the Receipt in the drop-down list.
-. If you are using Hatch, you can choose different printers for different types of receipts with the Force Content field. If not, leave that field blank. Printer Settings can be set at *Administration -> Workstation -> Printer Settings*.
-+
-image::media/receipt1.png[select checkout]
-+
-. Make edits to the Receipt on the right-hand side.
-+
-image::media/receipt2.png[receipt screen]
-+
-. Click out of the section you are editing to see what your changes will look like on the left-hand side.
-. Click *Save Locally* in the upper right-hand corner.
-
-
-=== Formatting Receipts ===
-
-Print templates use variables for various pieces of information coming from the Evergreen database. These variables deal with everything from the library name to the due date of an item. Information from the database is entered in the templates with curly brackets `{{term}}`.
-
-Example: `{{checkout.title}}`
-
-Some print templates have sections that are repeated for each item in a list. For example, the portion of the Checkout print template below repeats for every item that is checked out, in HTML list format, by means of the 'ng-repeat' in the li tag.
-
------
-<ol>
-<li ng-repeat="checkout in circulations">
-{{checkout.title}}<br/>
-Barcode: {{checkout.copy.barcode}}<br/>
-Due: {{checkout.circ.due_date | date:"short"}}
-</li>
-</ol>
------
-
-=== Text Formatting ===
-
-General text formatting
-|========================================================================================
-| Goal | Original | Code | Result
-| Bold (HTML) | hello | <strong>hello</strong> | *hello*
-| Bold (CSS) | hello | <span style="font-weight:bold;">hello</span> | *hello*
-| Capitalize | circulation | <span style="text-transform:capitalize;">circulation</span> | Circulation
-| Currency | 1 | {{1 \| currency}} | $1.00
-|========================================================================================
-
-=== Date Formatting ===
-
-If you do not format dates, they will appear in a system format which isn't easily readable.
-
-|===================================================
-| Code | Result
-|{{today}} | 2017-08-01T14:18:51.445Z
-|{{today \| date:'short'}} | 8/1/17 10:18 AM
-|{{today \| date:'M/d/yyyy'}} | 8/1/2017
-|===================================================
-
-=== Currency Formatting ===
-
-Add " | currency" after any dollar amount that you wish to display as currency.
-
-Example: `{{xact.summary.balance_owed | currency}}` prints as `$2.50`
-
-
-=== Conditional Formatting ===
-
-You can use Angular JS to only print a line if the data matches. For example:
-
-`
<div ng-if="hold.email_notify">Notify by email: {{patron.email}}</div>
`
-
-This will only print the "Notify by email:" line if email notification is enabled for that hold.
-
-Example for checkout print template that will only print the amount a patron owes if there is a balance:
-
-`<div ng-if="patron_money.balance_owed">You owe the library ${{patron_money.balance_owed}}</div>`
-
-See also: https://docs.angularjs.org/api/ng/directive/ngIf
-
-=== Substrings ===
-
-To print just a sub-string of a variable, you can use a *limitTo* function: `{{variable | limitTo:limit:begin}}`, where *limit* is the number of characters you want, and *begin* (optional) is the zero-based position at which to start printing those characters. To limit the variable to the first four characters, you can use `{{variable | limitTo:4}}` to get "vari". To limit to the last five characters you can use `{{variable | limitTo:-5}}` to get "iable". And `{{variable | limitTo:3:2}}` will produce "ria".
-
-|========================================================================================
-| Original | Code | Result
-| The Sisterhood of the Traveling Pants | {{checkout.title \| limitTo:20}} | The Sisterhood of th
-| 123456789 | {{patron.card.barcode \| limitTo:-5}} | 56789
-| Roberts | {{patron.family_name \| limitTo:3:2}} | ber
-|========================================================================================
-
-
-=== Images ===
-
-You can use HTML and CSS to add an image to your print template if you have the image uploaded onto a publicly available web server. (It will currently only work with images on a secure (https) site.) For example:
-
-``
-
-=== Sort Order ===
-
-You can sort the items in an ng-repeat block using orderBy. For example, the following will sort a list of holds by the shelving location first, then by the call number:
-
-``
-
-=== Subtotals ===
-
-You can use Angular JS to add information from each iteration of a loop together to create a subtotal.
This involves setting an initial variable before the -ng-repeat loop begins, adding an amount to that variable from within each loop, -and then displaying the final amount at the end. - ------- -
You checked out the following items:<br/>
-<span ng-init="transactions.subtotal=0"></span> <1>
-<ol>
-<div ng-repeat="checkout in circulations">
-<span ng-init="transactions.subtotal=transactions.subtotal + (checkout.copy.price * 1)"></span> <2>
-<li>{{checkout.title}}<br/>
-Barcode: {{checkout.copy.barcode}}<br/>
-Due: {{checkout.circ.due_date | date:"M/d/yyyy"}}
-</li>
-</div>
-</ol>
-<br/>
-Total Amount Owed: {{patron_money.balance_owed | currency}}<br/>
-You Saved<br/>
-{{transactions.subtotal | currency}} <3>
-<br/>
-by borrowing from the library!
------- -<1> This line sets the variable. -<2> This adds the list item's price to the variable. -<3> This prints the total of the variable. - -== Exporting and importing Customized Receipts == - -Once you have your receipts set up on one machine you can export your receipts, -and then load them on to another machine. Just remember to *Save Locally* -once you import the receipts on the new machine. - -=== Exporting templates === -As you can only save a template on to the computer you are working on you will -need to export the template if you have more than one computer that prints out -receipts (i.e., more than one computer on the circulation desk, or another -computer in the workroom that you use to checkin items or capture holds with) - -. Export. -. Select the location to save the template to, name the template, and click -*Save*. -. Click OK. - -=== Importing Templates === - -. Click Import. -. Navigate to and select the template that you want to import. Click Open. -. Click OK. -. Click *Save Locally*. -. Click OK. - - -WARNING: Clearing your browser's cache/temporary files will clear any print -template customizations that you make unless you are using Hatch to store your -customizations. Be sure to export a copy of your customizations as a backup so -that you can import it as needed. - -TIP: If you are modifying your templates and you do not see the updates appear -on your printed receipt, you may need to go into *Administration -> Workstation --> Stored Preferences* and delete the stored preferences related to the print -template that you modified (for example, eg.print.template_context.bills_current). 
diff --git a/docs-antora/modules/admin/pages/restrict_Z39.50_sources_by_perm_group.adoc b/docs-antora/modules/admin/pages/restrict_Z39.50_sources_by_perm_group.adoc deleted file mode 100644 index d8f0ff5789..0000000000 --- a/docs-antora/modules/admin/pages/restrict_Z39.50_sources_by_perm_group.adoc +++ /dev/null @@ -1,67 +0,0 @@ -= Z39.50 Servers = -:toc: - -== Restrict Z39.50 Sources by Permission Group == - -In Evergreen versions preceding 2.2, all users with cataloging privileges could view all of the Z39.50 servers that were available for use in the staff client. In Evergreen version 2.2, you can use a permission to restrict users' access to Z39.50 servers. You can apply a permission to the Z39.50 servers to restrict access to that server, and then assign that permission to users or groups so that they can access the restricted servers. - -=== Administrative Settings === - -You can add a permission to limit use of Z39.50 servers, or you can use an existing permission. - -NOTE: You must be authorized to add permission types at the database level to add a new permission. - -Add a new permission: - -1) Create a permission at the database level. - -2) Click *Administration -> Server Administration -> Permissions* to add a permission to the staff client. - -3) In the *New Permission* field, enter the text that describes the new permission. - -image::media/Restrict_Z39_50_Sources_by_Permission_Group2.png[Create new permission to limit use of Z39.50 servers] - -4) Click *Add*. - -5) The new permission appears in the list of permissions. - - - -=== Restrict Z39.50 Sources by Permission Group === - -1) Click *Administration -> Server Administration -> Z39.50 Servers* - -2) Click *New Z39.50 Server*, or double click on an existing Z39.50 server to restrict its use. - -3) Select the permission that you added to restrict Z39.50 use from the drop down menu. - -image::media/Restrict_Z39_50_Sources_by_Permission_Group1.jpg[] - -4) Click *Save*. 
-
-5) Add the permission that you created to a user or user group so that they can access the restricted server.
-
-
-image::media/Restrict_Z39_50_Sources_by_Permission_Group3.jpg[]
-
-6) Users that log in to the staff client and have that permission will be able to see the restricted Z39.50 server.
-
-NOTE: As an alternative to creating a new permission to restrict use, you can use a preexisting permission. For example, your library uses a permission group called SuperCat, and only members in this group should have access to a restricted Z39.50 source. Identify a permission that is unique to the SuperCat group (e.g. CREATE_MARC) and apply that permission to the restricted Z39.50 server. Because these users are in the only group with the permission, they will be the only group with access to the restricted server.
-
-
-== Storing Z39.50 Server Credentials ==
-
-Staff have the option to apply Z39.50 login credentials to each Z39.50 server at different levels of the organizational unit hierarchy. Credentials can be set at the library branch or system level, or for an entire consortium. When credentials are set for a Z39.50 server, searches of the Z39.50 server will use the stored credentials. If a staff member provides alternate credentials in the Z39.50 search interface, the supplied credentials will override the stored ones. Staff have the ability to apply new credentials or clear existing ones in this interface. For security purposes, it is not possible for staff to retrieve or report on passwords.
-
-
-To set up stored credentials for a Z39.50 server:
-
-1) Go to *Administration -> Server Administration -> Z39.50 Servers*.
-
-2) Select a *Z39.50 Source* by clicking on the hyperlinked source name. This will take you to the Z39.50 Attributes for the source.
-
-3) At the top of the screen, select the *org unit* for which you would like to configure the credentials.
-
-4) Enter the *Username* and *Password*, and click *Apply Credentials*.
- -image::media/storing_z3950_credentials.jpg[Storing Z39.50 Credentials] diff --git a/docs-antora/modules/admin/pages/schema_bibliographic.adoc b/docs-antora/modules/admin/pages/schema_bibliographic.adoc deleted file mode 100644 index dad062326f..0000000000 --- a/docs-antora/modules/admin/pages/schema_bibliographic.adoc +++ /dev/null @@ -1,14 +0,0 @@ -= Notes about the Bibliographic Schema in the Database = -:toc: - -== Bibliographic fingerprint == - -Evergreen creates a fingerprint for each bib record, which can be found in the `fingerprint` column of the `biblio.record_entry` table. -This fingerprint is used to group together different bib records in a Group Formats & Editions search in the public catalog. - -The bibliographic fingerprint incorporates several subfields to distinguish between different items, including: - -* $n and $p from MARC title fields to better distinguish among records of the same series that may share the same title but have a different part. - -The bibliographic fingerprint distinguishes among the fields contributing to the fingerprint. This helps the system distinguish between a record -for the movie _Blue Steel_ and another record for the book _Blue_ written by Danielle _Steel_. diff --git a/docs-antora/modules/admin/pages/search_interface.adoc b/docs-antora/modules/admin/pages/search_interface.adoc deleted file mode 100644 index 225aec3b36..0000000000 --- a/docs-antora/modules/admin/pages/search_interface.adoc +++ /dev/null @@ -1,118 +0,0 @@ -= Designing the patron search experience = -:toc: - -== Editing the formats select box options in the search interface == - -You may wish to remove, rename or organize the options in the formats select -box. This can be accomplished from the staff client. - -. From the staff client, navigate to *Administration -> Server Administration -> Marc Coded -Value Maps* -. Select _Type_ from the *Record Attribute Type* select box. -. Double click on the format type you wish to edit. 
- -image::media/coded-value-1.png[Coded Value Map Format Editor] - -To change the label for the type, enter a value in the *Search Label* field. - -To move the option to a top list separated by a dashed line from the others, -check the *Is Simple Selector* check box. - -To hide the type so that it does not appear in the search interface, uncheck the -*OPAC Visible* checkbox. - -Changes will be immediate. - -== Adding and removing search fields in advanced search == - -It is possible to add and remove search fields on the advanced search page by -editing the _opac/parts/config.tt2_ file in your template directory. Look for -this section of the file: - ----- -search.adv_config = [ - {adv_label => l("Item Type"), adv_attr => ["mattype", "item_type"]}, - {adv_label => l("Item Form"), adv_attr => "item_form"}, - {adv_label => l("Language"), adv_attr => "item_lang"}, - {adv_label => l("Audience"), adv_attr => ["audience_group", "audience"], adv_break => 1}, - {adv_label => l("Video Format"), adv_attr => "vr_format"}, - {adv_label => l("Bib Level"), adv_attr => "bib_level"}, - {adv_label => l("Literary Form"), adv_attr => "lit_form", adv_break => 1}, - {adv_label => l("Search Library"), adv_special => "lib_selector"}, - {adv_label => l("Publication Year"), adv_special => "pub_year"}, - {adv_label => l("Sort Results"), adv_special => "sort_selector"}, -]; ----- - -For example, if you delete the line: - ----- -{adv_label => l("Language"), adv_attr => "item_lang"}, ----- - -the language field will no longer appear on your advanced search page. Changes -will appear immediately after you save your changes. - -You can also add fields based on Search Facet Groups that you create in the -staff client's Local Administration menu. This can be helpful if you want to -simplify your patrons' experience by presenting them with only certain -limiters (e.g. the most commonly used languages in your area). To do this, - -. 
Click *Administration -> Local Administration -> Search Filter Groups*.
-. Click *New*.
-. Enter descriptive values into the code and label fields. The owner needs to be set to your consortium.
-. Once the Facet Group is created, click on the blue hyperlinked code value.
-. Click the *New* button to create the necessary values for your field.
-. Go to the _opac/parts/config.tt2_ file, and add a line like the following, where *Our Library's Field* is the name you'd like to be displayed next to your field, and *facet_group_code* is the code you've added using the staff client.
-+
-----
- {adv_label => l("Our Library's Field"), adv_filter => "facet_group_code"},
-----
-
-== Changing the display of facets and facet groups ==
-
-Facets can be reordered on the search results page by editing the _opac/parts/config.tt2_ file in your template directory.
-
-Edit the following section of _config.tt2_, changing the order of the facet categories according to your needs:
-
-----
-
-facet.display = [
-    {facet_class => 'author', facet_order => ['personal', 'corporate']},
-    {facet_class => 'subject', facet_order => ['topic']},
-    {facet_class => 'series', facet_order => ['seriestitle']},
-    {facet_class => 'subject', facet_order => ['name', 'geographic']}
-];
-
-----
-
-You may also change the default number of facets appearing under each category by editing the _facet.default_display_count_ value in _config.tt2_. The default value is 5.
-
-== Facilitating search scope changes ==
-
-Users often search in a limited scope, such as only searching items in their local library. When they aren't able to find materials that meet their needs in a limited scope, they may wish to repeat their search in a system-wide or consortium-wide scope. Evergreen provides an optional button and checkbox to alter the depth of the search to a defined level.
-
-The button and checkbox are both enabled by default and can be configured in the Depth Button/Checkbox section of config.tt2.
-
-Noteworthy settings related to these features include:
-
-* `ctx.depth_sel_checkbox` -- set this to 1 to display the checkbox, 0 to hide it.
-* `ctx.depth_sel_button` -- set this to 1 to display the button, 0 to hide it.
-* `ctx.depth_sel_depth` -- the depth that should be applied by the button and checkbox. A value of 0 would typically search the entire consortium, and 1 would typically search the library's system.
-
-
diff --git a/docs-antora/modules/admin/pages/search_settings_web_client.adoc b/docs-antora/modules/admin/pages/search_settings_web_client.adoc
deleted file mode 100644
index d454ec8e57..0000000000
--- a/docs-antora/modules/admin/pages/search_settings_web_client.adoc
+++ /dev/null
@@ -1,60 +0,0 @@
-== Adjusting Relevance Ranking and Indexing ==
-
-=== Metabib Class FTS Config Maps ===
-
-NOTE: These settings will apply to all libraries in your consortium. There is no way to apply these settings to only one library or branch.
-
-* _Field Class_ - Reference to a field defined in Administration > Server Administration > MARC Search/Facet Classes.
-* _Text Search Config_ - Which Text Search config to use.
-* _Active_ - Check this checkbox to use this configuration for searching and indexing.
-* _Index Weight_ - The FTS index weight to use for this FTS config. Should be A, B, C, or D; defaults to C. You can see the exact numeric values for A, B, C, and D in Administration > Server Administration > MARC Search/Facet Classes.
-* _Index Language_ - An optional 3-letter code representing the language the record should be set to in order for this FTS config to be used for indexing.
-* _Search Language_ - An optional 3-letter code representing what preferred language search should be selected by the end-user in order for this FTS config to be applied to their search.
-* _Always Use_ - Check this checkbox to override the configuration for a more specific field. For example, if you check this box when entering a setting for the _author_ metabib class, it will override any settings you have made for the _author|personal_ field in the Administration > Server Administration > Metabib Field FTS Config Maps screen.
-
-=== Metabib Field FTS Config Maps ===
-
-NOTE: These settings will apply to all libraries in your consortium. There is no way to apply these settings to only one library or branch.
-
-* _Metabib Field_ - Reference to a field defined in Administration > Server Administration > MARC Search/Facet Fields.
-* _Text Search Config_ - Which Text Search config to use.
-* _Active_ - Check this checkbox to use this configuration for searching and indexing.
-* _Index Weight_ - The FTS index weight to use for this FTS config. Should be A, B, C, or D; defaults to C. You can see the exact numeric values for A, B, C, and D in Administration > Server Administration > MARC Search/Facet Classes.
-* _Index Language_ - An optional 3-letter code representing the language the record should be set to in order for this FTS config to be used for indexing.
-* _Search Language_ - An optional 3-letter code representing what preferred language search should be selected by the end-user in order for this FTS config to be applied to their search.
diff --git a/docs-antora/modules/admin/pages/security.adoc b/docs-antora/modules/admin/pages/security.adoc
deleted file mode 100644
index 35414d58cf..0000000000
--- a/docs-antora/modules/admin/pages/security.adoc
+++ /dev/null
@@ -1,32 +0,0 @@
-= Keeping Evergreen Current and Secure =
-:toc:
-
-== Introduction ==
-
-When it comes to running an Evergreen system, there are two special areas of concern:
-
-* How and when you decide to upgrade Evergreen software or apply fixes
-* How to take care of the actual server(s) that your Evergreen system uses
-
-The following hints will help you cope with these challenges.
-
-== Upgrading the Evergreen software ==
-
-The Evergreen community at large has agreed upon an upgrade cycle that produces new major releases twice a year, in Spring and Fall. Major releases can contain new features. The community supports each major release with 12 subsequent monthly minor releases that contain only bug fixes, and continues to provide security fixes if necessary for an additional three months after the end of the regular minor bug fix support, for a total of 15 months of support for each major release.
-
-As a general rule, as the Evergreen community releases each new version of the Evergreen software, they also provide a guideline on how to upgrade from the previous release as part of the official Evergreen documentation at http://docs.evergreen-ils.org. Follow the instructions exactly and in the order that they are given--and if you run into a problem, report it to the community with as much detail about the error message or symptoms of the problem as you can.
-
-Keep the Evergreen release schedule in mind when planning your own testing and upgrade schedules. If you participate in testing new Evergreen releases during the release candidate stages, you will prepare your own library for the upgrade process and help flush out any remaining bugs before the major release of the software.
This also gives you time to prepare the members of your library for the upcoming changes by giving them the chance, when possible, to familiarize themselves with new features on your test system. You also have the chance to prepare supporting materials, like handouts and other kinds of documentation, to help your users before, during and after each upgrade cycle.
-
-== Securing the server(s) on which your Evergreen installation runs ==
-
-An Evergreen installation requires interaction between many different components and, depending on the size of your consortium and how many servers you have, it can range from quite complex to extremely complex. That said, there are a number of standard guidelines that you can follow to secure your server.
-
-* Keep your server up-to-date. Apply security updates as soon as possible when they come out to prevent your system from being exposed to a known vulnerability.
-* Pay close attention to account administration on the server. Do not give any user on the server more power than they need.
-* Disable services that you do not need.
-* Pay attention to your system's log files to see what kind of activity is happening and notice anything unusual.
-* A central idea to server security is to make it unreasonably difficult for anyone who tries to compromise your system. Let them choose targets more vulnerable than yours.
-
-This topic is very rich and there are many resources available, both in print and on the web. It is worth your time to learn more.
-
diff --git a/docs-antora/modules/admin/pages/sip_server.adoc b/docs-antora/modules/admin/pages/sip_server.adoc
deleted file mode 100644
index 1e8479baa3..0000000000
--- a/docs-antora/modules/admin/pages/sip_server.adoc
+++ /dev/null
@@ -1,721 +0,0 @@
-= SIP Server =
-:toc:
-
-== About the SIP Protocol ==
-
-indexterm:[Automated Circulation System]
-indexterm:[SelfCheck]
-indexterm:[Automated Material Handling]
-
-+SIP+, standing for +Standard Interchange Protocol+, was developed by the +3M corporation+ to be a common protocol for data transfer between ILSs (referred to in +SIP+ as an _ACS_, or _Automated Circulation System_) and a third-party device. Originally, the protocol was developed for use with _3M SelfCheck_ (often abbreviated SC, not to be confused with Staff Client) systems, but has since expanded to other companies and devices. It is now common to find +SIP+ in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices. Some examples include:
-
-* Patron Authentication (computer access, subscription databases)
-* Automated Material Handling (AMH)
-** The automated sorting of items, often to bins or book carts, based on shelving location or other programmable criteria
-
-== Installing the SIP Server ==
-
-
-
-This is a rough intro to installing the +SIP+ server for Evergreen.
-
-=== Getting the code ===
-
-Current +SIP+ server code lives in the Evergreen git repository:
-
-  cd /opt
-  git clone git://git.evergreen-ils.org/SIPServer.git SIPServer
-
-
-=== Configuring the Server ===
-
-indexterm:[configuration files, oils_sip.xml]
-
-. Type the following commands from the command prompt:
-
-  $ sudo su opensrf
-  $ cd /openils/conf
-  $ cp oils_sip.xml.example oils_sip.xml
-
-. Edit oils_sip.xml. Change the commented out section to this:
-
-
-
-. max_servers will directly correspond to the number of allowed +SIP+ clients. Set the number accordingly, but bear in mind that too many connections can exhaust memory.
On a 4G RAM/4 CPU server (that is also running
-Evergreen), it is not recommended to exceed 100 +SIP+ client connections.
-
-==== Setting the encoding ====
-
-SIPServer looks for the encoding in the following
-places:
-
-1. An +encoding+ attribute on the +account+ element for the currently active SIP account.
-2. The +encoding+ element that is a child of the +institution+ element of the currently active SIP account.
-3. The +encoding+ element that is a child of the +implementation_config+ element that is itself a child of the +institution+ element of the currently active SIP account.
-4. If none of the above exists, then the default encoding (ASCII) is used.
-
-Option 3 is a legacy option. It is recommended that you alter your configuration to
-move this element out of the +implementation_config+ element and into
-its parent +institution+ element. Ideally, SIPServer should *not* look into
-the implementation config, and this check may be removed at some time
-in the future.
-
-==== Datatypes ====
-
-The `msg64_hold_datatype` setting is similar to `msg64_summary_datatype`, but affects holds instead of circulations.
-When set to `barcode`, holds information will be delivered as a set of copy barcodes instead of title strings for
-patron info requests. With barcodes, SIP clients can both find the title strings for display (via item info requests)
-and make subsequent hold-related action requests, like holds cancellation.
-
-
-=== Adding SIP Users ===
-
-indexterm:[configuration files, oils_sip.xml]
-
-. Type the following commands from the command prompt:
-
-  $ sudo su opensrf
-  $ cd /openils/conf
-
-. In the +<accounts>+ section, add +SIP+ client login information. Make sure that all +<login>+ elements use the same
-institution attribute, and make sure the institution is listed in +<institutions>+. All attributes in the
-+<login>+ section will be used by the +SIP+ client.
-
-. In Evergreen, create a new profile group called +SIP+. This group should be a sub-group of +Users+ (not +Staff+
-or +Patrons+).
Set _Editing Permission_ as *group_application.user.sip_client* and give the group the following
-permissions:
-+
- COPY_CHECKIN
- COPY_CHECKOUT
- CREATE_PAYMENT
- RENEW_CIRC
- VIEW_CIRCULATIONS
- VIEW_COPY_CHECKOUT_HISTORY
- VIEW_PERMIT_CHECKOUT
- VIEW_USER
- VIEW_USER_FINES_SUMMARY
- VIEW_USER_TRANSACTIONS
-+
-OR use SQL like:
-+
-
- INSERT INTO permission.grp_tree (name,parent,description,application_perm)
- VALUES ('SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client');
-
- INSERT INTO
-     permission.grp_perm_map (grp, perm, depth, grantable)
- SELECT
-     g.id, p.id, 0, FALSE
- FROM
-     permission.grp_tree g,
-     permission.perm_list p
- WHERE
-     g.name = 'SIP' AND
-     p.code IN (
-         'COPY_CHECKIN',
-         'COPY_CHECKOUT',
-         'CREATE_PAYMENT',
-         'RENEW_CIRC',
-         'VIEW_CIRCULATIONS',
-         'VIEW_COPY_CHECKOUT_HISTORY',
-         'VIEW_PERMIT_CHECKOUT',
-         'VIEW_USER',
-         'VIEW_USER_FINES_SUMMARY',
-         'VIEW_USER_TRANSACTIONS'
-     );
-+
-Verify:
-+
-
- SELECT *
- FROM permission.grp_perm_map pgpm
-     INNER JOIN permission.perm_list ppl ON pgpm.perm = ppl.id
-     INNER JOIN permission.grp_tree pgt ON pgt.id = pgpm.grp
- WHERE pgt.name = 'SIP';
-
-
-. For each account created in the +<account>+ section of oils_sip.xml, create a user (via the staff client user
-editor) that has the same username and password, and put that user into the +SIP+ group.
-
-[NOTE]
-===================
-The expiration date will affect the +SIP+ users' connection, so you might want to make a note of this
-somewhere.
-===================
-
-=== Running the server ===
-
-To start the +SIP+ server, type the following commands from the command prompt:
-
-
- $ sudo su opensrf
-
- $ oils_ctl.sh -a [start|stop|restart]_sip
-
-indexterm:[SIP]
-
-
-=== Logging-SIP ===
-
-==== Syslog ====
-
-indexterm:[syslog]
-
-
-It is useful to log +SIP+ requests to a separate file, especially during initial setup, by modifying your syslog config file.
-
-. Edit syslog.conf.
-
- $ sudo vi /etc/syslog.conf  # maybe /etc/rsyslog.conf
-
-
-.
Add this:
-
- local6.*                -/var/log/SIP_evergreen.log
-
-. Syslog expects the logfile to exist, so create the file.
-
- $ sudo touch /var/log/SIP_evergreen.log
-
-. Restart sysklogd.
-
- $ sudo /etc/init.d/sysklogd restart
-
-
-==== Syslog-NG ====
-
-indexterm:[syslog-NG]
-
-. Edit the logging config.
-
- sudo vi /etc/syslog-ng/syslog-ng.conf
-
-. Add:
-
- # +SIP2+ for Evergreen
- filter f_eg_sip { level(warn, err, crit) and facility(local6); };
- destination eg_sip { file("/var/log/SIP_evergreen.log"); };
- log { source(s_all); filter(f_eg_sip); destination(eg_sip); };
-
-. Syslog-ng expects the logfile to exist, so create the file.
-
- $ sudo touch /var/log/SIP_evergreen.log
-
-. Restart syslog-ng.
-
- $ sudo /etc/init.d/syslog-ng restart
-
-
-indexterm:[SIP]
-
-
-=== Testing Your SIP Connection ===
-
-* In the root directory of the SIPServer code:
-
- $ cd SIPServer/t
-
-* Edit SIPtest.pm, changing the $instid, $server, $username, and $password variables. This will be
-enough to test connectivity. To run all tests, you'll need to change all the variables in the _Configuration_ section.
-
- $ PERL5LIB=../ perl 00sc_status.t
-+
-This should produce something like:
-+
-
- 1..4
- ok 1 - Invalid username
- ok 2 - Invalid username
- ok 3 - login
- ok 4 - SC status
-
-* Don't be dismayed at *Invalid Username*. That's just one of the many tests that are run.
-
-=== More Testing ===
-
-Once you have opened up either the +SIP+ or +SIP2+ ports to be accessible from outside, you can do some testing
-via +telnet+. In the following tests:
-
-* Replace +$server+ with your server hostname (or +localhost+ if you want to
-  skip testing external access for now);
-* Replace +$username+, +$password+, and +$instid+ with the corresponding values
-  in the +<account>+ section of your SIP configuration file;
-* Replace the +$user_barcode+ and +$user_password+ variables with the values
-  for a valid user.
-* Replace the +$item_barcode+ variable with the value for a valid item.
- -/////////////// -Comments because we don't want to indent these numbered bullets! -/////////////// - -. Start by testing your ability to log into the SIP server: -+ -[NOTE] -====================== -We are using 6001 here which is associated with +SIP2+ as per our configuration. -====================== -+ - $ telnet $server 6001 - Connected to $server. - Escape character is '^]'. - 9300CN$username|CO$password|CP$instid -+ -If successful, the SIP server returns a +941+ result. A result of +940+, -however, indicates an unsuccessful login attempt. Check the ++ -section of your SIP configuration and try again. - -. Once you have logged in successfully, replace the variables in the following -line and paste it into the telnet session: -+ - 2300120080623 172148AO$instid|AA$user_barcode|AC$password|AD$user_password -+ -If successful, the SIP server returns the patron information for $user_barcode, -similar to the following: -+ - 24 Y 00120100113 170738AEFirstName MiddleName LastName|AA$user_barcode|BLY|CQY - |BHUSD|BV0.00|AFOK|AO$instid| -+ -The response declares it is a valid patron BLY with a valid password CQY and shows the user's +$name+. - -. To test the SIP server's item information response, issue the following request: -+ - 1700120080623 172148AO$instid|AB$item_barcode|AC$password -+ -If successful, the SIP server returns the item information for $item_barcode, -similar to the following: -+ - 1803020120160923 190132AB30007003601852|AJRégion de Kamouraska|CK001|AQOSUL|APOSUL|BHCAD - |BV0.00|BGOSUL|CSCA2 PQ NR46 73R -+ -The response declares it is a valid item, with the title, owning library, -permanent and current locations, and call number. - -indexterm:[SIP] - -== SIP Communication == - -indexterm:[SIP Server, SIP Communication] - -+SIP+ generally communicates over a +TCP+ connection (either raw sockets or over +telnet+), but can also -communicate via serial connections and other methods. 
In Evergreen, the most common deployment is a +RAW+ socket
-connection on port 6001.
-
-+SIP+ communication consists of strings of messages. Each message request and response begins with a 2-digit
-``command'' - requests are usually odd numbers, and the matching response is usually that number plus one. The
-combination of request and response command numbers is often referred to as a _Message Pair_ (for example,
-a 23 command is a request for patron status, a 24 response is a patron status, and 23/24 is the patron
-status message pair). The table in the next section shows the message pairs and a description of each.
-
-For clarification, the ``Request'' is from the device (selfcheck or otherwise) to the ILS/ACS. The response is… the
-response to the request ;).
-
-Within each request and response, a number of fields are used, either fixed-width or variable-length - the latter
-separated with a | (pipe symbol) and preceded by a 2-character field identifier. The fields vary between message pairs.
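As a sketch of that field layout, the variable-length portion of a message can be split on the pipe delimiter and keyed by the 2-character identifiers (the sample fields below are borrowed from the checkin example in this chapter; real messages also carry fixed-width fields before the first delimited field):

```shell
#!/bin/sh
# Sketch: split the variable-length fields of a SIP message on '|' and
# print each 2-character field identifier with its value.
msg='AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|'
parsed=$(printf '%s' "$msg" | tr '|' '\n' |
    awk 'NF { printf "%s = %s\n", substr($0, 1, 2), substr($0, 3) }')
echo "$parsed"
# → AO = BR1
#   AB = 1565921879
#   AQ = BR1
#   AJ = Perl 5 desktop reference
```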
-
-|===========================================================================
-| *Pair* | *Name*               | *Supported?*           | *Details*
-| 01    | Block Patron          | Yes                    | <<sip_01_block_patron>> - ACS responds with 24 Patron Status Response
-| 09-10 | Checkin               | Yes (with extensions)  | <<sip_09-10_checkin>>
-| 11-12 | Checkout              | Yes (no renewals)      | <<sip_11-12_checkout>>
-| 15-16 | Hold                  | Partially supported    | <<sip_15-16_hold>>
-| 17-18 | Item Information      | Yes (no extensions)    | <<sip_17-18_item_information>>
-| 19-20 | Item Status Update    | No                     | <<sip_19-20_item_status_update>> - Returns Patron Enable response, but doesn't make any changes in EG
-| 23-24 | Patron Status         | Yes                    | <<sip_23-24_patron_status>> - 63/64 ``Patron Information'' preferred
-| 25-26 | Patron Enable         | No                     | <<sip_25-26_patron_enable>> - Used during system testing and validation
-| 29-30 | Renew                 | Yes                    | <<sip_29-30_renew>>
-| 35-36 | End Session           | Yes                    | <<sip_35-36_end_session>>
-| 37-38 | Fee Paid              | Yes                    | <<sip_37-38_fee_paid>>
-| 63-64 | Patron Information    | Yes (no extensions)    | <<sip_63-64_patron_information>>
-| 65-66 | Renew All             | Yes                    | <<sip_65-66_renew_all>>
-| 93-94 | Login                 | Yes                    | <<sip_93-94_login>> - Must be first command to Evergreen ACS (via socket) or +SIP+ will terminate
-| 97-96 | Resend last message   | Yes                    | <<sip_97-96_resend>>
-| 99-98 | SC-ACS Status         | Yes                    | <<sip_99-98_sc_and_acs_status>>
-|===========================================================================
-
-[#sip_01_block_patron]
-
-=== 01 Block Patron ===
-
-indexterm:[SelfCheck]
-
-A selfcheck will issue a *Block Patron* command if a patron leaves their card in a selfcheck machine or if the
-selfcheck detects tampering (such as attempts to disable multiple items during a single item checkout, multiple failed
-PIN entries, etc.).
-
-In Evergreen, this command does the following:
-
-* User alert message: _CARD BLOCKED BY SELF-CHECK MACHINE_ (this is independent of the AL _Blocked
-Card Message_ field).
-
-* Card is marked inactive.
-
-The request looks like:
-
-    01<card retained><date>[fields AO, AL, AA, AC]
-
-_Card Retained_: A single character field of Y or N - tells the ACS whether the SC has retained the card (e.g., left in
-the machine) or not.
-
-_Date_: An 18 character field for the date/time when the block occurred.
_Format_: YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time; ``Z'' (3 blanks and a Z)
-represents UTC (GMT/Zulu))
-
-_Fields_: See <<fields>> for more details.
-
-The response is a 24 ``Patron Status Response'' with the following:
-
-* Charge privileges denied
-* Renewal privileges denied
-* Recall privileges denied (hard-coded in every 24 or 64 response)
-* Hold privileges denied
-* Screen Message 1 (AF): _blocked_
-* Patron
-
-[#sip_09-10_checkin]
-
-=== 09/10 Checkin ===
-
-The request looks like:
-
-    09<No block (Offline)><xact date><return date>[Fields AP,AO,AB,AC,CH,BI]
-
-_No Block (Offline)_: A single character field of _Y_ or _N_ - offline transactions are not currently supported, so send _N_.
-
-_xact date_: An 18 character field for the date/time when the checkin occurred. Format:
-YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time; ``Z'' (3 blanks and a Z) represents
-UTC (GMT/Zulu))
-
-_Fields_: See <<fields>> for more details.
-
-The response is a 10 ``Checkin Response'' with the following:
-
-    10<resensitize><magnetic media><alert><xact date>[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG]
-
-Example (with a remote hold):
-
-    09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01|
-
-    101YNY20100623 165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996
-    |CTBR3|CY373827|DANicholas Richard Woodard|CV02|
-
-Here you can see a hold alert for patron CY _373827_, named DA _Nicholas Richard Woodard_, to be picked up at CT
-``BR3''. Since the transaction is happening at AO ``BR1'', the alert type CV is 02 for _hold at remote library_. The
-possible values for CV are:
-
-* 00: unknown
-
-* 01: local hold
-
-* 02: remote hold
-
-* 03: ILL transfer (not used by EG)
-
-* 04: transfer
-
-* 99: other
-
-indexterm:[magnetic media]
-
-[NOTE]
-===============
-The logic for Evergreen to determine whether the content is magnetic_media comes from
-or search_config_circ_modifier. The default is non-magnetic. The same is true for media_type (default
-001).
Evergreen does not populate the collection_code because it does not really have any, but it will provide -the call_number where available. - -Unlike the +item_id+ (barcode), the +title_id+ is actually a title string, unless the configuration forces the -return of the bib ID. - -Don't be confused by the different branches that can show up in the same response line. - -* AO is where the transaction took place, - -* AQ is the ``permanent location'', and - -* CT is the _destination location_ (i.e., pickup lib for a hold or target lib for a transfer). -=============== - -[#sip_11-12_checkout] - -=== 11/12 Checkout === - - -[#sip_15-16_hold] - -=== 15/16 Hold === - -Evergreen supports the Hold message for the purpose of canceling -holds. It does not currently support creating hold requests via SIP2. - - -[#sip_17-18_item_information] - -=== 17/18 Item Information === - -The request looks like: - - 17[fields: AO,AB,AC] - -The request is very terse. AC is optional. - -The following response structure is for +SIP2+. (Version 1 of the protocol had only 6 total fields.) - - 18 - [fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS] - -Example: - - 1720060110 215612AOBR1|ABno_such_barcode| - - 1801010120100609 162510ABno_such_barcode|AJ| - - 1720060110 215612AOBR1|AB1565921879| - - 1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1 - |CTBR3|CSQA76.73.P33V76 1996| - -The first case is with a bogus barcode. The latter shows an item with a circulation_status of _10_ for _in transit between -libraries_. The known values of +circulation_status+ are enumerated in the spec. - -indexterm:[Automated Material Handling (AMH)] - -EXTENSIONS: The CT field for _destination location_ and CS _call number_ are used by Automated Material Handling -systems. 
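To show how a client might consume an 18 response, here is a sketch that pulls the fixed-width +circulation_status+ and the variable-length AJ (title) field out of the second example above (the response string is copied with its whitespace condensed):

```shell
#!/bin/sh
# Sketch: parse an 18 Item Information response.
resp='1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1|CTBR3|CSQA76.73.P33V76 1996|'
# Characters 3-4 are the fixed-width circulation_status ('10' = in transit).
circ_status=$(printf '%s' "$resp" | cut -c3-4)
# Variable-length fields are '|'-delimited; strip the AJ field identifier.
title=$(printf '%s' "$resp" | tr '|' '\n' | sed -n 's/^AJ//p')
echo "$circ_status"   # → 10
echo "$title"         # → Perl 5 desktop reference
```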
-
-
-[#sip_19-20_item_status_update]
-
-=== 19/20 Item Status Update ===
-
-
-[#sip_23-24_patron_status]
-
-=== 23/24 Patron Status ===
-
-Example:
-
-    2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password|
-
-    24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS|
-
-    2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password|
-
-    24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS|
-
-    2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|
-
-    24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS|
-
-. The BL field (+SIP2+, optional) is _valid patron_, so the _N_ value means _bad_barcode_ doesn't match a patron, and the
-_Y_ value means 999999 does.
-
-. The CQ field (+SIP2+, optional) is _valid password_, so the _N_ value means _bad_password_ doesn't match 999999's
-password, and the _Y_ means _userpassword_ does.
-
-So if you were building the most basic +SIP2+ authentication client, you would check for _|CQY|_ in the response to
-know the user's barcode and password are correct (|CQY| implies |BLY|, since you cannot check the password
-unless the barcode exists). However, in practice, depending on the application, there are other factors to consider in
-authentication, like whether the user is blocked from checkout, owes excessive fines, reported their card lost, etc.
-These limitations are reflected in the 14-character _patron status_ string immediately following the _24_ code. See the
-field definitions in your copy of the spec.
-
-
-[#sip_25-26_patron_enable]
-
-=== 25/26 Patron Enable ===
-
-Not yet supported.
-
-
-[#sip_29-30_renew]
-
-=== 29/30 Renew ===
-
-Evergreen supports the Renew message. Evergreen checks whether a penalty is specifically configured to block
-renewals before blocking any SIP renewal.
-
-
-[#sip_35-36_end_session]
-
-=== 35/36 End Session ===
-
-    3520100505 115901AOBR1|AA999999|
-
-    36Y20100507 161213AOCONS|AA999999|AFThank you!|
-
-The _Y/N_ code immediately after the 36 indicates _success/failure_. Failure is not particularly meaningful or important
-in this context, and for Evergreen it is hardcoded to _Y_.
-
-
-
-[#sip_37-38_fee_paid]
-
-=== 37/38 Fee Paid ===
-
-Evergreen supports the Fee Paid message.
-
-
-[#sip_63-64_patron_information]
-
-=== 63/64 Patron Information ===
-
-Attempting to retrieve patron info with a bad barcode:
-
-    6300020060329 201700 AOBR1|AAbad_barcode|
-
-    64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1|
-
-Attempting to retrieve patron info with a good barcode (but a bad patron password):
-
-    6300020060329 201700 AOBR1|AA999999|ADbadpwd|
-
-    64 Y 00020100623 141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00
-    |BD2 Meadowvale Dr. St Thomas, ON Canada
-
-    90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons
-    |PIUnfiltered|AFOK|AOBR1|
-
-See <<sip_23-24_patron_status>> for info on the +BL+ and +CQ+ fields.
-
-
-
-[#sip_65-66_renew_all]
-
-=== 65/66 Renew All ===
-
-Evergreen supports the Renew All message.
-
-
-[#sip_93-94_login]
-
-=== 93/94 Login ===
-
-Example:
-
-    9300CNsip_01|CObad_value|CPBR1|
-
-    [Connection closed by foreign host.]
-    ...
-
-    9300CNsip_01|COsip_01|CPBR1|
-
-    941
-
-_941_ means successful terminal login. _940_ or getting dropped means failure.
-
-When using a version of SIPServer that supports the feature, the Location (CP) field of the Login (93) message will be
-used as the workstation name if supplied. Blank or missing location fields will be ignored. This allows users or reports
-to determine which selfcheck performed a circulation.
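Because the Login message must be the first command sent on the socket, a client integration usually builds it from configured credentials. A hedged sketch (the two leading zeroes are the fixed-width UID and PWD algorithm fields, ``0'' meaning plain text; the credentials are placeholders):

```shell
#!/bin/sh
# Sketch: assemble a 93 Login request terminated by a carriage return.
# Arguments: SIP username, SIP password, location code (all placeholders).
sip_login() {
    printf '93%s%sCN%s|CO%s|CP%s|\r' '0' '0' "$1" "$2" "$3"
}
sip_login sip_01 sip_01 BR1 | tr -d '\r'
# → 9300CNsip_01|COsip_01|CPBR1|
```

On a test system you could pipe the result straight into the raw socket - for example `sip_login sip_01 sip_01 BR1 | nc $server 6001` - and check that the reply begins with _941_.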
-
-
-[#sip_97-96_resend]
-
-=== 97/96 Resend ===
-
-
-[#sip_99-98_sc_and_acs_status]
-
-=== 99/98 SC and ACS Status ===
-
-The request looks like:
-
-    99<status code><max print width><protocol version>
-
-All 3 fields are required:
-
-* status code - 1 character:
-** 0: SC is OK
-** 1: SC is out of paper
-** 2: SC shutting down
-
-* max print width - 3 characters - the integer number of characters the client can print
-
-* protocol version - 4 characters - x.xx
-
-The response is:
-
-    98<on-line status><checkin ok><checkout ok><ACS renewal policy>
-    <status update ok><offline ok><timeout period>
-    <retries allowed><date/time sync><protocol version><institution id>
-    <library name><supported messages><terminal location>
-    <screen message><print line>
-
-Example:
-
-    9910302.00
-
-    98YYYYNN60000320100510 1717202.00AOCONS|BXYYYYYYYYYNYNNNYN|
-
-The Supported Messages field +BX+ appears only in +SIP2+, and specifies whether each of 16 different +SIP+ commands is
-supported by the +ACS+ or not.
-
-
-[#fields]
-
-=== Fields ===
-
-All fixed-length fields in a communication will appear before the first variable-length field. This allows for simple
-parsing. Variable-length fields are by definition delimited, though there will not necessarily be an initial delimiter
-between the last fixed-length field and the first variable-length one. It would be unnecessary, since you should know
-the exact position where that field begins already.
-
-
-== Patron privacy and the SIP protocol ==
-
-SIP traffic includes a lot of patron information, and it is not
-encrypted by default. It is strongly recommended that you
-encrypt any SIP traffic.
-
-=== SIP server configuration ===
-
-On the SIP server, use `iptables` or `/etc/hosts.allow` to allow SSH connections on port 22 from the SIP client machine. You will probably want to have very restrictive rules
-on which IP addresses can connect to this server.
-
-
-=== SSH tunnels on SIP clients ===
-
-SSH tunnels are a good fit for use cases like self-check machines, because it is relatively easy to open the connection automatically. Using a VPN is another option,
-but many VPN clients require manual steps to open the VPN connection.
-
-. If the SIP client will be on a Windows machine, install Cygwin on the SIP client.
-. On the SIP client, use `ssh-keygen` to generate an SSH key.
-.
Add the public key to /home/my_sip_user/.ssh/authorized_keys on your SIP server to enable logins without using the UNIX password. -. Configure an SSH tunnel to open before every connection. You can do this in several ways: -.. If the SIP client software allows you to run an arbitrary command before - each SIP connection, use something like this: -+ -[source,bash] ----- -ssh -f -L 6001:localhost:6001 my_sip_user@my_sip_server.com sleep 10 ----- -+ -.. If you feel confident that the connection won't get interrupted, you can have something like this run at startup: -+ -[source,bash] ----- -ssh -f -N -L 6001:localhost:6001 my_sip_user@my_sip_server.com ----- -+ -.. If you want to constantly poll to make sure that the connection is still running, you can do something like this as a cron job or scheduled task on the SIP client machine: -[source,bash] ----- -#!/bin/bash -instances=`/bin/ps -ef | /bin/grep ssh | /bin/grep -v grep | /bin/wc -l` -if [ $instances -eq 0 ]; then - echo "Restarting ssh tunnel" - /usr/bin/ssh -L 6001:localhost:6001 my_sip_user@my_sip_server.com -f -N -fi ----- - diff --git a/docs-antora/modules/admin/pages/sitemap_admin.adoc b/docs-antora/modules/admin/pages/sitemap_admin.adoc deleted file mode 100644 index ded320bb3a..0000000000 --- a/docs-antora/modules/admin/pages/sitemap_admin.adoc +++ /dev/null @@ -1,39 +0,0 @@ -=== Running the sitemap generator === -The `sitemap_generator` script must be invoked with the following argument: - -* `--lib-hostname`: specifies the hostname for the catalog (for example, - `--lib-hostname https://catalog.example.com`); all URLs will be generated - appended to this hostname - -Therefore, the following arguments are useful for generating multiple sitemaps -per Evergreen instance: - -* `--lib-shortname`: limit the list of record URLs to those which have copies - owned by the designated library or any of its children; -* `--prefix`: provides a prefix for the sitemap index file names - -Other options enable you to 
override the OpenSRF configuration file and the -database connection credentials, but the default settings are generally fine. - -Note that on very large Evergreen instances, sitemaps can consume hundreds of -megabytes of disk space, so ensure that your Evergreen instance has enough room -before running the script. - -=== Sitemap details === - -The sitemap generator script includes located URIs as well as items - listed in the `asset.opac_visible_copies` materialized view, and checks - the children or ancestors of the requested libraries for holdings as well. - -=== Scheduling === -To enable search engines to maintain a fresh index of your bibliographic -records, you may want to include the script in your cron jobs on a nightly or -weekly basis. - -Sitemap files are generated in the same directory from which the script is -invoked, so a cron entry will look something like: - ------------------------------------------------------------------------- -12 2 * * * cd /openils/var/web && /openils/bin/sitemap_generator ------------------------------------------------------------------------- - diff --git a/docs-antora/modules/admin/pages/staff_client-column_picker.adoc b/docs-antora/modules/admin/pages/staff_client-column_picker.adoc deleted file mode 100644 index 4d047a31aa..0000000000 --- a/docs-antora/modules/admin/pages/staff_client-column_picker.adoc +++ /dev/null @@ -1,44 +0,0 @@ -= Column Picker = -:toc: - -indexterm:[Column Picker] - -From many screens and lists, you can click on the column picker -drop-down menu to change which columns are displayed. - -image::media/column_picker_web.png[Column picker menu options] - - -To show or hide a column, simply click the column name in the menu. For -more advanced control of column visibility and their position in the -grid, choose *Manage Columns* from the menu. The popup saves changes -as they are made. - -Columns at the top of the list will appear at the left end of the grid. 
- -image::media/column_picker_popup.png[Column picker popup window] - - -To adjust the width of columns, choose *Manage Column Widths* from -the menu, then click the "Expand" or "Shrink" icons in each column. -These can be clicked multiple times to reach the desired width. - -image::media/column_picker_config_widths.png[Column picker manage widths] - - -After customizing the display you may save your changes by choosing -*Save Columns* from the drop-down menu. These settings are stored in the -browser and are not connected with a specific login or registered -workstation. Each computer will need to be configured separately. - -image::media/column_picker_web_save.png[column_picker_web_save] - - -Some lists have a different design, and some of them can also be customized. -Simply right-click the header row of any of the columns, and the column -picker will appear. When you are finished customizing the display, scroll -to the bottom of the Column Picker window and click *Save*. - -image::media/column_picker_dojo.png[column_picker_dojo] - - diff --git a/docs-antora/modules/admin/pages/staff_client-recent_searches.adoc b/docs-antora/modules/admin/pages/staff_client-recent_searches.adoc deleted file mode 100644 index 880ffd1a6f..0000000000 --- a/docs-antora/modules/admin/pages/staff_client-recent_searches.adoc +++ /dev/null @@ -1,40 +0,0 @@ -= Recent Staff Searches = -:toc: - -This feature enables you to view your recent searches as you perform them in the staff client. The number of searches that you can view is configurable. This feature is only available through the staff client; it is not available to patrons in the OPAC. - -== Administrative Settings == - -By default, ten searches will be saved as you search the staff client. If you want to change the number of saved searches, then you can configure the number of searches that you wish to save through the *Library Settings Editor* in the *Admin* module. - -To configure the number of recent staff searches: - -. 
Click *Administration -> Local Administration -> Library Settings Editor.* -. Scroll to *OPAC: Number of staff client saved searches to display on left side of results and record details pages* -. Click *Edit*. -. Select a *Context* from the drop down menu. -. Enter the number of searches that you wish to save in the *Value* field. -. Click *Update Setting* - -image::media/Saved_Catalog_Searches_2_21.jpg[Saved_Catalog_Searches_2_21] - - -NOTE: To retain this setting, the system administrator must restart the web server. - -If you do not want to save any searches, then you can turn off this feature. - -To deactivate this feature: - -. Follow steps 1-4 (one through four) as listed in the previous section. -. In the *value* field, enter 0 (zero). -. Click *Update Setting.* This will prevent you from viewing any saved searches. - - -== Recent Staff Searches == - -Evergreen will save staff searches that are entered through either the basic or advanced search fields. To view recent staff searches: - -. Enter a search term in either the basic or advanced search fields. -. Your search results for the current search will appear in the middle of the screen. The most recent searches will appear on the left side of the screen. - -image::media/Saved_Catalog_Searches_2_22.jpg[Saved_Catalog_Searches_2_22] diff --git a/docs-antora/modules/admin/pages/staff_client-return_to_results_from_marc.adoc b/docs-antora/modules/admin/pages/staff_client-return_to_results_from_marc.adoc deleted file mode 100644 index b7f6d1139f..0000000000 --- a/docs-antora/modules/admin/pages/staff_client-return_to_results_from_marc.adoc +++ /dev/null @@ -1,7 +0,0 @@ -= Return to Search Results from MARC Record = -:toc: - -This feature enables you to return to your title search results directly from any view of the MARC record, including the OPAC View, MARC Record, MARC Edit, and Holdings Maintenance. You can use this feature to page through records in the MARC Record View or Edit interfaces. 
You do not have to return to the OPAC View to access title results; simply click the button marked _Back To Results_.
-
-
-image::media/back_to_results.png[Search_Results1]
-
diff --git a/docs-antora/modules/admin/pages/staff_from_command_line.adoc b/docs-antora/modules/admin/pages/staff_from_command_line.adoc
deleted file mode 100644
index cae11e25af..0000000000
--- a/docs-antora/modules/admin/pages/staff_from_command_line.adoc
+++ /dev/null
@@ -1,24 +0,0 @@
-= Managing Staff from the Command Line =
-
-== Changing passwords ==
-
-If you need to change a patron or staff account password without using the staff client, here is how you can reset it with SQL.
-
-Connect to your Evergreen database using _psql_ or a similar tool, and retrieve and verify your admin username:
-
-[source, sql]
------------------------------------------------------------------------------
-psql -U <user-name> -h <hostname> -d <database>
-
-SELECT id, usrname, passwd FROM actor.usr WHERE usrname = 'admin';
------------------------------------------------------------------------------
-
-If you do not remember the username that you set, search for it in the _actor.usr_ table, and then reset the password.
-
-[source, sql]
------------------------------------------------------------------------------
-UPDATE actor.usr SET passwd = '<new-password>' WHERE id = <id>;
------------------------------------------------------------------------------
-
-The new password will automatically be hashed.
-
diff --git a/docs-antora/modules/admin/pages/template_toolkit.adoc b/docs-antora/modules/admin/pages/template_toolkit.adoc
deleted file mode 100644
index ac474487ab..0000000000
--- a/docs-antora/modules/admin/pages/template_toolkit.adoc
+++ /dev/null
@@ -1,284 +0,0 @@
-= TPac Configuration and Customization =
-:toc:
-
-== Template toolkit documentation ==
-
-For more general information about Template Toolkit, see the http://template-toolkit.org/docs/index.html[official
documentation].
- -The purpose of this chapter is to focus on the -Evergreen-specific uses of Template Toolkit ('TT') in the OPAC. - -== TPAC URL == - -The URL for the TPAC on a default Evergreen system is -http://localhost/eg/opac/home (adjust `localhost` to match your hostname or IP -address, naturally!) - -== Perl modules used directly by TPAC == - - * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader.pm` - * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Account.pm` - * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Container.pm` - * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Record.pm` - * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Search.pm` - * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Util.pm` - -== Default templates == - -The source template files are found in `Open-ILS/src/templates/opac`. - -These template files are installed in `/openils/var/templates/opac`. - -.NOTE -You should generally avoid touching the installed default template files, -unless you are contributing changes that you want Evergreen to adopt as a new -default. Even then, while you are developing your changes, consider using -template overrides rather than touching the installed templates until you are -ready to commit the changes to a branch. See below for information on template -overrides. - -== Apache configuration files == - -The base Evergreen configuration file on Debian-based systems can be found in -`/etc/apache2/sites-enabled/eg.conf`. This file defines the basic virtual host -configuration for Evergreen (hostnames and ports), then single-sources the -bulk of the configuration for each virtual host by including -`/etc/apache2/eg_vhost.conf`. - -== TPAC CSS and media files == - -The CSS files used by the default TPAC templates are stored in the repo in -`Open-ILS/web/css/skin/default/opac/` and installed in -`/openils/var/web/css/skin/default/opac/`. 
-
-The media files--mostly PNG images--used by the default TPAC templates are
-stored in the repo in `Open-ILS/web/images/` and installed in
-`/openils/var/web/images/`.
-
-== Mapping templates to URLs ==
-
-The mapping from URLs to templates is straightforward. Following are a few
-examples, where `<template_directory>` is a placeholder for one or more
-directories that will be searched for a match:
-
- * `http://localhost/eg/opac/home` => `/openils/var/<template_directory>/opac/home.tt2`
- * `http://localhost/eg/opac/advanced` => `/openils/var/<template_directory>/opac/advanced.tt2`
- * `http://localhost/eg/opac/results` => `/openils/var/<template_directory>/opac/results.tt2`
-
-The template files themselves can process, be wrapped by, or include other
-template files. For example, the `home.tt2` template currently involves a
-number of other template files to generate a single HTML file:
-
-.Example Template Toolkit file: opac/home.tt2
-[source, html]
-------------------------------------------------------------------------------
-[% PROCESS "opac/parts/header.tt2";
-    WRAPPER "opac/parts/base.tt2";
-    INCLUDE "opac/parts/topnav.tt2";
-    ctx.page_title = l("Home") %]
-    <div id="search-wrapper">
-        [% INCLUDE "opac/parts/searchbar.tt2" %]
-    </div>
-    <div id="content-wrapper">
-        <div id="main-content-home">
-            <div class="common-full-pad"></div>
-            [% INCLUDE "opac/parts/homesearch.tt2" %]
-            <div class="common-full-pad"></div>
-        </div>
-    </div>
-[% END %]
-------------------------------------------------------------------------------
-
-We will dissect this example in some more detail later, but the important
-thing to note is that the file references are relative to the top of the
-template directory.
-
-[#how_to_override_templates]
-== How to override templates ==
-
-Overrides for templates go in a directory that parallels the structure of the
-default templates directory. The overrides then get pulled in via the Apache
-configuration.
-
-In the following example, we demonstrate how to create a file that overrides
-the default "Advanced search page" (`advanced.tt2`) by adding a new templates
-directory and editing the new file in that directory.
-
-.Adding an override for the Advanced search page (example)
-[source, bash]
-------------------------------------------------------------------------------
-bash$ mkdir -p /openils/var/templates_custom/opac
-bash$ cp /openils/var/templates/opac/advanced.tt2 \
-    /openils/var/templates_custom/opac/.
-bash$ vim /openils/var/templates_custom/opac/advanced.tt2
-------------------------------------------------------------------------------
-
-We now need to teach Apache about the new templates directory. Open `eg.conf`
-and add the following `PerlAddVar` directive to each of the `<VirtualHost>`
-elements in which you want to include the overrides. The default Evergreen
-configuration includes a `VirtualHost` directive for port 80 (HTTP) and another
-one for port 443 (HTTPS); you probably want to edit both, unless you want the
-HTTP user experience to be different from the HTTPS user experience.
-
-.Configuring the custom templates directory in Apache's eg.conf
-[source,xml]
-------------------------------------------------------------------------------
-<VirtualHost *:80>
-    # <snip>
-
-    # - absorb the shared virtual host settings
-    Include eg_vhost.conf
-
-    PerlAddVar OILSWebTemplatePath "/openils/var/templates_custom"
-
-    # <snip>
-</VirtualHost>
-------------------------------------------------------------------------------
-
-Finally, reload the Apache configuration to pick up the changes:
-
-.Reloading the Apache configuration
-[source,bash]
-------------------------------------------------------------------------------
-bash# /etc/init.d/apache2 reload
-------------------------------------------------------------------------------
-
-You should now be able to see your change at http://localhost/eg/opac/advanced
-
-=== Defining multiple layers of overrides ===
-
-You can define multiple layers of overrides. For example, if you want every
-library in your consortium to share the same basic customizations, with
-library-specific customizations layered on top, you can define two template
-directories for each library.
-
-In the following example, we define the `templates_CONS` directory as the set of
-customizations to apply to all libraries, and `templates_BR#` as the set of
-customizations to apply to libraries BR1 and BR2.
-
-As the consortial customizations apply to all libraries, we can add the
-extra template directory directly to `eg_vhost.conf`:
-
-.Apache configuration for all libraries (eg_vhost.conf)
-[source,xml]
-------------------------------------------------------------------------------
-# Templates will be loaded from the following paths in reverse order.
-PerlAddVar OILSWebTemplatePath "/openils/var/templates"
-PerlAddVar OILSWebTemplatePath "/openils/var/templates_CONS"
-------------------------------------------------------------------------------
-
-Then we define a virtual host for each library to add the second layer of
-customized templates on a per-library basis.
Note that for the sake of brevity
-we only show the configuration for port 80.
-
-.Apache configuration for each virtual host (eg.conf)
-[source,xml]
-------------------------------------------------------------------------------
-<VirtualHost *:80>
-    ServerName br1.concat.ca
-    DocumentRoot /openils/var/web/
-    DirectoryIndex index.html index.xhtml
-    Include eg_vhost.conf
-
-    PerlAddVar OILSWebTemplatePath "/openils/var/templates_BR1"
-</VirtualHost>
-
-<VirtualHost *:80>
-    ServerName br2.concat.ca
-    DocumentRoot /openils/var/web/
-    DirectoryIndex index.html index.xhtml
-    Include eg_vhost.conf
-
-    PerlAddVar OILSWebTemplatePath "/openils/var/templates_BR2"
-</VirtualHost>
-------------------------------------------------------------------------------
-
-== Changing some text in the TPAC ==
-
-Out of the box, the TPAC includes a number of placeholder links and text
-strings. For example, there is a set of links cleverly named 'Link 1',
-'Link 2', and so on in the header and footer of every page in the TPAC.
-Let's customize that for our `templates_BR1` skin.
-
-To begin with, we need to find the page(s) that contain the text in question.
-The simplest way to do that is with the handy utility `ack`, which is much
-like `grep` but with built-in recursion and other tricks. On Debian-based
-systems, the command is `ack-grep`, as `ack` conflicts with an existing utility.
-In the following example, we search for files that contain the text "Link 1":
-
-.Searching for text matching "Link 1"
-[source,bash]
-------------------------------------------------------------------------------
-bash$ ack-grep "Link 1" /openils/var/templates/opac
-/openils/var/templates/opac/parts/topnav_links.tt2
-4:            <a href="http://example.com">[% l('Link 1') %]</a>
-------------------------------------------------------------------------------
-
-Next, we copy the file into our overrides directory and edit it with `vim`:
-
-.Copying the links file into the overrides directory
-[source,bash]
-------------------------------------------------------------------------------
-bash$ cp /openils/var/templates/opac/parts/topnav_links.tt2 \
-    /openils/var/templates_BR1/opac/parts/topnav_links.tt2
-bash$ vim /openils/var/templates_BR1/opac/parts/topnav_links.tt2
-------------------------------------------------------------------------------
-
-Finally, we edit the links in our copy of `opac/parts/topnav_links.tt2`.
-
-.Content of the opac/parts/topnav_links.tt2 file
-[source,html]
-------------------------------------------------------------------------------
-<a href="http://example.com">[% l('Link 1') %]</a>
-<a href="http://example.com">[% l('Link 2') %]</a>
-------------------------------------------------------------------------------
-
-For the most part, the page looks like regular HTML, but note the
-`[% l(" ... ") %]` construct that surrounds the text of each link. The
-`[% ... %]` signifies a TT block, which can contain one or more TT processing
-instructions. `l(" ... ");` is a function that marks text for localization
-(translation); a separate process can subsequently extract localized text as
-GNU gettext-formatted PO files.
-
-.NOTE
-As Evergreen supports multiple languages, any customizations to Evergreen's
-default text must use the localization function. Also, note that the
-localization function supports placeholders such as `[_1]`, `[_2]` in the text;
-these are replaced by the contents of variables passed as extra arguments to
-the `l()` function.
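The placeholder mechanism can be illustrated with a short sketch. The string and the `patron_name` variable here are hypothetical (they are not part of the default templates); they only show how `l()` and `[_1]` fit together:

.Example of a localized string with a placeholder (hypothetical)
[source,html]
------------------------------------------------------------------------------
[% l("Welcome back, [_1]!", patron_name) %]
------------------------------------------------------------------------------

At run time, `[_1]` is replaced by the value of `patron_name`; a translator can move the placeholder to wherever the target language needs it without touching the template logic.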
-
-Once we have edited the link and link text to our satisfaction, we can load
-the page in our Web browser and see the live changes immediately (assuming
-we are looking at the BR1 overrides, of course).
-
-== Troubleshooting ==
-
-If there is a problem such as a TT syntax error, it generally shows up as
-an ugly server failure page. If you check the Apache error logs, you will
-probably find some solid clues about the reason for the failure. For example,
-the following error message identifies the file in which the
-problem occurred as well as the relevant line numbers:
-
-.Example error message in Apache error logs
-[source,bash]
-------------------------------------------------------------------------------
-bash# grep "template error" /var/log/apache2/error_log
-[Tue Dec 06 02:12:09 2011] [warn] [client 127.0.0.1] egweb: template error:
- file error - parse error - opac/parts/record/summary.tt2 line 112-121:
- unexpected token (!=)\n  [% last_cn = 0;\n    FOR copy_info IN
- ctx.copies;\n        callnum = copy_info.call_number_label;\n
-------------------------------------------------------------------------------
-
diff --git a/docs-antora/modules/admin/pages/user_activity_type.adoc b/docs-antora/modules/admin/pages/user_activity_type.adoc
deleted file mode 100644
index 46732c6ec3..0000000000
--- a/docs-antora/modules/admin/pages/user_activity_type.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-= User Activity Types =
-:toc:
-
-The User Activity Types feature enables you to specify the user activity that you want to record in the database. You can use this feature for reporting purposes. This function will also display a last activity date in a user's account.
-
-== Enabling this Feature ==
-
-Click *Administration* -> *Server Administration* -> *User Activity Types* to access the default set of user activity types and to add new ones.
The default set of user activity types records user logins to the Evergreen ILS and to third party products that communicate with Evergreen.
-
-The *Label* is a free text field that enables you to describe the activity that you are tracking.
-
-The *Event Caller* describes the third party software or Evergreen interface that interacts with the Evergreen database and is responsible for managing the communication between the parties.
-
-The *Event Type* describes the type of activity that Evergreen is tracking. Currently, this feature only tracks user authentication.
-
-The *Event Mechanism* describes the framework for communication between the third party software or OPAC and the database. Enter an event mechanism if you want to track the means by which the software communicates with the database. If you do not want to track how the software communicates, then leave this field empty.
-
-The *Enabled* field allows you to specify which types of user activity you would like to track.
-
-The *Transient* column enables you to decide how many actions you want to track. If you want to track only the last activity, then enter *True*. If you want to track all activity by the user, enter *False*.
-
-image::media/User_Activity_Types1A.jpg[User_Activity_Types1A]
-
-
-== Using this Feature ==
-
-The last activity date for user logins appears in the patron's summary.
-
-image::media/User_Activity_Types2A.jpg[User_Activity_Types2A]
-
diff --git a/docs-antora/modules/admin/pages/virtual_index_defs.adoc b/docs-antora/modules/admin/pages/virtual_index_defs.adoc
deleted file mode 100644
index 6b20276319..0000000000
--- a/docs-antora/modules/admin/pages/virtual_index_defs.adoc
+++ /dev/null
@@ -1,53 +0,0 @@
-= Virtual Index Definitions =
-:toc:
-
-Virtual index definitions can be configured in Evergreen to create customized search indexes that make use of data collected by other (real) index definitions.
Real index definitions use an XPath expression to indicate the bibliographic data that should be included in the index. Virtual index definitions bring together data collected by other index definitions to create a new, virtual index. They can also use an XPath expression to collect data directly for an index, but they are not required to. - -All index definitions can be modified by having other indexes map to them. For example, Genre could be added to the All Subjects field definition in the Subject index. This would allow users to search Genre as part of a Subject search. - -== Keyword Virtual Index Definition == - -Evergreen now uses a virtual index definition for the Keyword index. This allows libraries to customize the keyword search index by specifying which fields are included in the keyword index, as well as how each field should be weighted for relevance ranking in search results. By default, the keyword index contains all of the search fields other than the keyword definition itself. Each field is assigned a weight of 1, with the exception of Title Proper, which is assigned a weight of 8. A match on the Title Proper within a keyword search will be given the higher weight and therefore a higher relevance ranking within search results. - -. To view the stock virtual index definition for keyword searches, go to *Administration>Server Administration>MARC Search/Facet Fields* and select the *Keyword* Search Class. -. Locate the field labeled "All searchable fields". This is the general keyword index. -. The weight of a field can be modified by selecting the field and going to *Actions>Edit Record* or right-clicking and selecting *Edit Record*. -.. The Metabib Field Virtual Map modal will appear. Increase the weight of the field and click *Save*. - -== Configuring Virtual Index Definitions == - -. To configure a virtual index definition, go to *Administration>Server Administration>MARC Search/Facet Fields*. -.. 
This interface now has a _Search Class_ filter that allows users to easily select which search class they want to view.
-. Next, locate the field for which you want to create a virtual index definition and click *Manage* under the column labeled _Data Suppliers_.
-
-image::media/vid1.PNG[]
-
-. A new tab will open that contains the interface for configuring a virtual index definition. This interface can be used to map real index definitions for inclusion in the virtual index.
-
-image::media/vid2.PNG[]
-
-. To create a mapping, click *New Record*. A modal called _Metabib Field Virtual Map_ will appear.
-. Select the _Real_ index definition and the _Virtual_ index definition to which it should be mapped.
-. Assign a _Weight_ to the mapping. This allows Evergreen to calculate the weight that should be applied to each field when searched using the virtual index.
-.. The weight assigned to a field within a virtual index can be different from the weight assigned when searching that field directly. For example, the Title Proper field can have a weight of 2 when a user performs a Title search, but a weight of 5 when a user performs a Keyword search (using the virtual index). This can help move title matches on keyword searches higher up in the search results list.
-. Click *Save*.
-. Repeat steps 4-7 until all desired fields are mapped to the virtual index definition.
-
-image::media/vid3.PNG[]
-
-Note: A service restart is required after definitions and mappings are changed. Changes that affect only weights do not require a restart, as weights are calculated in real time.
-
-== Search Term Highlighting in Search Results ==
-
-Search terms are now highlighted on the main OPAC search results page, the bibliographic record detail page, and the metarecord grouped results page. This will help users discern why a certain record was included in the search result set, as well as its relevance to the search. Search terms will be highlighted in both real and virtual fields that were searched.
Terms that were stemmed or normalized during searching will also be highlighted. Search term highlighting can be turned off within the OPAC by selecting the checkbox to "Disable Highlighting" in the search results interface. - -A keyword search for "piano" returns a set of search results: - -image::media/vid4.PNG[] - -The search term is highlighted in the search results and indicates why the records were included in the search result set. In this example, the search results interface shows the first three records had matching terms in the title field. - -Within the record detail page for "The five piano concertos", we can see the search term also matched on the General Note and Subject fields within the bibliographic record. - -image::media/vid5.PNG[] - diff --git a/docs-antora/modules/admin/pages/web-client-browser-best-practices.adoc b/docs-antora/modules/admin/pages/web-client-browser-best-practices.adoc deleted file mode 100644 index cd14a827d7..0000000000 --- a/docs-antora/modules/admin/pages/web-client-browser-best-practices.adoc +++ /dev/null @@ -1,66 +0,0 @@ -= Best Practices for Using the Browser = -:toc: - -== Pop-up Blockers == - -Before using the web client, be sure to disable pop-up blockers for your -Evergreen system's domain. - -- In Chrome, select _Settings_ from the Chrome menu and click on _Content -settings_ in the advanced section. Select _Popups_ and then add your domain to -the _Allowed_ list. -- In Firefox, select _Preferences_ from the Firefox menu and then select the -_Content_ panel. Click the _Exceptions_ button and add your domain to the -_Allowed Sites_ list. - - -== Setting Browser Defaults for Web Client == - -To ensure that staff can easily get to the web client portal page on login -without additional steps, you can set the browser's home page to default to the -web client. - -=== Setting the Web Client as the Home Page in Chrome === - -. In the top-right corner of your browser, click the Chrome menu. -. Select *Settings*. -. 
In the _On startup_ section, select _Open a specific page or set of pages._
-. Click the *Set Pages* link.
-. Add _https://localhost/eg/staff/_ to the _Enter URL_ box and click *OK*.
-
-=== Setting the Web Client as the Home Page in Firefox ===
-
-. In the top-right corner of your browser, click the menu button.
-. Click *Options*.
-. In the _When Firefox starts:_ dropdown menu, select _Show my home page_.
-. In the _Home Page_ box, add _https://localhost/eg/staff/_ and click *OK*.
-
-include::partial$turn-off-print-headers-firefox.adoc[]
-
-include::partial$turn-off-print-headers-chrome.adoc[]
-
-== Tab Buttons and Keyboard Shortcuts ==
-
-Now that the client is loaded in a web browser, users can use browser-based
-tab controls and keyboard shortcuts to help with navigation. Below are some
-tips for browser navigation that can be used in Chrome and Firefox on Windows
-PCs.
-
-- Use CTRL-T or click the browser's new tab button to open a new tab.
-- Use CTRL-W or click the x in the tab to close the tab.
-- Undo closing a tab by hitting CTRL-Shift-T.
-- To open a link from the web client in a new tab, CTRL-click the link or
-right-click the link and select *Open Link in New Tab*. Using this method, you
-can also open options from the web client's dropdown menus in a new tab.
-- Navigate to the next tab using CTRL-Tab. Go to the previous tab with CTRL-Shift-Tab.
-
-=== Setting New Tab Behavior ===
-
-Some users may want to automatically open the web client's portal page in a new
-tab. Neither Chrome nor Firefox will open your home page by default when you
-open a new tab. However, both browsers have optional add-ons that will allow you
-to set the browser to automatically open the home page whenever opening a
-new tab. These add-ons may be useful for those libraries that want the new tab
-to open to the web client portal page.
-
-
diff --git a/docs-antora/modules/admin/pages/web_client-login.adoc b/docs-antora/modules/admin/pages/web_client-login.adoc
deleted file mode 100644
index 65f4ceca05..0000000000
--- a/docs-antora/modules/admin/pages/web_client-login.adoc
+++ /dev/null
@@ -1,52 +0,0 @@
-= Logging into Evergreen =
-:toc:
-
-== Registering a Workstation ==
-[#register_workstation]
-indexterm:[staff client, registering a workstation]
-
-Before logging into Evergreen, you must first register a workstation from your
-browser.
-
-[NOTE]
-===============
-You will need permissions to add workstations to your network. If you do
-not have these permissions, ask your system administrator for assistance.
-===============
-
-. When you log in for the first time, you will arrive at a screen asking that you
-register your workstation.
-+
-image::media/web_client_workstation_registration.png[]
-+
-. Create a unique workstation name.
-. Click _Register_.
-. After confirming the new workstation is listed in the _Workstations Registered
-With This Browser_ menu, click _Use Now_ to return to the login page. Your
-newly-registered workstation should be selected by default on the login page.
-
-== Basic Login ==
-
-indexterm:[staff client, logging in]
-
-. The default URL to log into the client is _https://localhost/eg/staff/login_.
-. Enter your _Username_ and _Password_.
-. Verify that the correct workstation is selected and click *Sign In*.
-
-[[browser_defaults]]
-
-== Logging Out ==
-
-indexterm:[staff client, logging out]
-
-To log out of the client:
-
-. Click the menu button to the right of your user name in the top-right corner
-of the window.
-. Select *Log Out*.
-
-[CAUTION]
-Exiting all browser windows will automatically log you out of the web client. If
-you only close the tab where the web client is loaded, you will remain logged in.
- diff --git a/docs-antora/modules/admin/pages/workstation_admin.adoc b/docs-antora/modules/admin/pages/workstation_admin.adoc deleted file mode 100644 index 162f222961..0000000000 --- a/docs-antora/modules/admin/pages/workstation_admin.adoc +++ /dev/null @@ -1,128 +0,0 @@ -= Workstation Administration = -:toc: - -indexterm:[staff client, configuration] -indexterm:[workstation, configuration] -indexterm:[configuration] - -== Copy Editor: Copy Location Name First == - -indexterm:[copy editor, shelving location] - -By default, when editing item records, library code is displayed in front of -shelving location in _Shelving Location_ field. You may reverse the order by going -to *Administration -> Workstation Administration -> Copy Editor: Copy Location Name -First*. -Simply click it to make copy location name displayed first. The setting is saved -on the workstation. - -== Font and Sound Settings == - -indexterm:[staff client, fonts, zooming] -indexterm:[staff client, sounds] - -=== In the Staff Client === - -You may change the size of displayed text or turn staff client sounds on -and off. These settings are specific to each workstation and stored on -local hard disk. They do not affect OPAC font sizes. - -. Select *Administration -> Workstation Administration -> Global Font and Sound -Settings*. -. To turn off the system sounds, like the noise that happens when a patron -with a block is retrieved, check the _disable sound_ box and click _Save -to Disk_. -+ -image::media/workstation_admin-1.jpg[disable sound] -+ -. To change the size of the font, pick the desired option and click _Save -to Disk_. - -image::media/workstation_admin-2.jpg[font size] - -=== In the OPAC === - -It is also possible to zoom in and zoom out when viewing the OPAC in the -staff client, making the font appear larger or smaller. (This will not -affect other screens.) Use *CTRL + +* (plus sign, to zoom in), *CTRL + -* -(minus sign, to zoom out), and *CTRL + 0* (to restore default). 
The
-workstation will remember the setting.
-
-== Select Hotkeys ==
-
-indexterm:[staff client, hotkeys]
-
-All or some hotkeys can be turned on or off for a particular
-workstation:
-
-. Navigate to *Administration -> Workstation Administration -> Hotkeys -> Current*.
-. Select _Default_, _Minimal_, or _None_.
-+
-image::media/workstation_admin-3.png[select hotkeys]
-+
-* *Default*: includes all hotkeys
-* *Minimal*: includes only the hotkeys that use the CTRL key
-* *None*: excludes all hotkeys
-+
-. Go back to the above menu.
-. Click *Set Workstation Default to Current*.
-
-To clear the existing default, click *Clear Workstation Default*.
-
-You can use the *Toggle Hotkeys* button, included in some toolbars, in the top
-right corner, to switch your selected hotkeys _on_ or
-_off_ for the current login session.
-It has the same effect as clicking *Disable Hotkeys* on the _Hotkeys_ menu.
-
-== Configure Printers ==
-
-indexterm:[staff client, printers]
-
-Use the Printer Settings Editor to configure printer output for each
-workstation. If left unconfigured, Evergreen will use the default printer set in
-the workstation's operating system (Windows, OSX, Ubuntu, etc.).
-
-Evergreen printing works best if you are using recent, hardware-specific printer
-drivers.
-
-. Select *Administration -> Workstation Administration -> Printer Settings Editor*.
-. Select the _Printer Context_. At a minimum, set the _Default_ context on each
-Evergreen workstation. Repeat the procedure for other contexts if they differ
-from the default (e.g. if spine labels should output to a different printer).
-+
-image::media/workstation_admin-4.png[printer context]
-+
-* *Default*: Default settings for staff client print functions (set for each
-workstation).
-* *Receipt*: Settings for printing receipts.
-* *Label*: Printer settings for spine and pocket labels.
-* *Mail*: Settings for printing mailed notices (not yet active).
-* *Offline*: Applies to all printing from the Offline Interface.
-+
-. After choosing the _Printer Context_, click *Set Default Printer* and *Print Test
-Page* and follow the prompts. If successful, test output will print to your chosen
-printer.
-+
-image::media/workstation_admin-5.png[set default printer]
-+
-. (optional) To further format or customize printed output, click *Page Settings* and
-adjust the settings. When finished, click *OK* and print another test page to view
-the changes.
-
-image::media/workstation_admin-6.jpg[page setup]
-
-=== Advanced Settings ===
-
-If you followed the steps above and still cannot print, there are two alternate
-print strategies:
-
-* DOS LPT1 Print (sends unformatted text directly to the parallel port)
-* Custom/External Print (configuration required)
-
-[NOTE]
-====================================
-Evergreen cannot print using the Windows Generic/Text Only driver. If this
-driver is the only one available, try one of the alternate print strategies
-instead.
-====================================
-
diff --git a/docs-antora/modules/admin/partials/turn-off-print-headers-chrome.adoc b/docs-antora/modules/admin/partials/turn-off-print-headers-chrome.adoc
deleted file mode 100644
index 32dda5d0d8..0000000000
--- a/docs-antora/modules/admin/partials/turn-off-print-headers-chrome.adoc
+++ /dev/null
@@ -1,16 +0,0 @@
-=== Turning off print headers and footers in Chrome ===
-
-indexterm:[printing,headers]
-indexterm:[printing,footers]
-
-If you are not using Hatch for printing, you will probably want to configure
-your browser so that Chrome does not add headers and footers to items printed
-on certain printers. For example, if you are printing spine labels, you likely
-will not want Chrome to add a date or URL to the margins of your label.
-
-You can turn off these headers and footers using the following steps:
-
-. In the Chrome menu, click _Print..._ to open the print preview screen.
-. Click _More Settings_.
-. Uncheck _Headers and Footers_.
- diff --git a/docs-antora/modules/admin/partials/turn-off-print-headers-firefox.adoc b/docs-antora/modules/admin/partials/turn-off-print-headers-firefox.adoc deleted file mode 100644 index 44bdd2fcd9..0000000000 --- a/docs-antora/modules/admin/partials/turn-off-print-headers-firefox.adoc +++ /dev/null @@ -1,30 +0,0 @@ -=== Turning off print headers and footers in Firefox === - -indexterm:[printing,headers] -indexterm:[printing,footers] - -If you are not using Hatch for printing, you will probably want to configure -your browser so that Firefox does not add headers and footers to items printed -on certain printers. For example, if you are printing spine labels, you likely -will not want Firefox to add a date or URL to the margins of your label. - -You can turn off these headers and footers using the following steps: - -. In the Firefox menu, click _Print..._ to open the print preview screen. -. Click the _Page Setup..._ button. -. Go to the _Margins & Header/Footer_ tab. -. Make sure that all dropdown menus are set to _--blank--_. - -If you only want to turn off those headers and footers for a specific -printer, use these steps: - -. In the Firefox address bar, type link:about:config[]. -. If a warning appears, click _I accept the risk_. -. Type _print_header_ into this screen's search box. -. Double-click on the relevant _print_headerleft_, _print_headerright_, and -_print_headercenter_ entries in the grid. -. Delete any existing data for that setting and click OK. -. Type _print_footer_ into the screen's search box and repeat these steps -for the footer settings. 
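The prefs found through the `about:config` search above can also be pre-set for a workstation in a `user.js` file in the Firefox profile directory, which Firefox reads at startup. This is a sketch, not an official Evergreen procedure; depending on the Firefox version the prefs may be global, as shown below, or exist per printer (e.g. `print.printer_<name>.print_headerleft`):

.Blanking print headers and footers in user.js (sketch)
[source,javascript]
------------------------------------------------------------------------------
// Set every header/footer slot to an empty string so that nothing is
// added to the page margins of printed output.
user_pref("print.print_headerleft", "");
user_pref("print.print_headercenter", "");
user_pref("print.print_headerright", "");
user_pref("print.print_footerleft", "");
user_pref("print.print_footercenter", "");
user_pref("print.print_footerright", "");
------------------------------------------------------------------------------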
- - diff --git a/docs-antora/modules/admin_initial_setup/_attributes.adoc b/docs-antora/modules/admin_initial_setup/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/admin_initial_setup/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/admin_initial_setup/assets/images/carousel1.png b/docs-antora/modules/admin_initial_setup/assets/images/carousel1.png deleted file mode 100644 index 6ec0e1455f..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/carousel1.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/carousel2.png b/docs-antora/modules/admin_initial_setup/assets/images/carousel2.png deleted file mode 100644 index c2570ec127..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/carousel2.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/carousel3.png b/docs-antora/modules/admin_initial_setup/assets/images/carousel3.png deleted file mode 100644 index 44eee8b549..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/carousel3.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/carousel4.png b/docs-antora/modules/admin_initial_setup/assets/images/carousel4.png deleted file mode 100644 index 8c6fa31a37..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/carousel4.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/carousel5.png b/docs-antora/modules/admin_initial_setup/assets/images/carousel5.png deleted file mode 100644 index a49288640c..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/carousel5.png and /dev/null differ diff 
--git a/docs-antora/modules/admin_initial_setup/assets/images/carousel6.png b/docs-antora/modules/admin_initial_setup/assets/images/carousel6.png deleted file mode 100644 index e4106c0673..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/carousel6.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/carousel7.png b/docs-antora/modules/admin_initial_setup/assets/images/carousel7.png deleted file mode 100644 index 5e71110c9d..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/carousel7.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/carousel8.png b/docs-antora/modules/admin_initial_setup/assets/images/carousel8.png deleted file mode 100644 index 85e15412e8..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/carousel8.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/batch_import_profile.png b/docs-antora/modules/admin_initial_setup/assets/images/media/batch_import_profile.png deleted file mode 100644 index 748d36b285..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/batch_import_profile.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_duration_rules.jpg b/docs-antora/modules/admin_initial_setup/assets/images/media/circ_duration_rules.jpg deleted file mode 100644 index f9d3962b6d..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_duration_rules.jpg and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example1.png b/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example1.png deleted file mode 100644 index 265d05d59a..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example1.png and /dev/null differ diff --git 
a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example2.png b/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example2.png deleted file mode 100644 index 652eeb34f5..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example2.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example3.png b/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example3.png deleted file mode 100644 index fcb62fbf29..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_example3.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_max_fine_rules.jpg b/docs-antora/modules/admin_initial_setup/assets/images/media/circ_max_fine_rules.jpg deleted file mode 100644 index f8f9a32025..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_max_fine_rules.jpg and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_recurring_fine_rules.jpg b/docs-antora/modules/admin_initial_setup/assets/images/media/circ_recurring_fine_rules.jpg deleted file mode 100644 index 280325e371..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/circ_recurring_fine_rules.jpg and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-1.png b/docs-antora/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-1.png deleted file mode 100644 index 14f32e49ff..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-1.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-2.jpg b/docs-antora/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-2.jpg deleted file mode 
100644 index ec154836aa..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-2.jpg and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/copy_locations_editor.png b/docs-antora/modules/admin_initial_setup/assets/images/media/copy_locations_editor.png deleted file mode 100644 index 18b91ad1da..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/copy_locations_editor.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/create_match_sets.png b/docs-antora/modules/admin_initial_setup/assets/images/media/create_match_sets.png deleted file mode 100644 index 1b92a17620..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/create_match_sets.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/order_record_loading.png b/docs-antora/modules/admin_initial_setup/assets/images/media/order_record_loading.png deleted file mode 100644 index 160af6a5fd..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/order_record_loading.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/record_quality_metrics.png b/docs-antora/modules/admin_initial_setup/assets/images/media/record_quality_metrics.png deleted file mode 100644 index fd7b80c3a8..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/record_quality_metrics.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-1_web_client.png b/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-1_web_client.png deleted file mode 100644 index 1f270dc3e6..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-1_web_client.png and /dev/null differ diff --git 
a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-2_web_client.png b/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-2_web_client.png deleted file mode 100644 index f99a481a1f..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-2_web_client.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-3.png b/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-3.png deleted file mode 100644 index 271d3c11f0..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-3.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-4_web_client.png b/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-4_web_client.png deleted file mode 100644 index b69f9c4a26..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-4_web_client.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-5_web_client.png b/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-5_web_client.png deleted file mode 100644 index fed27cebaa..0000000000 Binary files a/docs-antora/modules/admin_initial_setup/assets/images/media/sup-permissions-5_web_client.png and /dev/null differ diff --git a/docs-antora/modules/admin_initial_setup/nav.adoc b/docs-antora/modules/admin_initial_setup/nav.adoc deleted file mode 100644 index e475ef4f4e..0000000000 --- a/docs-antora/modules/admin_initial_setup/nav.adoc +++ /dev/null @@ -1,27 +0,0 @@ -* xref:admin_initial_setup:introduction.adoc[System Configuration and Customization] -** xref:admin_initial_setup:describing_your_organization.adoc[Describing your organization] -** xref:admin_initial_setup:describing_your_people.adoc[Describing your 
people] -** xref:admin_initial_setup:migrating_patron_data.adoc[Migrating Patron Data] -** xref:admin_initial_setup:migrating_your_data.adoc[Migrating from a legacy system] -** xref:admin_initial_setup:importing_via_staff_client.adoc[Importing materials in the staff client] -** xref:admin_initial_setup:ordering_materials.adoc[Ordering materials] -** xref:admin_initial_setup:designing_your_catalog.adoc[Designing your catalog] -** xref:admin:search_interface.adoc[Designing the patron search experience] -** xref:admin_initial_setup:borrowing_items.adoc[Borrowing items: who, what, for how long] -** xref:admin:autorenewals.adoc[Autorenewals in Evergreen] -** xref:admin_initial_setup:hard_due_dates.adoc[Hard due dates] -** xref:admin:template_toolkit.adoc[TPac Configuration and Customization] -** xref:admin_initial_setup:carousels.adoc[Carousels] -** xref:opac:new_skin_customizations.adoc[Creating a New Skin: the Bare Minimum] -** xref:admin:auto_suggest_search.adoc[Auto Suggest in Catalog Search] -** xref:admin:authentication_proxy.adoc[Authentication Proxy] -** xref:admin_initial_setup:KidsOPAC.adoc[Kid's OPAC Configuration] -** xref:admin:patron_address_by_zip_code.adoc[Patron Address City/State/County Pre-Populate by ZIP Code] -** xref:admin:phonelist.adoc[Phonelist.pm Module] -** xref:admin:sip_server.adoc[SIP Server] -** xref:admin:apache_rewrite_tricks.adoc[Apache Rewrite Tricks] -** xref:admin:apache_access_handler.adoc[Apache Access Handler Perl Module] -** xref:admin:ebook_api_service.adoc[ebook_api service] -** xref:admin:hold_targeter_service.adoc[hold-targeter service] -** xref:admin:backups.adoc[Backing up your Evergreen System] - diff --git a/docs-antora/modules/admin_initial_setup/pages/KidsOPAC.adoc b/docs-antora/modules/admin_initial_setup/pages/KidsOPAC.adoc deleted file mode 100644 index 0c572e46be..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/KidsOPAC.adoc +++ /dev/null @@ -1,132 +0,0 @@ -= Kid's OPAC Configuration = -:toc: - -== 
Configuration == - -=== Apache === - -The KPAC is already included and ready to be used with new Evergreen installs. So you only need to change the Apache config -if you need to change template locations or if you want to use a different *kpac.xml* config file. The defaults for the KPAC are set -in */etc/apache2/eg_vhost.conf*. - ------------------------------------------------------------------------------- - - PerlSetVar OILSWebContextLoader "OpenILS::WWW::EGKPacLoader" - PerlSetVar KPacConfigFile "/openils/conf/kpac.xml.example" - ------------------------------------------------------------------------------- - -=== XML Configuration File === - - * The XML configuration file defines the layout of the kid's OPAC. - * It is read with each restart/reload of the Apache web server. - * The file lives by default at /openils/conf/kpac.xml.example - * There are two top-level elements: <layout> and <pages>. - * The layout defines the owning org unit and the start page, both by ID. - * At runtime, the layout is determined by the context org unit. If no - configuration is defined for the context org unit, the layout for the - closest ancestor is used. - -[source, xml] ------------------------------------------------------------------------------- - ------------------------------------------------------------------------------- - - * The pages section is a container for <page> elements. - * Each page defines an ID, the number of columns to display for the page, - the page name, and an icon. - -[source, xml] ------------------------------------------------------------------------------- - ------------------------------------------------------------------------------- - - * Each page is a container of cells. - * Each cell defines - ** type (topic, search, link) - ** name - ** icon - ** content - * The content for type="topic" cells is the ID of the page this topic - jumps to. The name and img for the referenced page are used as the - display content.
- -[source, xml] ------------------------------------------------------------------------------- -<cell type="topic">12</cell> ------------------------------------------------------------------------------- - - * The content for type="search" cells is the search query. The name and - img are used for the display content. - -[source, xml] ------------------------------------------------------------------------------- -<cell type="search">su:piano</cell> ------------------------------------------------------------------------------- - - * The content for type="link" cells is the URL. The name and img are used - for the display content. - -[source, xml] ------------------------------------------------------------------------------- -<cell type="link">http://en.wikipedia.org/wiki/Clarinet</cell> ------------------------------------------------------------------------------- - - -=== Skin Configuration === - -The following example enables you to configure the alternate skin (Monster Skin, kpac2) for the Kids -Catalog. - -You should be familiar with how the xref:admin:template_toolkit.adoc#how_to_override_templates[Evergreen TPAC handles template folders] -before you make these changes. - -If you already have a custom template directory set up, you can copy the *Open-ILS/examples/web/templates/kpac* -files to that directory instead, and then skip any Apache config changes.
- -[source, bash] ------------------------------------------------------------------------------- -% cp -r Open-ILS/examples/web/css/skin/kpac2 /openils/var/web/css/skin/ -% cp -r Open-ILS/examples/web/images/kpac/* /openils/var/web/images/kpac/ #does not clobber -% mkdir /openils/var/templates_kpac2 -% cp -r Open-ILS/examples/web/templates/kpac /openils/var/templates_kpac2/ -% cp -r /openils/var/web/css/skin/default/kpac/fonts /openils/var/web/css/skin/kpac2/kpac ------------------------------------------------------------------------------- - -Then set up 443/80 vhosts for serving the alternate skin in eg.conf, something -along the lines of: - ------------------------------------------------------------------------------- -<VirtualHost *:80> - ServerName xyz.dev198.esilibrary.com:80 - DocumentRoot /openils/var/web/ - DirectoryIndex index.html index.xhtml - Include eg_vhost.conf - - #Point to a different kpac.xml config file if needed - #PerlSetVar KPacConfigFile "/openils/conf/kpac.xml.example" - PerlAddVar OILSWebTemplatePath "/openils/var/templates_kpac2" - -</VirtualHost> ------------------------------------------------------------------------------- - -== Considerations for Community Adoption == - -The templates for the Kid's OPAC were developed long before the TPAC was -integrated into Evergreen and it has many of the same limitations that -were part of the TPAC. - - * Fixed-width elements (divs, images, etc.), which complicate the - addition of new features and local customizations. - * Images with text, which prevent l10n/i18n. - * While the KPAC does not attempt to match the color scheme of any one - institution, it's inconsistent with the standard Evergreen color - palette. Creating an additional skin to act as the Evergreen default - may be necessary. - -== Outstanding Development (Unsponsored) == - - ** Port the XML configuration file to a DB structure, complete with UI for - managing the various components and upgrade path.
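The element structure described in the XML Configuration File section above can be sketched as a skeletal kpac.xml. This is an illustrative reconstruction only: the root element name and the attribute names (owner, page, id, columns, name, img) are assumptions drawn from the prose, not a verified schema, so check them against /openils/conf/kpac.xml.example before relying on them.

[source, xml]
------------------------------------------------------------------------------
<!-- Hypothetical sketch; element and attribute names are assumptions -->
<kpac>
  <layout owner="1" page="1"/> <!-- owning org unit and start page, by ID -->
  <pages>
    <page id="1" columns="3" name="Fun Stuff" img="home.png">
      <cell type="topic">12</cell> <!-- jumps to page 12; name/img come from that page -->
      <cell type="search" name="Piano" img="piano.png">su:piano</cell>
      <cell type="link" name="Clarinet" img="clarinet.png">http://en.wikipedia.org/wiki/Clarinet</cell>
    </page>
  </pages>
</kpac>
------------------------------------------------------------------------------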
- diff --git a/docs-antora/modules/admin_initial_setup/pages/_attributes.adoc b/docs-antora/modules/admin_initial_setup/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/admin_initial_setup/pages/borrowing_items.adoc b/docs-antora/modules/admin_initial_setup/pages/borrowing_items.adoc deleted file mode 100644 index 4ed0bc72b5..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/borrowing_items.adoc +++ /dev/null @@ -1,243 +0,0 @@ -= Borrowing items: who, what, for how long = -:toc: - -Circulation policies pull together user, library, and item data to determine how -library materials circulate, such as which patrons from which libraries can -borrow which types of materials, for how long, and with what overdue fines. - -Individual elements of the circulation policies are configured using specific -interfaces, and should be configured prior to setting up the circulation -policies. - -== Data elements that affect your circulation policies == - -There are a few data elements which must be considered when setting up your -circulation policies. - -=== Copy data === - -Several fields set via the holdings editor are commonly used to affect the -circulation of an item. - -* *Circulation modifier* - Circulation modifiers are fields used to control -circulation policies on specific groups of items. They can be added to items -during the cataloging process. New circulation modifiers can be created in the -staff client by navigating to *Administration -> Server Administration -> Circulation -Modifiers*. -* *Circulate?* flag - The circulate? flag in the holdings editor can be set to False -to prevent an item from circulating. -* *Reference?* flag - The reference? flag in the holdings editor can also be used as -a data element in circulation policies.
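Before building policies on these fields, it can help to see how they are distributed across your collection. The values live on the asset.copy table in the Evergreen database; a sketch, assuming direct SQL access (the circ_modifier, circulate, and deleted column names are from the stock schema — verify against your installed version):

[source, sql]
------------------------------------------------------------------------------
-- Count non-deleted, circulating items per circulation modifier.
SELECT circ_modifier, COUNT(*) AS items
  FROM asset.copy
 WHERE NOT deleted
   AND circulate
 GROUP BY circ_modifier
 ORDER BY items DESC;
------------------------------------------------------------------------------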
- -=== Shelving location data === - -* To get to the Shelving Locations Editor, navigate to *Administration -> -Local Administration -> Shelving Locations Editor*. -* Set _OPAC Visible_ to "No" to hide all items in a shelving location from the -public catalog. (You can also hide individual items using the Copy Editor.) -* Set _Hold Verify_ to "Yes" if you want to always ask for staff confirmation -before capturing a hold when an item is checked in. -* Set _Checkin Alert_ to "Yes" to allow routing alerts to display when items -are checked in. -* Set _Holdable_ to "No" to prevent items in an entire shelving location from -being placed on hold. -* Set _Circulate_ to "No" to disallow circulating items in an entire shelving -location. -* If you delete a shelving location, it will be removed from display in the staff -client and the catalog, but it will remain in the database. This allows you to -treat a shelving location as deleted without losing statistical information for -circulations related to that shelving location. - -image::media/copy_locations_editor.png[screenshot of Shelving Location Editor] - -* Shelving locations can also be used as a data element in circulation policies. - -=== User data === - -Finally, several characteristics of specific patrons can affect circulation -policies. You can modify these characteristics in a patron's record (*Search -> -Search for Patrons*, select a patron, choose *Edit* tab) or when registering a -new patron (*Circulation -> Register Patron*). - -* The user permission group is also commonly used as a data element in -circulation policies. -* Other user data that can be used for circulation policies include the -*juvenile* flag in the user record. - -== Circulation Rules == - -*Loan duration* describes the length of time for a checkout. You can also -identify the maximum number of renewals that can be placed on an item.
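Duration rules can also be created directly in the database rather than through the staff client. A sketch of a hypothetical "14_days_2_renew" rule, assuming direct SQL access; the config.rule_circ_duration column names (shrt, normal, extended, max_renewals) follow the stock schema, but verify them against your Evergreen version before running anything:

[source, sql]
------------------------------------------------------------------------------
-- Hypothetical example rule: 7-day short loans, 14-day normal loans,
-- 21-day extended loans, and at most 2 renewals.
INSERT INTO config.rule_circ_duration (name, shrt, normal, extended, max_renewals)
VALUES ('14_days_2_renew', '7 days', '14 days', '21 days', 2);
------------------------------------------------------------------------------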
- -You can find Circulation Duration Rules by navigating to *Administration --> Server Administration -> Circulation Duration Rules*. - -image::media/circ_duration_rules.jpg[] - -*Recurring fine* describes the amount assessed for daily and hourly fines as -well as fines set for other regular intervals. You can also identify any grace -periods that should be applied before the fine starts accruing. - -You can find Recurring Fine Rules by navigating to *Administration -> Server -Administration -> Circulation Recurring Fine Rules*. - -image::media/circ_recurring_fine_rules.jpg[] - -*Max fine* describes the maximum amount of fines that will be assessed for a -specific circulation. Set the *Is Percent* field to True if the maximum fine -should be a percentage of the item's price. - -You can find Circ Max Fine Rules by navigating to *Administration -> Server -Administration -> Circulation Max Fine Rules*. - -image::media/circ_max_fine_rules.jpg[] - -These rules generally cause the most variation between organizational units. - -Loan duration and recurring fine rate are designed with 3 levels: short, normal, -and extended loan duration, and low, normal, and high recurring fine rate. These -values are applied to specific items, when item records are created. - -When naming these rules, give them a name that clearly identifies what the rule -does. This will make it easier to select the correct rule when creating your -circ policies. - -=== Circulation Limit Sets === - -Circulation Limit Sets allow you to limit the maximum number of items for -different types of materials that a patron can check out at one time. Evergreen -supports creating these limits based on circulation modifiers, shelving locations, -or circulation limit groups, which allow you to create limits based on MARC data. -The below instructions will allow you to create limits based on circulation -modifiers. 
- -* Configure the circulation limit sets by selecting *Administration -> Local -Administration -> Circulation Limit Sets*. -* *Items Out* - The maximum number of items circulated to a patron at the same -time. -* *Min Depth* - Enter the minimum depth in the org tree that -Evergreen will consider as valid circulation libraries for counting items out. -The min depth is based on org unit type depths. For example, if you want the -items in all of the circulating libraries in your consortium to be eligible for -restriction by this limit set when it is applied to a circulation policy, then -enter a zero (0) in this field. -* *Global* - Check the box adjacent to Global if you want all of the org -units in your consortium to be restricted by this limit set when it is applied -to a circulation policy. Otherwise, Evergreen will only apply the limit to the -direct ancestors and descendants of the owning library. -* *Linked Limit Groups* - Add any circulation modifiers, shelving locations, or circ -limit groups that should be part of this limit set. - -*Example* -Your library (BR1) allows patrons to check out up to 5 videos at one time. This -checkout limit should apply when your library's videos are checked out at any -library in the consortium. Items with DVD, BLURAY, and VHS circ modifiers should -be included in this maximum checkout count. - -To create this limit set, you would add 5 to the *Items Out* field, 0 to the -*Min Depth* field and select the *Global* flag. Add the DVD, BLURAY and VHS circ -modifiers to the limit set. - -== Creating Circulation Policies == - -Once you have identified your data elements that will drive circulation policies -and have created your circulation rules, you are ready to begin creating your -circulation policies. - -If you are managing a small number of rules, you can create and manage -circulation policies in the staff client via *Administration -> Local Administration -> -Circulation Policies*. 
However, if you are managing a large number of policies, -it is easier to create and locate rules directly in the database by updating -*config.circ_matrix_matchpoint*. - -The *config.circ_matrix_matchpoint* table is central to the configuration of -circulation parameters. It collects the main set of data used to determine what -rules apply to any given circulation. It is useful to think of its -columns in terms of 'match' columns, those that are used to match the -particulars of a given circulation transaction, and 'result' columns, those that -return the various parameters that are applied to the matching transaction. - -* Circulation policies by checkout library or owning library? - - If your policies should follow the rules of the library that checks out the -item, select the checkout library as the *Org Unit (org_unit)*. - - If your policies should follow the rules of the library that owns the item, -select the consortium as the *Org Unit (org_unit)* and select the owning library -as the *Item Circ Lib (copy_circ_lib)*. -* Renewal policies can be created by setting *Renewals? (is_renewal)* to True. -* You can apply the duration rules, recurring fine rules, maximum fine rules, -and circulation limit sets created in the above steps when creating the circulation -policy. - -=== Best practices for creating policies === - -* Start by replacing the default consortium-level circ policy with one that -contains a majority of your libraries' duration, recurring fine, and max fine -rules. This first rule will serve as a default for all materials and permission -groups. -* If many libraries in your consortium have rules that differ from the default -for particular materials or people, set a consortium-wide policy for that circ -modifier or that permission group. -* After setting these consortium defaults, if a library has a circulation rule -that differs from the default, you can then create a rule for that library.
You -only need to change the parameters that are different from the default -parameters. The rule will inherit the values for the other parameters from that -default consortium rule. -* Try to avoid unnecessary repetition. -* Try to get as much agreement as possible among the libraries in your -consortium. - -*Example 1* - -image::media/circ_example1.png[] - -In this example, the consortium has decided on a 21_day_2_renew loan rule for -general materials, i.e. books, etc. Most members do not charge overdue fines. -System 1 charges 25 cents per day to a maximum of $3.00, but otherwise uses the -default circulation duration. - -*Example 2* - -image::media/circ_example2.png[] - -This example includes a basic set of fields and creates a situation where items -with a circ modifier of "book" or "music" can be checked out, but "dvd" items -will not circulate. The associated rules would apply during checkouts. - -*Example 3* - -image::media/circ_example3.png[] - -This example builds on the earlier example and adds some more complicated -options. - -It is still true that "book" and "music" items can be checked out, while "dvd" -is not circulated. However, now we have added new rules that state that "Adult" -patrons of "SYS1" can circulate "dvd" items. - -=== Settings Relevant to Circulation === - -The following circulation settings, available via *Administration --> Local Administration -> Library Settings Editor*, can -also affect your circulation duration, renewals and fine policy. - -* *Auto-Extend Grace Periods* - When enabled, grace periods will auto-extend. -By default this will be only when they are a full day or more and end on a -closed date, though other options can alter this. -* *Auto-Extending Grace Periods extend for all closed dates* - If enabled and -Grace Periods auto-extending is turned on, grace periods will extend past all -closed dates they intersect, within hard-coded limits. 
- -* *Auto-Extending Grace Periods include trailing closed dates* - If enabled and -Grace Periods auto-extending is turned on, grace periods will include closed -dates that directly follow the last day of the grace period. -* *Checkout auto renew age* - When an item has been checked out for at least -this amount of time, an attempt to check out the item to the patron that it is -already checked out to will simply renew the circulation. -* *Cap Max Fine at Item Price* - This prevents the system from charging more -than the item price in overdue fines. -* *Lost Item Billing: New Min/Max Price Settings* - Patrons will be billed -at least the Min Price and at most the Max Price, even if the item's price -is outside that range. To set a fixed price for all lost items, set min and -max to the same amount. -* *Charge fines on overdue circulations when closed* - Normally, fines are not -charged when a library is closed. When set to True, fines will be charged during -scheduled closings and normal weekly closed days. diff --git a/docs-antora/modules/admin_initial_setup/pages/carousels.adoc b/docs-antora/modules/admin_initial_setup/pages/carousels.adoc deleted file mode 100644 index 26351a0d4d..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/carousels.adoc +++ /dev/null @@ -1,256 +0,0 @@ -= Adding Carousels to Your Public Catalog = -:toc: - -This feature fully integrates the creation and management of book carousels into Evergreen, allowing for the display of book cover images on a library’s public catalog home page. Carousels may be animated or static. They can be manually maintained by staff or automatically maintained by Evergreen. Titles can appear in carousels based on newly cataloged items, recent returns, popularity, etc. To appear in a carousel, titles must have copies that are visible in the public catalog, circulating, and holdable. Serial titles cannot be displayed in carousels.
- -image::carousel1.png[Book carousel on public catalog front screen] - -There are three administrative interfaces used to create and manage carousels and their components: - -* <<carousel_types,Carousel Types>> - used to define different types of carousels -* <<carousel_definitions,Carousels>> - used to create and manage specific carousel definitions -* Carousel Library Mapping - used to manage which libraries will display specific carousels, as well as the default display order on a library’s public catalog home page - -Each of these interfaces is detailed below. - -[[carousel_types]] -== CAROUSEL TYPES == - -The Carousel Types administrative interface is used to create, edit, or delete carousel types. Carousel Types define the attributes of a carousel, such as whether it is automatically managed and how it is filtered. A carousel must be associated with a carousel type to function properly. - -There are five stock Carousel Types: - -* *Newly Cataloged Items* - titles appear automatically based on the active date of the title’s copies -* *Recently Returned Items* - titles appear automatically based on the most recently circulated copy’s check-in scan date and time -* *Top Circulated Titles* - titles appear automatically based on the most circulated copies in the Item Libraries identified in the carousel definition; titles are chosen based on the number of action.circulation rows created during an interval specified in the carousel definition, which includes both circulations and renewals -* *Newest Items by Shelving Location* - titles appear automatically based on the active date and shelving location of the title’s copies -* *Manual* - titles are added and managed manually by library staff - -Additional types can be created in the Carousel Types Interface. Types can also be modified or deleted. Access the interface by going to Administration > Server Administration > Carousel Types. - -The interface displays the list of carousel types in a grid format.
The grid displays the Carousel Type ID, name of the carousel type, and the characteristics of each type by default. The Actions Menu is used to edit or delete a carousel type. - -image::carousel2.png[Carousel Types configuration screen] - -=== Attributes of Carousel Types === - -Each Carousel Type defines attributes used to add titles to the carousels associated with the type. Filters apply only to automatically managed carousels. - -* *Automatically Managed* - when set to true, Evergreen uses a cron job to add titles to a carousel automatically based on a set of criteria established in the carousel definition. When set to false, library staff must enter the contents of a carousel manually. -* *Filter by Age* - when set to true, the type includes or excludes titles based on the age of their attached items -* *Filter by Item Owning Library* - when set to true, the type includes or excludes titles based on the owning organizational unit of their attached items -* *Filter by Item Location* - when set to true, the type includes or excludes titles based on the shelving locations of their attached items - -=== Creating a Carousel Type === - -. Go to Administration > Server Administration > Carousel Types -. Select the *New Carousel Type* button -. Enter a name for the carousel type -. Use the checkboxes to apply filtering characteristics to the carousel type; filters for age, item owning library, and location are applied only to automatically managed carousels - .. Automatically Managed? - .. Filter by Age? - .. Filter by Item Owning Library? - .. Filter by Item Location? - -image::carousel3.png[Carousel Types Editor screen] - -=== Editing a Carousel Type === - -Users can rename a carousel type or change the characteristics of existing types. - -. Go to Administration > Server Administration > Carousel Types -. Select the type you wish to edit with the checkbox at the beginning of the row for that type -.
Select the Actions Button (or right-click on the type’s row) and choose Edit Type - -=== Deleting a Carousel Type === - -Carousel types can be deleted with the Actions Menu - -. Go to Administration > Server Administration > Carousel Types -. Select the type you wish to delete with the checkbox at the beginning of the row for that type -. Select the Actions button (or right-click on the type’s row) and choose Delete Type; carousel types cannot be deleted if there are carousels attached - -[[carousel_definitions]] -== CAROUSEL DEFINITIONS == - -The Carousels administration page is used to define the characteristics of the carousel, such as the carousel type, which libraries will be able to display the carousel, and which shelving locations should be used to populate the carousel. - -The Carousels administration page is accessed through Administration > Server Administration > Carousels. (Please note that in the community release, this page will eventually move to Local Administration.) The interface displays existing carousels in a grid format. The grid can be filtered by organizational unit, based on ownership. The filter may include ancestor or descendent organization units, depending on the scope chosen. The columns displayed correspond to attributes of the carousel. The following are displayed by default: Carousel ID, Carousel Type, Owner, Name, Last Refresh Time, Active, Maximum Items. - -image::carousel4.png[Carousels configuration screen] - -Additional columns may be added to the display with the column picker, including the log in of the creator and/or editor, the carousel’s creation or edit time, age limit, item libraries, shelving locations, or associated record bucket. 
- -=== Attributes of a Carousel Definition === - -* *Carousel ID* - unique identifier assigned by Evergreen when the carousel is created -* *Carousel Type* - identifies the carousel type associated with the carousel -* *Owner* - identifies the carousel’s owning library organizational unit -* *Name* - the name or label of the carousel -* *Bucket* - once the carousel is created, this field displays a link to the carousel’s corresponding record bucket -* *Age Limit* - defines the age limit for the items (titles) that are displayed in the carousel -* *Item Libraries* - identifies which libraries should be used for locating items/titles to add to the carousel; this attribute does not check organizational unit inheritance, so include all libraries that should be used -* *Shelving Locations* - sets which shelving locations can/should be used to find titles for the carousel -* *Last Refresh Time* - identifies the last date when the carousel was refreshed, either automatically or manually. This is currently a read-only value. -* *Is Active* - when set to true, the carousel is visible to the public catalog; automatically-maintained carousels are refreshed regularly (inactive automatic carousels are not refreshed) -* *Maximum Items* - defines the maximum number of titles that should appear in the carousel; this attribute is enforced only for automatically maintained carousels - - -=== Creating a Carousel from the Carousels Administration Page === - -. Go to Administration > Server Administration > Carousels -. Select the *New Carousels* button -. A popup will open where you will enter information about the carousel -. Choose the Carousel Type from the drop-down menu -. Choose the Owning Library from the drop-down -. Enter the Name of the carousel -. Enter the Age limit - this field accepts values such as “6 mons or months,” “21 days,” etc. -. Choose the Item Libraries - this identifies the library from which items are pulled to include in the carousel - .. Click the field.
A list of available organizational units will appear.
- .. Select the organizational unit(s)
- ... The owning and circulating libraries must be included on this list for titles/items to appear in the carousel. For libraries with items owned at one organizational unit (e.g., the library system), but circulating at a different organizational unit (e.g., a branch), both would need to be included in the list.
- .. Click Add
-. Shelving Locations - this identifies the shelving locations from which items are pulled to include in the carousel. Please note that this field is not applicable when creating a carousel of the Newly Cataloged carousel type. To create a carousel of newly cataloged items with shelving location filters, use the Newest Items by Shelving Location type instead.
- .. Click the field. A list of available shelving locations will appear.
- .. Select the shelving location - the library that “owns” the shelving location does not have to be included in the list of Item Libraries
- .. Click Add
-. Last Refresh Time - not used while creating carousels; displays the date/time when the carousel was most recently refreshed
-. Is Active - set to true for the carousel to be visible to the public catalog
-. Enter the Maximum Number of titles to display in the carousel
-. Click Save
-
-image::carousel5.png[Carousel editor screen]
-
-=== Carousels and Record Buckets ===
-
-When a carousel is created, a corresponding record bucket is also created. The bucket is owned by the staff user who created the carousel; however, access to the carousel is controlled by the carousel’s owning library. The bucket is removed if the carousel is deleted.
-
-=== View a Carousel Bucket from Record Buckets ===
-
-A record bucket linked to a carousel can be displayed in the Record Bucket interface through the Shared Bucket by ID action.
-
-. Go to Cataloging > Record Buckets
-. Select the Buckets button
-.
Enter the bucket number of the carousel’s bucket; this can be found on the Carousels administration page. “Bucket” is one of the column options for the grid. It displays the bucket number. -. The contents of the carousel and bucket will be displayed - -Users can add or remove records from the bucket. If the associated carousel is automatically maintained, any changes to the bucket’s contents are subject to being overwritten by the next automatic update. Users are warned of this when making changes to the bucket contents. - -=== Create a Carousel from a Record Bucket === - -A carousel can be created from a record bucket. - -. Go to Cataloging > Record Buckets -. The Bucket View tab opens. Select the Buckets button and choose one of the existing buckets to open. The list of titles in the bucket will display on the screen. -. Select the Buckets button and choose Create Carousel from Bucket - -image::carousel6.png[Record Bucket Actions button - Create Carousel from Bucket] - -TIP: The Create Carousel from Bucket option is visible in both Record Query and Pending Buckets; however, initiating the creation of a carousel from either of these two tabs creates an empty bucket only. It will not pull titles from either to add contents to the carousel. - -=== Manually Adding Contents to a Carousel from Record Details Page === - -Titles can be added to a manually maintained carousel through the record details page. - -. Go to the details page for a title record -. Select the Other Actions button -. Choose Add to Carousel -+ -image::carousel7.png[Actions button on Record Summary page - Add to Carousel] -+ -. A drop-down with a list of manually maintained carousels that have been shared to at least one of the user’s working locations will appear -. Choose the carousel from the list -. 
Click Add to Selected Carousel
-
-TIP: The Add to Carousel menu item is disabled if no qualifying carousels are available
-
-[[carousel_mapping]]
-== CAROUSEL LIBRARY MAPPING ==
-
-The Carousel Library Mapping administration page is used to manage which libraries will display specific carousels, as well as the default display order on a library’s public catalog.
-
-The visibility of a carousel at a given organizational unit is not automatically inherited by the descendants of that unit. The carousel’s owning organizational unit is automatically added to the list of display organizational units.
-
-The interface is accessed by going to Administration > Server Administration > Carousel Library Mapping. (Please note that in the community release, this page will eventually move to Local Administration.) The interface produces a grid display with a list of the current mappings. The grid can be filtered by organizational unit, based on ownership. The filter may include ancestor or descendant organizational units, depending on the scope chosen.
-
-WARNING: If a carousel is deleted, its mappings are deleted.
-
-=== Attributes of Carousel Library Mapping ===
-
-* *ID* - this is a unique identifier automatically generated by the database
-* *Carousel* - this is the carousel affected by the mapping
-* *Override Name* - this creates a name for automatically managed carousels that will be used in the public catalog display of the carousel instead of the carousel’s name
-* *Library* - this is the organizational unit associated with the particular mapping; excludes descendant units
-* *Sequence Number* - this is the order in which carousels will be displayed, starting with “0” (Example: Carousel 0 at the consortial level will display first. Carousel 1 set at the consortial level will appear just below Carousel 0.)
-
-=== Create a New Carousel Mapping ===
-
-. Go to Administration > Server Administration > Carousel Library Mapping
-. Select *New Carousels Visible at Library*
-.
Choose the Carousel you wish to map from the Carousel drop-down menu
-. If you want the title of the carousel on the public catalog home screen to be different from the carousel’s name, enter your desired name in the Override Name field
-. Click on the Library field to choose on which library organizational unit’s public catalog home screen the carousel will appear
-. Enter a number in the Sequence Number field to indicate in which order the carousel should appear on the library public catalog home screen. “0” is the top level. “1” is the subsequent level, etc.
-
-image::carousel8.png[Carousel mapping editor screen]
-
-
-== CAROUSELS - OTHER ADMINISTRATIVE FEATURES ==
-
-=== New Staff Permissions ===
-
-The carousels feature includes three new staff permissions:
-
-* ADMIN_CAROUSEL_TYPES - allows users to create, edit, or delete carousel types
-* ADMIN_CAROUSELS - allows users to create, edit, or delete carousels
-* REFRESH_CAROUSEL - allows users to perform a manual refresh of carousels
-
-=== New Database Tables ===
-
-A new table was added to the database to specify the carousel and how it is to be populated, including the name, owning library, details about the most recent refresh, and a link to the Record Bucket and its contents.
-
-Another new table defines carousel types and includes the name, whether the carousel is manually or automatically maintained, and a link to the QStore query specifying the foundation database query used to populate the carousel.
-
-A third new table defines the set of organizational units at which the carousel is visible and the display order in which carousels should be listed at each organizational unit.
-
-=== OPAC Templates ===
-
-Carousels display on the public catalog home page by default. Administrators can modify the public catalog templates to display carousels where desired.
-
-A new Template Toolkit macro called “carousels” allows the Evergreen administrator to inject the contents of one or more carousels into any point in the OPAC.
The macro will accept the following parameters:
-
-* carousel_id
-* dynamic (Boolean, default value false)
-* image_size (small, medium, or large)
-* width (number of titles to display on a “pane” of the carousel)
-* animated (Boolean to specify whether the carousel should automatically cycle through its panes)
-* animation_interval (the interval (in seconds) to wait before advancing to the next pane)
-
-If the carousel_id parameter is supplied, the carousel with that ID will be displayed. If carousel_id is not supplied, all carousels visible to the public catalog's physical_loc organizational unit are displayed.
-
-The dynamic parameter controls whether the entire contents of the carousel should be written in HTML (dynamic set to false) or if the contents of the carousel should be asynchronously fetched using JavaScript.
-
-A set of CSS classes for the carousels and their contents will be exposed in style.css.tt2. Lightweight JavaScript is used for navigating the carousels, based either on jQuery or native JavaScript. The carousels are responsive.
-
-=== Accessibility Features ===
-
-* Users can advance through the carousel using only a keyboard
-* Users can navigate to a title from the carousel using only a keyboard
-* Users can pause animated carousels
-* Changes in the state of the carousel are announced to screen readers.
-
-=== OpenSRF ===
-
-Several Evergreen APIs are used to support the following operations:
-
-* refreshing the contents of an individual carousel
-* refreshing the contents of all automatically-maintained carousels that are overdue for refresh
-* retrieving the names and contents of a carousel or all visible ones
-* creating a carousel by copying an existing record bucket
-
-The retrieval APIs allow for anonymous access to permit Evergreen admins to create alternative implementations of the carousel display or to share the carousels with other systems.
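-
-For example, the bulk refresh operation can be triggered by hand from a srfsh session. This is only a sketch: the open-ils.storage.carousel.refresh_all method name comes from the cron job description below, while the exact srfsh invocation is an assumption.
-
-[source,bash]
-----
-$ srfsh
-srfsh# request open-ils.storage open-ils.storage.carousel.refresh_all
-----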
-
-=== Cron Job ===
-
-The carousels feature includes a cron job, added to the example crontab, to perform automatic carousel refreshes. It is implemented as a srfsh script that invokes open-ils.storage.carousel.refresh_all.
-
diff --git a/docs-antora/modules/admin_initial_setup/pages/describing_your_organization.adoc b/docs-antora/modules/admin_initial_setup/pages/describing_your_organization.adoc
deleted file mode 100644
index 444dccc4e3..0000000000
--- a/docs-antora/modules/admin_initial_setup/pages/describing_your_organization.adoc
+++ /dev/null
@@ -1,99 +0,0 @@
-= Describing your organization =
-:toc:
-
-Your Evergreen system is almost ready to go. You'll need to add each of the
-libraries that will be using your Evergreen system. If you're doing this for a
-consortium, you'll have to add your consortium as a whole, and all the
-libraries and branches that are members of the consortium. In this chapter,
-we'll talk about how to get the Evergreen system to see all your libraries, how
-to set each one up, and how to edit all the details of each one.
-
-== Organization Unit Types ==
-
-The term _Organization Unit Types_ refers to levels in the hierarchy of your
-library system(s). Examples could include: All-Encompassing Consortium, Library
-System, Branch, Bookmobile, Sub-Branch, etc.
-
-You can add or remove organizational unit types, and rename them as needed to
-match the organizational hierarchy of the libraries using your installation of
-Evergreen. Organizational unit types should never have proper names since they
-are only generic types.
-
-When working with configuration, settings, and permissions, it is very
-important to be careful of the Organization Unit *Context Location* - this is the
-organizational unit to which the configuration settings are being applied. If,
-for example, a setting is applied at the Consortium context location, all child
-units will inherit that setting.
If a specific branch location is selected,
-only that branch and its child units will have the setting applied. The levels
-of the hierarchy to which settings can be applied are often referred to in
-terms of "depth" in various configuration interfaces. In a typical hierarchy,
-the consortium has a depth of 0, the system is 1, the branch is 2, and any
-bookmobiles or sub-branches are 3.
-
-=== Create and edit Organization Unit Types ===
-
-. Open *Administration > Server Administration > Organization Types*.
-. In the left panel, expand the *Organization Unit Types* hierarchy.
-. Click on an organization type to edit the existing type or to add a new
- organization unit.
-. A form opens in the right panel, displaying the data for the selected
- organization unit.
-. Edit the fields as required and click *Save*.
-
-To create a new dependent organization unit, click *New Child*. The new child
-organization unit will appear in the left panel list below the parent.
-Highlight the new unit, edit the data as needed, and click *Save*.
-
-== Organizational Units ==
-
-'Organizational Units' are the specific instances of the organization unit types
-that make up your library's hierarchy. These will have distinctive proper names
-such as Main Street Branch or Townsville Campus.
-
-=== Remove or edit default Organizational Units ===
-
-After installing the Evergreen software, the default CONS, SYS1, BR1, etc.,
-organizational units remain. These must be removed or edited to reflect actual
-library entities.
-
-=== Create and edit Organizational Units ===
-
-. Open *Administration > Server Administration > Organizational Units*.
-. In the left panel, expand the Organizational Units hierarchy, select a
- unit.
-. A form opens in the right panel, displaying the data for the selected
- organizational unit.
-.
To edit the existing, default organizational unit, enter system or library
- specific data in the form; complete all three tabs: Main Settings, Hours
- of Operation, Addresses.
-. Click *Save*.
-
-To create a new dependent organizational unit, click *New Child*. The new child
-will appear in the hierarchy list below the parent unit. Click on the new unit,
-edit the data, and click *Save*.
-
-=== Organizational Unit data ===
-
-The *Addresses* tab allows you to enter library contact information. Library
-phone number, email address, and addresses are used in patron email
-notifications, hold slips, and transit slips. The Library address tab is broken
-out into four address types: Physical Address, Holds Address, Mailing Address,
-ILL Address.
-
-The *Hours of Operation* tab is where you enter regular, weekly hours. Holiday
-and other closures are set in the *Closed Dates Editor*. Hours of operation and
-closed dates impact due dates and fine accrual.
-
-=== After Changing Organization Unit Data ===
-
-After you change Org Unit data, you must run the autogen.sh script.
-This script updates the Evergreen organization tree and fieldmapper IDL.
-You will get unpredictable results if you don't run this after making changes.
-
-Run this script as the *opensrf* Linux account.
-
-[source, bash]
------------------------------------------------------------------------------
-autogen.sh
------------------------------------------------------------------------------
-
diff --git a/docs-antora/modules/admin_initial_setup/pages/describing_your_people.adoc b/docs-antora/modules/admin_initial_setup/pages/describing_your_people.adoc
deleted file mode 100644
index 2d8b476bc0..0000000000
--- a/docs-antora/modules/admin_initial_setup/pages/describing_your_people.adoc
+++ /dev/null
@@ -1,368 +0,0 @@
-= Describing your people =
-:toc:
-
-Many different members of your staff will use your Evergreen system to perform
-the wide variety of tasks required of the library.
-
-When the Evergreen installation was completed, a number of permission groups
-should have been automatically created. These permission groups are:
-
-* Users
-* Patrons
-* Staff
-* Catalogers
-* Circulators
-* Acquisitions
-* Acquisitions Administrator
-* Cataloging Administrator
-* Circulation Administrator
-* Local Administrator
-* Serials
-* System Administrator
-* Global Administrator
-* Data Review
-* Volunteers
-
-Each of these permission groups has a different set of permissions that allows
-its members to do different things with the Evergreen system. Some of the
-permissions are the same between groups; some are different. These permissions
-are typically tied to one or more working locations (sometimes referred to as
-working organizational units or work OUs), which affect where a particular
-user can exercise the permissions they have been granted.
-
-== Setting the staff user's working location ==
-To grant a working location to a staff user in the staff client:
-
-. Search for the patron. Select *Search > Search for Patrons* from the top menu.
-. When you retrieve the correct patron record, select *Other > User Permission
- Editor* from the upper right corner. The permissions associated with this
- account appear in the right side of the client, with the *Working Location*
- list at the top of the screen.
-. The *Working Location* list displays the Organizational Units in your
- consortium. Select the check box for each Organization Unit where this user
- needs working permissions. Clear any other check boxes for Organization Units
- where the user no longer requires working permissions.
-. Scroll all the way to the bottom of the page and click *Save*. This user
- account is now ready to be used at your library.
-
-As you scroll down the page you will come to the *Permissions* list. These are
-the permissions that are given through the *Permission Group* that you assigned
-to this user.
Depending on your own permissions, you may also have the ability -to grant individual permissions directly to this user. - -== Comparing approaches for managing permissions == -The Evergreen community uses two different approaches to deal with managing -permissions for users: - -* *Staff Client* -+ -Evergreen libraries that are most comfortable using the staff client tend to -manage permissions by creating different profiles for each type of user. When -you create a new user, the profile you assign to the user determines their -basic set of permissions. This approach requires many permission groups that -contain overlapping sets of permissions: for example, you might need to create -a _Student Circulator_ group and a _Student Cataloger_ group. Then if a new -employee needs to perform both of these roles, you need to create a third -_Student Cataloger / Circulator_ group representing the set of all of the -permissions of the first two groups. -+ -The advantage to this approach is that you can maintain the permissions -entirely within the staff client; a drawback to this approach is that it can be -challenging to remember to add a new permission to all of the groups. Another -drawback of this approach is that the user profile is also used to determine -circulation and hold rules, so the complexity of your circulation and hold -rules might increase significantly. -+ -* *Database Access* -+ -Evergreen libraries that are comfortable manipulating the database directly -tend to manage permissions by creating permission groups that reflect discrete -roles within a library. At the database level, you can make a user belong to -many different permission groups, and that can simplify your permission -management efforts. 
For example, if you create a _Student Circulator_ group and
-a _Student Cataloger_ group, and a new employee needs to perform both of these
-roles, you can simply assign them to both of the groups; you do not need to
-create an entirely new permission group in this case. An advantage of this
-approach is that the user profile can represent only the user's borrowing
-category and requires only the basic _Patrons_ permissions, which can simplify
-your circulation and hold rules.
-
-Permissions and profiles are not carved in stone. As the system administrator,
-you can change them as needed. You may set and alter the permissions for each
-permission group in line with what your library, or possibly your consortium,
-defines as the appropriate needs for each function in the library.
-
-== Managing permissions in the staff client ==
-In this section, we'll show you in the staff client:
-
-* where to find the available permissions
-* where to find the existing permission groups
-* how to see the permissions associated with each group
-* how to add or remove permissions from a group
-
-We also provide an appendix with a listing of suggested minimum permissions for
-some essential groups. You can compare the existing permissions with these
-suggested permissions and, if any are missing, you will know how to add them.
-
-=== Where to find existing permissions and what they mean ===
-In the staff client, in the upper right corner of the screen, click on
-*Administration > Server Administration > Permissions*.
-
-The list of available permissions will appear on screen and you can scroll down
-through them to see permissions that are already available in your default
-installation of Evergreen.
-
-There are over 500 permissions in the permission list. They appear in two
-columns: *Code* and *Description*. Code is the name of the permission as it
-appears in the Evergreen database. Description is a brief note on what the
-permission allows.
All of the most common permissions have easily -understandable descriptions. - -=== Where to find existing Permission Groups === -In the staff client, in the upper right corner of the screen, navigate to -*Administration > Server Administration > Permission Groups*. - -Two panes will open on your screen. The left pane provides a tree view of -existing Permission Groups. The right pane contains two tabs: Group -Configuration and Group Permissions. - -In the left pane, you will find a listing of the existing Permission Groups -which were installed by default. Click on the + sign next to any folder to -expand the tree and see the groups underneath it. You should see the Permission -Groups that were listed at the beginning of this chapter. If you do not and you -need them, you will have to create them. - -=== Adding or removing permissions from a Permission Group === -First, we will remove a permission from the Staff group. - -. From the list of Permission Groups, click on *Staff*. -. In the right pane, click on the *Group Permissions* tab. You will now see a - list of permissions that this group has. -. From the list, choose *CREATE_CONTAINER*. This will now be highlighted. -. Click the *Delete Selected* button. CREATE_CONTAINER will be deleted from the - list. The system will not ask for a confirmation. If you delete something by - accident, you will have to add it back. -. Click the *Save Changes* button. - -You can select a group of individual items by holding down the _Ctrl_ key and -clicking on them. You can select a list of items by clicking on the first item, -holding down the _Shift_ key, and clicking on the last item in the list that -you want to select. - -Now, we will add the permission we just removed back to the Staff group. - -. From the list of Permission Groups, click on *Staff*. -. In the right pane, click on the *Group Permissions* tab. -. Click on the *New Mapping* button. The permission mapping dialog box will - appear. -. 
From the Permission drop down list, choose *CREATE_CONTAINER*. -. From the Depth drop down list, choose *Consortium*. -. Click the checkbox for *Grantable*. -. Click the *Add Mapping* button. The new permission will now appear in the - Group Permissions window. -. Click the *Save Changes* button. - -If you have saved your changes and you don't see them, you may have to click -the Reload button in the upper left side of the staff client screen. - -== Managing role-based permission groups in the staff client == - -Main permission groups are granted in the staff client through Edit in the patron record using the Main (Profile) Permission Group field. Additional permission -groups can be granted using secondary permission groups. - -[[secondaryperms]] -=== Secondary Group Permissions === - -The _Secondary Groups_ button functionality enables supplemental permission -groups to be added to staff accounts. The *CREATE_USER_GROUP_LINK* and -*REMOVE_USER_GROUP_LINK* permissions are required to display and use this -feature. - -In general when creating a secondary permission group do not grant the -permission to login to Evergreen. - -==== Granting Secondary Permissions Groups ==== - - -. Open the account of the user you wish to grant secondary permission group to. -. Click _Edit_. -. Click _Secondary Groups_, located to the right of the _Main (Profile) Permission Group_. -+ -image::media/sup-permissions-1_web_client.png[Secondary Permissions Group] -+ -. From the dropdown menu select one of the secondary permission groups. -+ -image::media/sup-permissions-2_web_client.png[Secondary Permission Group List] -+ -. Click _Add_. -. Click _Apply Changes_. -+ -image::media/sup-permissions-3.png[Secondary Permission Group Save] -+ -. Click _Save_ in the top right hand corner of the _Edit Screen_ to save the user's account. - - -==== Removing Secondary Group Permissions ==== -. Open the account of the user you wish to remove the secondary permission group from. -. Click _Edit_. -. 
Click _Secondary Groups_, located to the right of the _Main (Profile) Permission Group_.
-+
-image::media/sup-permissions-1_web_client.png[Secondary Permissions Group]
-+
-. Click _Delete_ beside the permission group you would like to remove.
-+
-image::media/sup-permissions-4_web_client.png[Secondary Permissions Group Delete]
-+
-. Click _Apply Changes_.
-+
-image::media/sup-permissions-5_web_client.png[Secondary Permissions Group Save]
-+
-. Click _Save_ in the top right hand corner of the _Edit Screen_ to save the user's account.
-
-== Managing role-based permission groups in the database ==
-While the ability to assign a user to multiple permission groups has existed in
-Evergreen for years, a staff client interface is not currently available to
-facilitate the work of the Evergreen administrator. However, if you or members
-of your team are comfortable working directly with the Evergreen database, you
-can use this approach to separate the borrowing profile of your users from the
-permissions that you grant to staff, while minimizing the amount of overlapping
-permissions that you need to manage for a set of permission groups that would
-otherwise multiply exponentially to represent all possible combinations of
-staff roles.
-
-In the following example, we create three new groups:
-
-* a _Student_ group used to determine borrowing privileges
-* a _Student Cataloger_ group representing a limited set of cataloging
- permissions appropriate for students
-* a _Student Circulator_ group representing a limited set of circulation
- permissions appropriate for students
-
-Then we add three new users to our system: one who needs to perform some
-cataloging duties as a student; one who needs to perform some circulation duties
-as a student; and one who needs to perform both cataloging and circulation
-duties. This section demonstrates how to add these permissions to the users at
-the database level.
- -To create the Student group, add a new row to the _permission.grp_tree_ table -as a child of the _Patrons_ group: - -[source,sql] ------------------------------------------------------------------------------- -INSERT INTO permission.grp_tree (name, parent, usergroup, description, application_perm) -SELECT 'Students', pgt.id, TRUE, 'Student borrowers', 'group_application.user.patron.student' -FROM permission.grp_tree pgt - WHERE name = 'Patrons'; ------------------------------------------------------------------------------- - -To create the Student Cataloger group, add a new row to the -_permission.grp_tree_ table as a child of the _Staff_ group: - -[source,sql] ------------------------------------------------------------------------------- -INSERT INTO permission.grp_tree (name, parent, usergroup, description, application_perm) -SELECT 'Student Catalogers', pgt.id, TRUE, 'Student catalogers', 'group_application.user.staff.student_cataloger' -FROM permission.grp_tree pgt -WHERE name = 'Staff'; ------------------------------------------------------------------------------- - -To create the Student Circulator group, add a new row to the -_permission.grp_tree_ table as a child of the _Staff_ group: - -[source,sql] ------------------------------------------------------------------------------- -INSERT INTO permission.grp_tree (name, parent, usergroup, description, application_perm) -SELECT 'Student Circulators', pgt.id, TRUE, 'Student circulators', 'group_application.user.staff.student_circulator' -FROM permission.grp_tree pgt -WHERE name = 'Staff'; ------------------------------------------------------------------------------- - -We want to give the Student Catalogers group the ability to work with MARC -records at the consortial level, so we assign the UPDATE_MARC, CREATE_MARC, and -IMPORT_MARC permissions at depth 0: - -[source,sql] ------------------------------------------------------------------------------- -WITH pgt AS ( - SELECT id - FROM 
permission.grp_tree - WHERE name = 'Student Catalogers' -) -INSERT INTO permission.grp_perm_map (grp, perm, depth) -SELECT pgt.id, ppl.id, 0 -FROM permission.perm_list ppl, pgt -WHERE ppl.code IN ('UPDATE_MARC', 'CREATE_MARC', 'IMPORT_MARC'); ------------------------------------------------------------------------------- - -Similarly, we want to give the Student Circulators group the ability to check -out items and record in-house uses at the system level, so we assign the -COPY_CHECKOUT and CREATE_IN_HOUSE_USE permissions at depth 1 (overriding the -same _Staff_ permissions that were granted only at depth 2): - -[source,sql] ------------------------------------------------------------------------------- -WITH pgt AS ( - SELECT id - FROM permission.grp_tree - WHERE name = 'Student Circulators' -) INSERT INTO permission.grp_perm_map (grp, perm, depth) -SELECT pgt.id, ppl.id, 1 -FROM permission.perm_list ppl, pgt -WHERE ppl.code IN ('COPY_CHECKOUT', 'CREATE_IN_HOUSE_USE'); ------------------------------------------------------------------------------- - -Finally, we want to add our students to the groups. The request may arrive in -your inbox from the library along the lines of "Please add Mint Julep as a -Student Cataloger, Bloody Caesar as a Student Circulator, and Grass Hopper as a -Student Cataloguer / Circulator; I've already created their accounts and given -them a work organizational unit." You can translate that into the following SQL -to add the users to the pertinent permission groups, adjusting for the -inevitable typos in the names of the users. 
- -First, add our Student Cataloger: - -[source,sql] ------------------------------------------------------------------------------- -WITH pgt AS ( - SELECT id FROM permission.grp_tree - WHERE name = 'Student Catalogers' -) -INSERT INTO permission.usr_grp_map (usr, grp) -SELECT au.id, pgt.id -FROM actor.usr au, pgt -WHERE first_given_name = 'Mint' AND family_name = 'Julep'; ------------------------------------------------------------------------------- - -Next, add the Student Circulator: - -[source,sql] ------------------------------------------------------------------------------- -WITH pgt AS ( - SELECT id FROM permission.grp_tree - WHERE name = 'Student Circulators' -) -INSERT INTO permission.usr_grp_map (usr, grp) -SELECT au.id, pgt.id -FROM actor.usr au, pgt -WHERE first_given_name = 'Bloody' AND family_name = 'Caesar'; ------------------------------------------------------------------------------- - -Finally, add the all-powerful Student Cataloger / Student Circulator: - -[source,sql] ------------------------------------------------------------------------------- - WITH pgt AS ( - SELECT id FROM permission.grp_tree - WHERE name IN ('Student Catalogers', 'Student Circulators') -) -INSERT INTO permission.usr_grp_map (usr, grp) -SELECT au.id, pgt.id -FROM actor.usr au, pgt -WHERE first_given_name = 'Grass' AND family_name = 'Hopper'; ------------------------------------------------------------------------------- - -While adopting this role-based approach might seem labour-intensive when -applied to a handful of students in this example, over time it can help keep -the permission profiles of your system relatively simple in comparison to the -alternative approach of rapidly reproducing permission groups, overlapping -permissions, and permissions granted on a one-by-one basis to individual users. 
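-
-As a final check, you can confirm the new memberships with a query joining the same tables used in this section. This is a sketch; it assumes only the actor.usr name columns already used above.
-
-[source,sql]
------------------------------------------------------------------------------
-/* List each student and the permission groups they now belong to */
-SELECT au.first_given_name, au.family_name, pgt.name AS permission_group
-FROM permission.usr_grp_map pugm
-JOIN actor.usr au ON au.id = pugm.usr
-JOIN permission.grp_tree pgt ON pgt.id = pugm.grp
-WHERE au.family_name IN ('Julep', 'Caesar', 'Hopper');
------------------------------------------------------------------------------ 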
diff --git a/docs-antora/modules/admin_initial_setup/pages/designing_your_catalog.adoc b/docs-antora/modules/admin_initial_setup/pages/designing_your_catalog.adoc deleted file mode 100644 index 43b8ffc53c..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/designing_your_catalog.adoc +++ /dev/null @@ -1,716 +0,0 @@ -= Designing your catalog = -:toc: - -When people want to find things in your Evergreen system, they will check the -catalog. In Evergreen, the catalog is made available through a web interface, -called the _OPAC_ (Online Public Access Catalog). In the latest versions of the -Evergreen system, the OPAC is built on a set of programming modules called the -Template Toolkit. You will see the OPAC sometimes referred to as the _TPAC_. - -In this chapter, we'll show you how to customize the OPAC, change it from its -default configuration, and make it your own. - -== Configuring and customizing the public interface == - -The public interface is referred to as the TPAC or Template Toolkit (TT) within -the Evergreen community. The template toolkit system allows you to customize the -look and feel of your OPAC by editing the template pages (.tt2) files as well as -the associated style sheets. - -=== Locating the default template files === - -The default URL for the TPAC on a default Evergreen system is -_http://localhost/eg/opac/home_ (adjust _localhost_ to match your hostname or IP -address). - -The default template file is installed in _/openils/var/templates/opac_. - -You should generally avoid touching the installed default template files, unless -you are contributing changes for Evergreen to adopt as a new default. Even then, -while you are developing your changes, consider using template overrides rather -than touching the installed templates until you are ready to commit the changes -to a branch. See below for information on template overrides. - -=== Mapping templates to URLs === - -The mapping for templates to URLs is straightforward. 
Following are a few examples, where _<template_path>_ is a placeholder for one
or more directories that will be searched for a match:

* _http://localhost/eg/opac/home => /openils/var/<template_path>/opac/home.tt2_
* _http://localhost/eg/opac/advanced => /openils/var/<template_path>/opac/advanced.tt2_
* _http://localhost/eg/opac/results => /openils/var/<template_path>/opac/results.tt2_

The template files themselves can process, be wrapped by, or include other
template files. For example, the _home.tt2_ template currently involves a number
of other template files to generate a single HTML file.

Example Template Toolkit file: _opac/home.tt2_.
----
[% PROCESS "opac/parts/header.tt2";
    WRAPPER "opac/parts/base.tt2";
    INCLUDE "opac/parts/topnav.tt2";
    ctx.page_title = l("Home") %]
    <div id="search-wrapper">
      [% INCLUDE "opac/parts/searchbar.tt2" %]
    </div>
    <div id="content-wrapper">
        <div id="main-content-home">
            <div class="common-full-pad"></div>
            [% INCLUDE "opac/parts/homesearch.tt2" %]
            <div class="common-full-pad"></div>
        </div>
    </div>
[% END %]
----
Note that file references are relative to the top of the template directory.

=== How to override template files ===

Overrides for template files or TPAC pages go in a directory that parallels the
structure of the default templates directory. The overrides then get pulled in
via the Apache configuration.

The following example demonstrates how to create a file that overrides the
default "Advanced search page" (_advanced.tt2_) by adding a new
_templates_custom_ directory and editing the new file in that directory.

----
bash$ mkdir -p /openils/var/templates_custom/opac
bash$ cp /openils/var/templates/opac/advanced.tt2 \
    /openils/var/templates_custom/opac/.
bash$ vim /openils/var/templates_custom/opac/advanced.tt2
----

=== Configuring the custom templates directory in Apache's eg.conf ===

You now need to teach Apache about the new custom template directory. Edit
_/etc/apache2/sites-available/eg.conf_ and add the following <Location>
element to each of the <VirtualHost> elements in which you want to include the
overrides. The default Evergreen configuration includes a VirtualHost directive
for port 80 (HTTP) and another one for port 443 (HTTPS); you probably want to
edit both, unless you want the HTTP user experience to be different from the
HTTPS user experience.

----
<VirtualHost *:80>
    # <snip>

    # - absorb the shared virtual host settings
    Include eg_vhost.conf
    <Location /eg>
        PerlAddVar OILSWebTemplatePath "/openils/var/templates_custom"
    </Location>

    # </snip>
</VirtualHost>
----

Finally, reload the Apache configuration to pick up the changes. You should now
be able to see your change at _http://localhost/eg/opac/advanced_ where
_localhost_ is the hostname of your Evergreen server.

=== Adjusting colors for your public interface ===

You may adjust the colors of your public interface by editing the _colors.tt2_
file. The location of this file is in
_/openils/var/templates/opac/parts/css/colors.tt2_.
When you customize the colors of your public interface, remember to create a
custom file in your custom template folder and edit that custom file, not the
file located in your default template.

=== Adjusting fonts in your public interface ===

Font sizes can be changed in the _colors.tt2_ file located in
_/openils/var/templates/opac/parts/css/_. Again, create and edit a custom
template version and not the file in the default template.

Other aspects of fonts such as the default font family can be adjusted in
_/openils/var/templates/opac/css/style.css.tt2_.

=== Media file locations in the public interface ===
The media files (mostly PNG images) used by the default TPAC templates are stored
in the repository in _Open-ILS/web/images/_ and installed in
_/openils/var/web/images/_.

=== Changing some text in the public interface ===

Out of the box, TPAC includes placeholder text and links. For example, there is
a set of links cleverly named Link 1, Link 2, and so on in the header and footer
of every page in TPAC. Here is how to customize that for a _custom templates_
skin.

To begin with, find the page(s) that contain the text in question. The simplest
way to do that is with the grep -r command. In the following example, search for
files that contain the text "Link 1":

----
bash$ grep -r "Link 1" /openils/var/templates/opac
/openils/var/templates/opac/parts/topnav_links.tt2
4: [% l('Link 1') %]
----

Next, copy the file into our overrides directory and edit it with vim.

Copying the links file into the overrides directory.

----
bash$ cp /openils/var/templates/opac/parts/topnav_links.tt2 \
/openils/var/templates_custom/opac/parts/topnav_links.tt2
bash$ vim /openils/var/templates_custom/opac/parts/topnav_links.tt2
----

Finally, edit the link text in _opac/parts/topnav_links.tt2_. Content of the
_opac/parts/topnav_links.tt2_ file.
----
----

For the most part, the page looks like regular HTML, but note the
`[% l(" ... ") %]` that surrounds the text of each link. The `[% ... %]`
signifies a TT block, which can contain one or more TT processing instructions.
`l(" ... ");` is a function that marks text for localization (translation); a
separate process can subsequently extract localized text as GNU
gettext-formatted PO (Portable Object) files.

As Evergreen supports multiple languages, any customization to Evergreen's
default text must use the localization function. Also, note that the
localization function supports placeholders such as `[_1]`, `[_2]` in the text;
these are replaced by the contents of variables passed as extra arguments to the
`l()` function.

Once the link and link text have been edited to your satisfaction, load the page
in a Web browser and see the live changes immediately.

=== Adding translations to PO file ===

After you have added custom text in translatable form to a TT2 template, you
need to add the custom strings and their translations to the PO file containing
the translations. Evergreen PO files are stored in
_/openils/var/template/data/locale/_.

The PO file consists of pairs of text extracted from the code: the message ID,
denoted as _msgid_, and the message string, denoted as _msgstr_. When adding a
custom string to the PO file:

* The line with the English expression must start with _msgid_. The English
term must be enclosed in double quotation marks.
* The line with the translation must start with _msgstr_. The translation into
the local language must also be enclosed in double quotation marks.
* It is recommended to add a note stating in which template and on which line
the particular string is located.
The lines with notes must be marked as comments i.e., start with number sign (#) - -Example: - ----- - -# --------------------------------------------------------------------- -# The lines below contains the custom strings manually added to the catalog -# --------------------------------------------------------------------- - -#: ../../Open-ILS/src/custom_templates/opac/parts/topnav_links.tt2:1 -msgid "Union Catalog of the Czech Republic" -msgstr "Souborný katalog České republiky" - - -#: ../../Open-ILS/src/custom_templates/opac/parts/topnav_links.tt2:1 -msgid "Uniform Information Gateway " -msgstr "Jednotná informační brána" - ----- - -[NOTE] -==== -It is good practice to save backup copy of the original PO file before changing it. -==== - -After making changes, restart Apache to make the changes take effect. As root run the command: - ----- -service apache2 restart ----- - -=== Adding and removing MARC fields from the record details display page === - -It is possible to add and remove the MARC fields and subfields displayed in the -record details page. In order to add MARC fields to be displayed on the details -page of a record, you will need to map the MARC code to variables in the -_/openils/var/templates/opac/parts/misc_util.tt2 file_. - -For example, to map the template variable _args.pubdates_ to the date of -publication MARC field 260, subfield c, add these lines to _misc_util.tt2_: - ----- -args.pubdates = []; -FOR sub IN xml.findnodes('//*[@tag="260"]/*[@code="c"]'); - args.pubdates.push(sub.textContent); -END; -args.pubdate = (args.pubdates.size) ? args.pubdates.0 : '' ----- - -You will then need to edit the -_/openils/var/templates/opac/parts/record/summary.tt2_ file in order to get the -template variable for the MARC field to display. 
- -For example, to display the date of publication code you created in the -_misc_util.tt2_ file, add these lines: - ----- -[% IF attrs.pubdate; %] - -[% END; %] ----- - -You can add any MARC field to your record details page. Moreover, this approach -can also be used to display MARC fields in other pages, such as your results -page. - -==== Using bibliographic source variables ==== - -For bibliographic records, there is a "bib source" that can be associated with -every record. This source and its ID are available as record attributes called -_bib_source.source_ and _bib_source.id_. These variables do not present -themselves in the catalog display by default. - -.Example use case -**** - -In this example, a library imports e-resource records from a third party and -uses the bib source to indicate where the records came from. Patrons can place -holds on these titles, but they must be placed via the vendor website, not in -Evergreen. By exposing the bib source, the library can alter the Place Hold -link for these records to point at the vendor website. - -**** - -== Setting the default physical location for your library environment == - -_physical_loc_ is an Apache environment variable that sets the default physical -location, used for setting search scopes and determining the order in which -copies should be sorted. This variable is set in -_/etc/apache2/sites-available/eg.conf_. The following example demonstrates the -default physical location being set to library ID 104: - ----- -SetEnv physical_loc 104 ----- - -[#setting_a_default_language_and_adding_optional_languages] -== Setting a default language and adding optional languages == - -_OILSWebLocale_ adds support for a specific language. Add this variable to the -Virtual Host section in _/etc/apache2/eg_vhost.conf_. - -_OILSWebDefaultLocale_ specifies which locale to display when a user lands on a -page in TPAC and has not chosen a different locale from the TPAC locale picker. 
-The following example shows the _fr_ca_ locale being added to the locale picker -and being set as the default locale: - ----- -PerlAddVar OILSWebLocale "fr_ca" -PerlAddVar OILSWebLocale "/openils/var/data/locale/opac/fr-CA.po" -PerlAddVar OILSWebDefaultLocale "fr-CA" ----- - -Below is a table of the currently supported languages packaged with Evergreen: - -[options="header"] -|=== -|Language| Code| PO file -|Arabic - Jordan| ar_jo | /openils/var/data/locale/opac/ar-JO.po -|Armenian| hy_am| /openils/var/data/locale/opac/hy-AM.po -|Czech| cs_cz| /openils/var/data/locale/opac/cs-CZ.po -|English - Canada| en_ca| /openils/var/data/locale/opac/en-CA.po -|English - Great Britain| en_gb| /openils/var/data/locale/opac/en-GB.po -|*English - United States| en_us| not applicable -|French - Canada| fr_ca| /openils/var/data/locale/opac/fr-CA.po -|Portuguese - Brazil| pt_br| /openils/var/data/locale/opac/pt-BR.po -|Spanish| es_es| /openils/var/data/locale/opac/es-ES.po -|=== -*American English is built into Evergreen so you do not need to set up this -language and there are no PO files. - -=== Updating translations in Evergreen using current translations from Launchpad === - -Due to Evergreen release workflow/schedule, some language strings may already have been translated in Launchpad, -but are not yet packaged with Evergreen. In such cases, it is possible to manually replace the PO file in -Evergreen with an up-to-date PO file downloaded from Launchpad. - -. Visit the Evergreen translation site in https://translations.launchpad.net/evergreen[Launchpad] -. Select required language (e.g. _Czech_ or _Spanish_) -. Open the _tpac_ template and then select option _Download translation_. Note: to be able to download the translation file you need to be logged in to Launchpad. -. Select _PO format_ and submit the _request for download_ button. 
You can also request a download of all existing templates and languages at
once; see https://translations.launchpad.net/evergreen/master/+export. The
download link will be sent to the email address you provide.
. Download the file and name it according to the language used (e.g., _cs-CZ.po_
for Czech or _es-ES.po_ for Spanish).
. Copy the downloaded file to _/openils/var/template/data/locale_. It is good
practice to back up the original PO file beforehand.
. Be sure that the desired language is set as default, using the
xref:#setting_a_default_language_and_adding_optional_languages[Default language]
procedures.

Analogously, to update the web staff client translations, download the
translation template _webstaff_ and copy it to
_openils/var/template/data/locale/staff_.

Changes require a web server reload to take effect. As root, run the command:

----
service apache2 restart
----

== Change Date Format in Patron Account View ==
Libraries with same-day circulations may want their patrons to be able to view
the due *time* as well as the due date when they log in to their OPAC account.
To accomplish this, go to _opac/myopac/circs.tt2_. Find the line that reads:

----
[% date.format(due_date, DATE_FORMAT) %]
----

Replace it with:

----
[% date.format(due_date, '%D %I:%M %p') %]
----


== Including External Content in Your Public Interface ==

The public interface allows you to include external services and content. These
can include book cover images, user reviews, tables of contents, summaries,
author notes, annotations, user suggestions, and series information, among
other services. Some of these services are free while others require a
subscription.

The following are some of the external content services which you can configure
in Evergreen.

=== OpenLibrary ===

The default install of Evergreen includes OpenLibrary book covers. The settings
for this are controlled by the <added_content> section of
_/openils/conf/opensrf.xml_.
Here are the key elements of this configuration:

----
<module>OpenILS::WWW::AddedContent::OpenLibrary</module>
----

This section calls the OpenLibrary Perl module. If you wish to link to a book
cover service other than OpenLibrary, you must refer to the location of the
corresponding Perl module. You will also need to change other settings
accordingly.

----
<timeout>1</timeout>
----

Max number of seconds to wait for an added content request to return data. Data
not returned within the timeout is considered a failure.

----
<retry_timeout>600</retry_timeout>
----

This setting is the amount of time to wait before we try again.

----
<max_errors>15</max_errors>
----

Maximum number of consecutive lookup errors a given process can have before
added content lookups are disabled for everyone. To adjust the size of the cover
image on the record details page, edit the config.tt2 file and change the value
of record.summary.jacket_size. The default value is "medium" and the available
options are "small", "medium" and "large."

=== ChiliFresh ===

ChiliFresh is a subscription-based service which allows book covers, reviews and
social interaction of patrons to appear in your catalog. To activate ChiliFresh,
you will need to open the Apache configuration file _/etc/apache2/eg_vhost.conf_
and edit several lines:

. Uncomment (remove the "#" at the beginning of the line) and add your ChiliFresh
account number:

----
#SetEnv OILS_CHILIFRESH_ACCOUNT
----

. Uncomment this line and add your ChiliFresh Profile:

----
#SetEnv OILS_CHILIFRESH_PROFILE
----

. Uncomment the line indicating the location of the Evergreen JavaScript for
ChiliFresh:

----
#SetEnv OILS_CHILIFRESH_URL http://chilifresh.com/on-site/js/evergreen.js
----

. Uncomment the line indicating the secure URL for the Evergreen JavaScript:

----
#SetEnv OILS_CHILIFRESH_HTTPS_URL https://secure.chilifresh.com/on-site/js/evergreen.js
----

[id="_content_cafe"]
=== Content Café ===

Content Café is a subscription-based service that can add jacket images,
reviews, summaries, tables of contents and book details to your records.

In order to activate Content Café, edit the _/openils/conf/opensrf.xml_ file and
change the <module> element to point to the ContentCafe Perl Module:

----
<module>OpenILS::WWW::AddedContent::ContentCafe</module>
----

To adjust settings for Content Café, edit a couple of fields in the
<added_content> section of _/openils/conf/opensrf.xml_.

Edit the _userid_ and _password_ elements to match the user id and password for
your Content Café account.

This provider retrieves content based on ISBN or UPC, with a default preference
for ISBNs. If you wish for UPCs to be preferred, or wish one of the two identifier
types to not be considered at all, you can change the "identifier_order" option
in opensrf.xml. When the option is present, only the identifier(s) listed will
be sent.

=== Obalkyknih.cz ===

==== Setting up Obalkyknih.cz account ====

If your library wishes to use added content provided by Obalkyknih.cz, a service
based in the Czech Republic, you have to
http://obalkyknih.cz/signup[create an Obalkyknih.cz account].
Please note that the interface is only available in Czech. After logging in to
your Obalkyknih.cz account, you have to add your IP address and Evergreen server
address to your account settings.
(In case each library uses an address of its own, all of these addresses have
to be added.)
- -==== Enabling Obalkyknih.cz in Evergreen ==== - -Set obalkyknih_cz.enabled to true in '/openils/var/templates/opac/parts/config.tt2': - -[source,perl] ----- -obalkyknih_cz.enabled = 'true'; ----- - -Enable added content from Obalkyknih.cz in '/openils/conf/opensrf.xml' configuration file (and – at the same time – disable added content from Open Library, i.e., Evergreen's default added content provider): - -[source,xml] ----- - -OpenILS::WWW::AddedContent::ObalkyKnih ----- - -Using default settings for Obalkyknih.cz means all types of added content from Obalkyknih.cz are visible in your online catalog. -If the module is enabled, book covers are always displayed. Other types of added content (summaries, ratings or tables of contents) can be: - -* switched off using _false_ option, -* switched on again using _true_ option. - -The following types of added content are used: - -* summary (or annotation) -* tocPDF (table of contents available as image) -* tocText (table of contents available as text) -* review (user reviews) - -An example of how to switch off summaries: - -[source,xml] ----- -false ----- - - -=== Google Analytics === - -Google Analytics is a free service to collect statistics for your Evergreen -site. Statistic tracking is disabled by default through the Evergreen -client software when library staff use your site within the client, but active -when anyone uses the site without the client. This was a preventive measure to -reduce the potential risks for leaking patron information. In order to use Google -Analytics you will first need to set up the service from the Google Analytics -website at http://www.google.com/analytics/. To activate Google Analytics you -will need to edit _config.tt2_ in your template. To enable the service set -the value of google_analytics.enabled to true and change the value of -_google_analytics.code_ to be the code in your Google Analytics account. 
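After editing the template, it is easy to forget whether the change landed in the custom override or in the stock file. The sketch below greps a config.tt2 for both Google Analytics settings; the `ga_configured` helper and the sample values are our own illustration, not part of Evergreen:

```shell
#!/bin/sh
# Check that a config.tt2 contains both Google Analytics settings described
# above. Helper and sample values are illustrative only.
ga_configured() {
    # $1: path to the config.tt2 to inspect
    grep -q "google_analytics.enabled" "$1" &&
    grep -q "google_analytics.code" "$1"
}

# Demonstrate against a throwaway file standing in for the custom template.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
google_analytics.enabled = 'true';
google_analytics.code = 'UA-XXXXXXX-1';
EOF
ga_configured "$cfg" && echo "analytics settings present"
rm -f "$cfg"
```

Point the helper at `/openils/var/templates_custom/opac/parts/config.tt2` (or wherever your override lives) to confirm the edit before reloading Apache.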
=== NoveList ===

NoveList is a subscription-based service providing reviews and recommendations
for books in your catalog. To activate your NoveList service in Evergreen, open
the Apache configuration file _/etc/apache2/eg_vhost.conf_ and edit the line:

----
#SetEnv OILS_NOVELIST_URL
----

You should use the URL provided by NoveList.

=== RefWorks ===

RefWorks is a subscription-based online bibliographic management tool. If you
have a RefWorks subscription, you can activate RefWorks in Evergreen by editing
the _config.tt2_ file located in your template directory. You will need to set
the _ctx.refworks.enabled_ value to _true_. You may also set the RefWorks URL by
changing the _ctx.refworks.url_ setting in the same file.

=== SFX OpenURL Resolver ===

An OpenURL resolver allows you to find electronic resources and pull them into
your catalog based on the ISBN or ISSN of the item. In order to use the SFX
OpenURL resolver, you will need to subscribe to the Ex Libris SFX service. To
activate the service in Evergreen, edit the _config.tt2_ file in your template.
Enable the resolver by changing the value of _openurl.enabled_ to _true_ and
change the _openurl.baseurl_ setting to point to the URL of your OpenURL
resolver.

=== Syndetic Solutions ===

Syndetic Solutions is a subscription service providing book covers and other
data for items in your catalog. In order to activate Syndetic, edit the
_/openils/conf/opensrf.xml_ file and change the <module> element to point to
the Syndetic Perl Module:

----
<module>OpenILS::WWW::AddedContent::Syndetic</module>
----

You will also need to edit the <userid> element to be the user id provided to
you by Syndetic.

Then, you will need to uncomment and edit the <url> element so that it
points to the Syndetic service:

----
<url>http://syndetics.com/index.aspx</url>
----

For changes to be activated for your public interface you will need to restart
Evergreen and Apache.
- -The Syndetic Solutions provider retrieves images based on the following identifiers -found in bibliographic records: - -* ISBN -* UPC -* ISSN - - -=== Clear External/Added Content Cache === - -On the catalog's record summary page, there is a link for staff that will forcibly clear -the cache of the Added Content for that record. This is helpful for when the Added Content -retrieved the wrong cover jacket art, summary, etc. and caches the wrong result. - -image::media/clear-added-content-cache-1.png[Clear Cache Link] - -Once clicked, there is a pop up that will display what was cleared from the cache. - -image::media/clear-added-content-cache-2.jpg[Example Popup] - -You will need to reload the record in the staff client to obtain the new images from your -Added Content Supplier. - - -=== Configure a Custom Image for Missing Images === - -You can configure a "no image" image other than the standard 1-pixel -blank image. The example eg_vhost.conf file provides examples in the -comments. Note: Evergreen does not provide default images for these. - - -== Including Locally Hosted Content in Your Public Interface == - -It is also possible to show added content that has been generated locally -by placing the content in a specific spot on the web server. It is -possible to have local book jackets, reviews, TOC, excerpts or annotations. - -=== File Location and Format === - -By default the files will need to be placed in directories under -*/openils/var/web/opac/extras/ac/* on the server(s) that run Apache. - -The files need to be in specific folders depending on the format of the -added content. Local Content can only be looked up based on the -record ID at this time. - -.URL Format: -\http://catalog/opac/extras/ac/*\{type}/\{format}/r/\{recordid}* - - * *type* is one of *jacket*, *reviews*, *toc*, *excerpt* or *anotes*. - * *format* is type dependent: - - for jacket, one of small, medium or large - - others, one of html, xml or json ... 
html is the default for non-image added content - * *recordid* is the bibliographic record id (bre.id). - -=== Example === - -If you have some equipment that you are circulating such as a -laptop or eBook reader and you want to add an image of the equipment -that will show up in the catalog. - -[NOTE] -============= -If you are adding jacket art for a traditional type of media -(book, CD, DVD) consider adding the jacket art to the http://openlibrary.org -project instead of hosting it locally. This would allow other -libraries to benefit from your work. -============= - -Make note of the Record ID of the bib record. You can find this by -looking at the URL of the bib in the catalog. -http://catalog/eg/opac/record/*123*, 123 is the record ID. -These images will only show up for one specific record. - -Create 3 different sized versions of the image in png or jpg format. - - * *Small* - 80px x 80px - named _123-s.jpg_ or _123-s.png_ - This is displayed in the browse display. - * *Medium* - 240px x 240px - named _123-m.jpg_ or _123-m.png_ - This is displayed on the summary page. - * *Large* - 400px x 399px - named _123-l.jpg_ or _123-l.png_ - This is displayed if the summary page image is clicked on. - -[NOTE] -The image dimensions are up to you, use what looks good in your catalog. - -Next, upload the images to the evergreen server(s) that run apache, -and move/rename the files to the following locations/name. -You will need to create directories that are missing. - - * Small - Move the file *123-s.jpg* to */openils/var/web/opac/extras/ac/jacket/small/r/123* - * Medium - Move the file *123-m.jpg* to */openils/var/web/opac/extras/ac/jacket/medium/r/123*. - * Large - Move the file *123-l.jpg* to */openils/var/web/opac/extras/ac/jacket/large/r/123*. - -[NOTE] -The system doesn't need the file extension to know what kind of file it is. - -Reload the bib record summary in the web catalog and your new image will display. 
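The copy-and-rename steps above can be scripted so the directory layout and the extension-less filenames come out right every time. A sketch follows; the `place_jacket` helper is our own, and `AC_ROOT` is parameterized so you can dry-run it outside `/openils`:

```shell
#!/bin/sh
# Place locally hosted jacket art for one bib record, following the layout
# described above. AC_ROOT defaults to the standard added-content path but
# can be overridden for a dry run.
AC_ROOT=${AC_ROOT:-/openils/var/web/opac/extras/ac}
REC=123   # bibliographic record id (bre.id)

place_jacket() {
    # $1: size (small|medium|large)  $2: source image file
    mkdir -p "$AC_ROOT/jacket/$1/r"
    cp "$2" "$AC_ROOT/jacket/$1/r/$REC"   # note: no file extension
}

# Uncomment once the images are on the server:
# place_jacket small  123-s.jpg
# place_jacket medium 123-m.jpg
# place_jacket large  123-l.jpg
```

Run it on each Apache server that serves the catalog, then reload the record summary to confirm the image displays.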
- -== Styling the searchbar on the homepage == - -The `.searchbar-home` class is added to the div that -contains the searchbar when on the homepage. This allows -sites to customize the searchbar differently on the -homepage than in search results pages, and other places the -search bar appears. For example, adding the following CSS -would create a large, Google-style search bar on the homepage only: - -[source,css] ----- -.searchbar-home .search-box { - width: 80%; - height: 3em; -} - -.searchbar-home #search_qtype_label, -.searchbar-home #search_itype_label, -.searchbar-home #search_locg_label { - display:none; -} ----- - diff --git a/docs-antora/modules/admin_initial_setup/pages/hard_due_dates.adoc b/docs-antora/modules/admin_initial_setup/pages/hard_due_dates.adoc deleted file mode 100644 index e2a162f2d5..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/hard_due_dates.adoc +++ /dev/null @@ -1,30 +0,0 @@ -= Hard due dates = -:toc: - -This feature allows you to specify a specific due date within your circulation policies. This is particularly useful for academic and school libraries, who may wish to make certain items due at the end of a semester or term. - -NOTE: To work with hard due dates, you will need the CREATE_CIRC_DURATION, UPDATE_CIRC_DURATION, and DELETE_CIRC_DURATION permissions at the _consortium_ level. - -== Creating a hard due date == -Setting up hard due dates is a two-step process. You must first create a hard due date, and then populate it with specific values. - -To create a hard due date: - -. Click *Administration -> Server Administration -> Hard Due Date Changes*. -. Click *New Hard Due Date*. -. In the *Name* field, enter a name for your hard due date. Note that each hard due date can have multiple values, so it's best to use a generic name here, such as "End of semester." -. In the *Owner* field, select the appropriate org unit for your new hard due date. -. In the *Current Ceiling Date* field, select any value. 
This field is required, but its value will be overwritten in subsequent steps, so you may enter an arbitrary date here. -. Check the *Always Use?* checkbox if you want items to only receive the due dates you specify, regardless of when they would ordinarily be due. If you leave this box unchecked, your specified due dates will serve as "ceiling" values that limit, rather than override, other circulation rules. In other words, with this box checked, items may be due only on the specified dates. With the box unchecked, items may be due _on or before_ the specified dates, simply not after. -. Click *Save*. - -To add date values to your hard due date: - -. Click the hyperlinked name of the due date you just created. -. Click on *New Hard Due Date Value* -. In the *Ceiling Date* field, enter the specific date you would like items to be due. -. In the *Active Date* field, enter the date you want this specific due date value to take effect. -. Click *Save*. -. Each Hard Due Date can include multiple values. For example, you can repeat these steps to enter specific due dates for several semesters using this same screen. - -After creating a hard due date and assigning it values, you can apply it by adding it to a circulation policy. diff --git a/docs-antora/modules/admin_initial_setup/pages/importing_via_staff_client.adoc b/docs-antora/modules/admin_initial_setup/pages/importing_via_staff_client.adoc deleted file mode 100644 index 30a1248afa..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/importing_via_staff_client.adoc +++ /dev/null @@ -1,185 +0,0 @@ -= Importing materials in the staff client = -:toc: - -Evergreen exists to connect users to the materials represented by bibliographic -records, call numbers, and copies -- so getting these materials into your -Evergreen system is vital to a successful system. 
There are two primary means
of getting materials into Evergreen:

* The Evergreen staff client offers the *MARC Batch Importer*, which is a
  flexible interface primarily used for small batches of records;
* Alternately, import scripts can load data directly into the database, which is
  a highly flexible but much more complex method of loading materials suitable
  for large batches of records such as the initial migration from your legacy
  library system.

== Staff client batch record imports ==
The staff client has a utility for importing batches of bibliographic and item
records available through *Cataloging > MARC Batch Import/Export*. In addition
to importing new records, this interface can be used to match incoming records
to existing records in the database, add or overlay MARC fields in the existing
record, and add copies to those records.

The MARC Batch Import interface may also be colloquially referred to as
"Vandelay" in the Evergreen community, referring to this interface's internals
in the system. You will also see this name used in several places in the editor.
For instance, when you click on *Record Match Sets*, the title on the screen
will be *Vandelay Match Sets*.

=== When to use the MARC Batch Importer ===

* When importing batches of up to 500 to 1,000 records.
* When you need the system to match those incoming records to existing records
  and overlay or add fields to the existing record.
* When you need to add items to existing records in the system.

WARNING: If you are importing items that do not have barcodes or call numbers, you
must enable the _Vandelay Generate Default Barcodes_ and _Vandelay Default
Barcode Prefix (vandelay.item.barcode.prefix)_ settings.

=== Record Match Sets ===
Click the *Record Match Sets* button to identify how Evergreen should match
incoming records to existing records in the system.
-
-These record match sets can be used when importing records through the MARC
-Batch Importer or when importing order records through the Acquisitions Load
-MARC Order Records interface.
-
-Common match points used when creating a match set include:
-
-* MARC tag 020a (ISBN)
-* MARC tag 022a (ISSN)
-* MARC tag 024a (UPC)
-* MARC tag 028a (Publisher number)
-
-=== Create Match Sets ===
-. On the *Record Match Sets* screen, click *New Match Set* to create a set of
-  record match points. Give the set a *Name*. Assign the *Owning Library* from
-  the dropdown list. The *Match Set Type* should remain as *biblio*. Click
-  *Save*.
-. If you don't see your new set in the list, in the upper left corner of the
-  staff client window, click the *Reload* button.
-. If you had to reload, click the *Record Match Sets* button to get back to
-  that screen. Find your new set in the list and click its name. (The name will
-  appear to be a hyperlink.) This will bring you to the *Vandelay Match Set
-  Editor*.
-. Create an expression that will define the match points for the incoming
-  record. You can choose from two areas to create a match: Record Attribute (MARC
-  fixed fields) or MARC Tag and Subfield. You can use the Boolean operators AND
-  and OR to combine these elements to create a match set.
-. When adding a Record Attribute or MARC tag/subfield, you also can enter a
-  Match Score. The Match Score indicates the relative importance of that match
-  point as Evergreen evaluates an incoming record against an existing record. You
-  can enter any integer into this field. The number that you enter is only
-  important as it relates to other match points.
-+
-Recommended practice is to assign a match score of one (1) to the least
-important match point and to assign scores in increasing powers of two
-(2, 4, 8, and so on) to match points of increasing importance.
-.
After creating a match point, drag the completed match point under the
-  appropriately named Boolean folder in the Expression tree.
-+
-image::media/create_match_sets.png[Creating a Match Point]
-. Click *Save Changes to Expression*.
-
-=== Quality Metrics ===
-* Quality metrics provide a mechanism for Evergreen to measure the quality of
-records and to make importing decisions based on quality.
-* Metrics are configured in the match set editor.
-* Quality metrics are not required when creating a match set.
-* You can use a value in a record attribute (MARC fixed fields) or a MARC tag
-  as your quality metric.
-* The encoding level record attribute can be one indicator of record quality.
-
-image::media/record_quality_metrics.png[Quality Metric Grid]
-
-=== Import Item Attributes ===
-If you are importing items with your records, you will need to map the data in
-your holdings tag to fields in the item record. Click the *Holdings Import
-Profile* button to map this information.
-
-. Click the *New Definition* button to create a new mapping for the holdings tag.
-. Add a *Name* for the definition.
-. Use the *Tag* field to identify the MARC tag that contains your holdings
-  information.
-. Add the subfields that contain specific item information to the appropriate
-  item field.
-. At a minimum, you should add the subfields that identify the *Circulating
-Library*, the *Owning Library*, the *Call Number* and the *Barcode*.
-. For more details, see the full list of import fields.
-
-NOTE: All fields (except for Name and Tag) can contain a MARC subfield code
-(such as "a") or an XPATH query. You can also use the
-xref:admin:librarysettings.adoc#lse-vandelay[related library settings] to set defaults for some of these fields.
- -image::media/batch_import_profile.png[Partial Screenshot of a Holdings Import Profile] - - -=== Overlay/Merge Profiles === -If Evergreen finds a match for an incoming record in the database, you need to -identify which fields should be replaced, which should be preserved, and which -should be added to the record. Click the *Merge/Overlay Profiles* button to -create a profile that contains this information. - -These overlay/merge profiles can be used when importing records through the -MARC Batch Importer or when importing order records through the Acquisitions -Load MARC Order Records interface. - -Evergreen comes pre-installed with two default profiles: - -* *Default merge* - No fields from incoming record are added to match. This - profile is useful for item loads or for order record uploads. -* *Default overlay* - Incoming record will replace existing record. - -You can customize the overlay/merge behavior with a new profile by clicking the -*New Merge Profile* button. Available options for handling the fields include: - -* *Preserve specification* - fields in the existing record that should be - preserved. -* *Replace specification* - fields in existing record that should be replaced - by those in the incoming record. -* *Add specification* - fields from incoming record that should be added to - existing record (in addition to any already there.) -* *Remove specification* - fields that should be removed from incoming record. - -You can add multiple tags to these specifications, separating each tag with a -comma. - -=== Importing the records === -After making the above configurations, you are now ready to import your -records. - -. Click the *Import Records* button -. Provide a unique name for the queue where the records will be loaded -. Identify the match set that should be used for matching -. If you are importing items, identify the *Import Item Attributes* definition - in the Holdings Import Profile -. Select a record source -. 
Select the overlay/merge profile that defines which fields should be
-  replaced, added or preserved
-. Identify which records should be imported; the options are:
-  ** *Import Non-Matching Records* will automatically import records that have
-  no match in the system
-  ** *Merge on Exact Match* will automatically import records that match on the
-  901c (record ID)
-  ** *Merge on Single Match* will automatically import records when there is
-  only one match in the system
-  ** *Merge on Best Match* will automatically import records for the best match
-  in the system; the best match will be determined by the combined total of the
-  records' match point scores
-
-You do not need to select any of these import options at this step. You may also opt to review the records first in the import queue and then import them.
-
-* *Best Single Match Minimum Quality Ratio* should only be changed if quality metrics were used in the match set
-
-  ** Set to 0.0 to import a record regardless of record quality
-  ** Set to 1.0 if the incoming record must be of equal or higher quality than
-  the existing record to be imported
-  ** Set to 1.1 if the incoming record must be of higher quality than the
-  existing record to be imported
-  ** *Insufficient Quality Fall-Through Profile* can also be used with quality
-  metrics. If an incoming record does not meet the standards of the minimum
-  quality ratio, you can identify a back-up merge profile to be used for
-  those records. For example, you may want to use the default overlay
-  profile for high-quality records but use the default merge profile for
-  lower quality records.
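The quality-ratio settings above reduce to a simple comparison between incoming and existing record quality. A hedged sketch of that decision rule (the function and profile names are invented for illustration, not Evergreen code):

```python
def choose_profile(incoming_quality, existing_quality, min_ratio,
                   merge_profile="overlay", fallthrough_profile=None):
    """Pick a merge profile based on a minimum quality ratio.
    Profile names here are hypothetical labels."""
    if existing_quality == 0:
        return merge_profile  # nothing to compare against
    ratio = incoming_quality / existing_quality
    if ratio >= min_ratio:
        return merge_profile
    # Below the minimum ratio, fall through to a back-up profile (or None).
    return fallthrough_profile

# min_ratio 1.0: incoming must be of equal or higher quality.
print(choose_profile(10, 10, 1.0))                               # overlay
# min_ratio 1.1: incoming must be strictly higher quality,
# so an equal-quality record falls through to the back-up profile.
print(choose_profile(10, 10, 1.1, fallthrough_profile="merge"))  # merge
```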
diff --git a/docs-antora/modules/admin_initial_setup/pages/introduction.adoc b/docs-antora/modules/admin_initial_setup/pages/introduction.adoc
deleted file mode 100644
index 188ec35e36..0000000000
--- a/docs-antora/modules/admin_initial_setup/pages/introduction.adoc
+++ /dev/null
@@ -1,7 +0,0 @@
-= Introduction =
-:toc:
-
-The Evergreen system allows a wide range of customizations to every aspect of
-the system. Use this part of the documentation to become familiar with the tools
-for configuring the system as well as customizing the catalog and staff client.
-
diff --git a/docs-antora/modules/admin_initial_setup/pages/migrating_patron_data.adoc b/docs-antora/modules/admin_initial_setup/pages/migrating_patron_data.adoc
deleted file mode 100644
index 3f5a7f70c1..0000000000
--- a/docs-antora/modules/admin_initial_setup/pages/migrating_patron_data.adoc
+++ /dev/null
@@ -1,263 +0,0 @@
-= Migrating Patron Data =
-:toc:
-
-== Introduction ==
-
-This section will explain the task of migrating your patron data from comma
-delimited files into Evergreen. It does not deal with the process of exporting
-from the non-Evergreen system since this process may vary depending on where you
-are extracting your patron records. Patron data could come from an ILS, or it
-could come from a student database in the case of academic records.
-
-When importing records into Evergreen you will need to populate 3 tables in your
-Evergreen database:
-
-* actor.usr - The main table for user data
-* actor.card - Stores the barcode for users; users can have more than 1 card but
-only 1 can be active at a given time.
-* actor.usr_address - Used for storing address information; a user can
-have more than one address.
-
-Before following the procedures below to import patron data into Evergreen, it
-is a good idea to examine the fields in these tables in order to decide on a
-strategy for data to include in your import. It is important to understand the
-data types and constraints on each field.
-
-.
Export the patron data from your existing ILS or from another source into a
-comma delimited file. The comma delimited file used for importing the records
-should use Unicode (UTF8) character encoding.
-
-. Create a staging table. A staging table will allow you to tweak the data before
-importing. Here is an example sql statement:
-+
-[source,sql]
-----------------------------------
-  CREATE TABLE students (
-    student_id int, barcode text, last_name text, first_name text, email text,
-    address_type text, street1 text, street2 text,
-    city text, province text, country text, postal_code text, phone text, profile
-    int DEFAULT 2, ident_type int, home_ou int, claims_returned_count int DEFAULT
-    0, usrname text, net_access_level int DEFAULT 2, password text
-  );
-----------------------------------
-+
-NOTE: The _default_ variables allow you to set defaults for your library or to populate
-required fields in Evergreen if your data includes NULL values.
-+
-The data field profile in the above SQL script refers to the user group and should be an
-integer referencing the id field in permission.grp_tree. Setting this value will affect
-the permissions for the user. See the values in permission.grp_tree for possibilities.
-+
-ident_type is the identification type used for identifying users. This is an integer value
-referencing config.identification_type and should match the id values of that table. The
-default values are 1 for Drivers License, 2 for SSN or 3 for other.
-+
-home_ou is the home organizational unit for the user. This value needs to match the
-corresponding id in the actor.org_unit table.
-+
-. Copy records into the staging table from a comma delimited file.
-+
-[source,sql]
-----------------------------------
-  COPY students (student_id, last_name, first_name, email, address_type, street1, street2,
-    city, province, country, postal_code, phone)
-    FROM '/home/opensrf/patrons.csv'
-    WITH CSV HEADER;
-----------------------------------
-+
-The script will vary depending on the format of your patron load file (patrons.csv).
-+
-. Formatting of some fields may be required to fit Evergreen field formatting. Here is an example
-of sql to adjust phone numbers in the staging table to fit the Evergreen field:
-+
-[source,sql]
-----------------------------------
-  UPDATE students SET phone = replace(replace(replace(rpad(substring(phone from 1 for 9), 10, '-') ||
-    substring(phone from 10), '(', ''), ')', ''), ' ', '-');
-----------------------------------
-+
-Data ``massaging'' will be required to fit the formats used in Evergreen.
-+
-. Insert records from the staging table into the actor.usr Evergreen table:
-+
-[source,sql]
-----------------------------------
-  INSERT INTO actor.usr (
-    profile, usrname, email, passwd, ident_type, ident_value, first_given_name,
-    family_name, day_phone, home_ou, claims_returned_count, net_access_level)
-    SELECT profile, students.usrname, email, password, ident_type, student_id,
-    first_name, last_name, phone, home_ou, claims_returned_count, net_access_level
-    FROM students;
-----------------------------------
-+
-. Insert records into actor.card from actor.usr:
-+
-[source,sql]
-----------------------------------
-  INSERT INTO actor.card (usr, barcode)
-    SELECT actor.usr.id, students.barcode
-    FROM students
-    INNER JOIN actor.usr
-      ON students.usrname = actor.usr.usrname;
-----------------------------------
-+
-This assumes a one-to-one card-to-patron relationship. If your patron data import
-has multiple cards assigned to one patron, more complex import scripts that look
-for inactive or active flags may be required.
-+
-.
Update actor.usr.card field with actor.card.id to associate the active card with the user:
-+
-[source,sql]
-----------------------------------
-  UPDATE actor.usr
-    SET card = actor.card.id
-    FROM actor.card
-    WHERE actor.card.usr = actor.usr.id;
-----------------------------------
-+
-. Insert records into actor.usr_address to add address information for users:
-+
-[source,sql]
-----------------------------------
-  INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
-    SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
-    students.country, students.postal_code
-    FROM students
-    INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;
-----------------------------------
-+
-. Update actor.usr with the address id from the address table.
-
-[source,sql]
-----------------------------------
-  UPDATE actor.usr
-    SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
-    FROM actor.usr_address
-    WHERE actor.usr.id = actor.usr_address.usr;
-----------------------------------
-
-This assumes 1 address per patron. More complex scenarios may require more sophisticated SQL.
-
-== Creating an sql Script for Importing Patrons ==
-
-The procedure for importing patrons can be automated with the help of an sql script. Follow these
-steps to create an import script:
-
-. Create a new file and name it import.sql
-. Edit the file to look similar to this:
-
-[source,sql]
-----------------------------------
-  BEGIN;
-
-  -- Remove any old staging table.
-  DROP TABLE IF EXISTS students;
-
-  -- Create staging table.
- CREATE TABLE students (
-    student_id text, barcode text, last_name text, first_name text, email text, address_type text,
-    street1 text, street2 text, city text, province text, country text, postal_code text, phone
-    text, profile int, ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
-    net_access_level int DEFAULT 2, password text, already_exists boolean DEFAULT FALSE
-  );
-
-  --Copy records from your import text file
-  COPY students (student_id, last_name, first_name, email, address_type, street1, street2, city, province,
-    country, postal_code, phone, password)
-    FROM '/home/opensrf/patrons.csv' WITH CSV HEADER;
-
-  --Determine which records are new, and which are merely updates of existing patrons
-  --You may wish to also add a check on the home_ou column here, so that you don't
-  --accidentally overwrite the data of another library in your consortium.
-  --You may also use a different matchpoint than actor.usr.ident_value.
-  UPDATE students
-    SET already_exists = TRUE
-    FROM actor.usr
-    WHERE students.student_id = actor.usr.ident_value;
-
-  --Update the names of existing patrons, in case they have changed their name
-  UPDATE actor.usr
-    SET first_given_name = students.first_name, family_name=students.last_name
-    FROM students
-    WHERE actor.usr.ident_value=students.student_id
-    AND (first_given_name != students.first_name OR family_name != students.last_name)
-    AND students.already_exists;
-
-  --Update email addresses of existing patrons
-  --You may wish to update other fields as well, while preserving others
-  --actor.usr.passwd is an example of a field you may not wish to update,
-  --since patrons may have set the password to something other than the
-  --default.
- UPDATE actor.usr - SET email=students.email - FROM students - WHERE actor.usr.ident_value=students.student_id - AND students.email != '' - AND actor.usr.email != students.email - AND students.already_exists; - - --Insert records from the staging table into the actor.usr table. - INSERT INTO actor.usr ( - profile, usrname, email, passwd, ident_type, ident_value, first_given_name, family_name, - day_phone, home_ou, claims_returned_count, net_access_level) - SELECT profile, students.usrname, email, password, ident_type, student_id, first_name, - last_name, phone, home_ou, claims_returned_count, net_access_level - FROM students WHERE NOT already_exists; - - --Insert records from the staging table into the actor.card table. - INSERT INTO actor.card (usr, barcode) - SELECT actor.usr.id, students.barcode - FROM students - INNER JOIN actor.usr - ON students.usrname = actor.usr.usrname - WHERE NOT students.already_exists; - - --Update actor.usr.card field with actor.card.id to associate active card with the user: - UPDATE actor.usr - SET card = actor.card.id - FROM actor.card - WHERE actor.card.usr = actor.usr.id; - - --INSERT records INTO actor.usr_address from staging table. 
- INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
-    SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
-    students.country, students.postal_code
-    FROM students
-    INNER JOIN actor.usr ON students.usrname = actor.usr.usrname
-    WHERE NOT students.already_exists;
-
-
-  --Update actor.usr mailing address with id from actor.usr_address table.
-  UPDATE actor.usr
-    SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
-    FROM actor.usr_address
-    WHERE actor.usr.id = actor.usr_address.usr;
-
-  COMMIT;
-----------------------------------
-
-Placing the sql statements between BEGIN; and COMMIT; creates a transaction
-block so that if any sql statements fail, the entire process is canceled and the
-database is rolled back to its original state. Lines beginning with -- are
-comments to let you know what each sql statement is doing and are not processed.
-
-== Batch Updating Patron Data ==
-
-For academic libraries, doing batch updates to add new patrons to the Evergreen
-database is a critical task. The above procedures and import script can be
-easily adapted to create an update script for importing new patrons from
-external databases. If the data import file contains only new patrons, then the
-above procedures will work well to insert those patrons. However, if the data
-load contains all patrons, a second staging table and a procedure to remove
-existing patrons from that second staging table may be required before importing
-the new patrons. Moreover, additional steps to update address information and
-perhaps delete inactive patrons may also be desired depending on the
-requirements of the institution.
-
-After the scripts to import and update patrons have been developed, another
-important task for library staff is to develop an import strategy and
-schedule which suits the needs of the library.
This could be determined by -registration dates of your institution in the case of academic libraries. It is -important to balance the convenience of patron loads and the cost of processing -these loads vs staff adding patrons manually. - diff --git a/docs-antora/modules/admin_initial_setup/pages/migrating_your_data.adoc b/docs-antora/modules/admin_initial_setup/pages/migrating_your_data.adoc deleted file mode 100644 index 0c89278b61..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/migrating_your_data.adoc +++ /dev/null @@ -1,350 +0,0 @@ -= Migrating from a legacy system = -:toc: - -== Introduction == - -When you migrate to Evergreen, you generally want to migrate the bibliographic -records and item information that existed in your previous library system. For -anything more than a few thousand records, you should import the data directly -into the database rather than use the tools in the staff client. While the data -that you can extract from your legacy system varies widely, this section -assumes that you or members of your team have the ability to write scripts and -are comfortable working with SQL to manipulate data within PostgreSQL. If so, -then the following section will guide you towards a method of generating common -data formats so that you can then load the data into the database in bulk. - -== Making electronic resources visible in the catalog == -Electronic resources generally do not have any call number or item information -associated with them, and Evergreen enables you to easily make bibliographic -records visible in the public catalog within sections of the organizational -unit hierarchy. For example, you can make a set of bibliographic records -visible only to specific branches that have purchased licenses for the -corresponding resources, or you can make records representing publicly -available electronic resources visible to the entire consortium. 
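The visibility mechanism described above depends entirely on how each 856 field is coded. As a quick sanity check, the indicator and subfield requirements detailed in the table in this section can be modeled with a small sketch; a plain dictionary stands in for a MARC field here, and this is not an Evergreen or pymarc API:

```python
def valid_856(field):
    """Check an 856 field (modeled as a dict) against the visibility
    requirements: indicator 1 = '4', indicator 2 = '0' or '1', a
    subfield u holding the URL, and at least one subfield 9 naming an
    org unit. This models the documented rules only."""
    return (
        field.get("ind1") == "4"
        and field.get("ind2") in ("0", "1")
        and bool(field.get("u"))   # URL for the electronic resource
        and bool(field.get("9"))   # org unit shortname(s), repeatable
    )

good = {"ind1": "4", "ind2": "0", "u": "https://example.org/r", "9": ["CONS"]}
bad = {"ind1": "4", "ind2": "2", "u": "https://example.org/r"}
print(valid_856(good), valid_856(bad))  # True False
```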
- -Therefore, to make a record visible in the public catalog, modify the records -using your preferred MARC editing approach to ensure the 856 field contains the -following information before loading records for electronic resources into -Evergreen: - -.856 field for electronic resources: indicators and subfields -[width="100%",options="header"] -|============================================================================= -|Attribute | Value | Note -|Indicator 1 |4 | -|Indicator 2 |0 or 1 | -|Subfield u |URL for the electronic resource | -|Subfield y |Text content of the link | -|Subfield z |Public note | Normally displayed after the link -|Subfield 9 |Organizational unit short name | The record will be visible when - a search is performed specifying this organizational unit or one of its - children. You can repeat this subfield as many times as you need. -|============================================================================= - -Once your electronic resource bibliographic records have the required -indicators and subfields for each 856 field in the record, you can proceed to -load the records using either the command-line bulk import method or the MARC -Batch Importer in the staff client. - -== Migrating your bibliographic records == -Convert your MARC21 binary records into the MARCXML format, with one record per -line. You can use the following Python script to achieve this goal; just -install the _pymarc_ library first, and adjust the values of the _input_ and -_output_ variables as needed. 
- -[source,python] ------------------------------------------------------------------------------- -#!/usr/bin/env python -# -*- coding: utf-8 -*- -import codecs -import pymarc - -input = 'records_in.mrc' -output = 'records_out.xml' - -reader = pymarc.MARCReader(open(input, 'rb'), to_unicode=True) -writer = codecs.open(output, 'w', 'utf-8') -for record in reader: - record.leader = record.leader[:9] + 'a' + record.leader[10:] - writer.write(pymarc.record_to_xml(record) + "\n") ------------------------------------------------------------------------------- - -Once you have a MARCXML file with one record per line, you can load the records -into your Evergreen system via a staging table in your database. - -. Connect to the PostgreSQL database using the _psql_ command. For example: -+ ------------------------------------------------------------------------------- -psql -U -h -d ------------------------------------------------------------------------------- -+ -. Create a staging table in the database. The staging table is a temporary - location for the raw data that you will load into the production table or - tables. Issue the following SQL statement from the _psql_ command line, - adjusting the name of the table from _staging_records_import_, if desired: -+ -[source,sql] ------------------------------------------------------------------------------- -CREATE TABLE staging_records_import (id BIGSERIAL, dest BIGINT, marc TEXT); ------------------------------------------------------------------------------- -+ -. Create a function that will insert the new records into the production table - and update the _dest_ column of the staging table. 
Adjust - "staging_records_import" to match the name of the staging table that you plan - to create when you issue the following SQL statement: -+ -[source,sql] ------------------------------------------------------------------------------- -CREATE OR REPLACE FUNCTION staging_importer() RETURNS VOID AS $$ -DECLARE stage RECORD; -BEGIN -FOR stage IN SELECT * FROM staging_records_import ORDER BY id LOOP - INSERT INTO biblio.record_entry (marc, last_xact_id) VALUES (stage.marc, 'IMPORT'); - UPDATE staging_records_import SET dest = currval('biblio.record_entry_id_seq') - WHERE id = stage.id; - END LOOP; - END; - $$ LANGUAGE plpgsql; ------------------------------------------------------------------------------- -+ -. Load the data from your MARCXML file into the staging table using the COPY - statement, adjusting for the name of the staging table and the location of - your MARCXML file: -+ -[source,sql] ------------------------------------------------------------------------------- -COPY staging_records_import (marc) FROM '/tmp/records_out.xml'; ------------------------------------------------------------------------------- -+ -. Load the data from your staging table into the production table by invoking - your staging function: -+ -[source,sql] ------------------------------------------------------------------------------- -SELECT staging_importer(); ------------------------------------------------------------------------------- - -When you leave out the _id_ value for a _BIGSERIAL_ column, the value in the -column automatically increments for each new record that you add to the table. - -Once you have loaded the records into your Evergreen system, you can search for -some known records using the staff client to confirm that the import was -successful. 
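Before running the COPY step above, it can help to confirm that the MARCXML file really is one record per line and contains no tab or backslash characters, which PostgreSQL's text-format COPY treats as column delimiters and escapes. A minimal pure-Python sketch; in practice you would pass it the lines of your records_out.xml file:

```python
def copy_safe(lines):
    """Flag lines that would break a text-format COPY load: records not
    serialized one per line, or containing tab/backslash characters
    that COPY would reinterpret."""
    problems = []
    for n, rec in enumerate(lines, start=1):
        rec = rec.rstrip("\n")
        if not rec.startswith("<record"):
            problems.append((n, "does not start with <record"))
        if "\t" in rec or "\\" in rec:
            problems.append((n, "contains tab or backslash"))
    return problems

sample = ['<record>...</record>\n', 'stray text\tvalue\n']
print(copy_safe(sample))
# [(2, 'does not start with <record'), (2, 'contains tab or backslash')]
```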
-
-Migrating your call numbers, items, and parts
-----------------------------------------------
-'Holdings', comprised of call numbers, items, and parts, are the set of
-objects that enable users to locate and potentially acquire materials from your
-library system.
-
-'Call numbers' connect libraries to bibliographic records. Each call number has a
-'label' associated with a classification scheme such as the Library of Congress
-or Dewey Decimal systems, and can optionally have either or both a label prefix
-and a label suffix. Label prefixes and suffixes do not affect the sort order of
-the label.
-
-'Copies' connect call numbers to particular instances of that resource at a
-particular library. Each item has a barcode and must exist in a particular item
-location. Other optional attributes of items include circulation modifier,
-which may affect whether that item can circulate or for how long it can
-circulate, and OPAC visibility, which controls whether that particular item
-should be visible in the public catalog.
-
-'Parts' provide more granularity for items, primarily to enable patrons to
-place holds on individual parts of a set of items. For example, an encyclopedia
-might be represented by a single bibliographic record, with a single call
-number representing the label for that encyclopedia at a given library, with 26
-items representing each letter of the alphabet, with each item mapped to a
-different part such as _A, B, C, ... Z_.
-
-To migrate this data into your Evergreen system, you will create another
-staging table in the database to hold the raw data for your materials from
-which the actual call numbers, items, and parts will be generated.
-
-Begin by connecting to the PostgreSQL database using the _psql_ command.
For -example: - ------------------------------------------------------------------------------- -psql -U -h -d ------------------------------------------------------------------------------- - -Create the staging materials table by issuing the following SQL statement: - -[source,sql] ------------------------------------------------------------------------------- -CREATE TABLE staging_materials ( - bibkey BIGINT, -- biblio.record_entry_id - callnum TEXT, -- call number label - callnum_prefix TEXT, -- call number prefix - callnum_suffix TEXT, -- call number suffix - callnum_class TEXT, -- classification scheme - create_date DATE, - location TEXT, -- shelving location code - item_type TEXT, -- circulation modifier code - owning_lib TEXT, -- org unit code - barcode TEXT, -- copy barcode - part TEXT -); ------------------------------------------------------------------------------- - -For the purposes of this example migration of call numbers, items, and parts, -we assume that you are able to create a tab-delimited file containing values -that map to the staging table properties, with one item per line. For example, -the following 5 lines demonstrate how the file could look for 5 different -items, with non-applicable attribute values represented by _\N_, and 3 of the -items connected to a single call number and bibliographic record via parts: - ------------------------------------------------------------------------------- -1 QA 76.76 A3 \N \N LC 2012-12-05 STACKS BOOK BR1 30007001122620 \N -2 GV 161 V8 Ref. Juv. 
LC 2010-11-11 KIDS DVD BR2 30007005197073 \N
-3 AE 5 E363 1984 \N \N LC 1984-01-10 REFERENCE BOOK BR1 30007006853385 A
-3 AE 5 E363 1984 \N \N LC 1984-01-10 REFERENCE BOOK BR1 30007006853393 B
-3 AE 5 E363 1984 \N \N LC 1984-01-10 REFERENCE BOOK BR1 30007006853344 C
------------------------------------------------------------------------------
-
-Once your holdings are in a tab-delimited format--which, for the purposes of
-this example, we will name _holdings.tsv_--you can import the holdings file
-into your staging table. Copy the contents of the holdings file into the
-staging table using the _COPY_ SQL statement:
-
-[source,sql]
------------------------------------------------------------------------------
-COPY staging_materials (bibkey, callnum, callnum_prefix,
-  callnum_suffix, callnum_class, create_date, location,
-  item_type, owning_lib, barcode, part) FROM 'holdings.tsv';
------------------------------------------------------------------------------
-
-Generate the item locations you need to represent your holdings:
-
-[source,sql]
------------------------------------------------------------------------------
-INSERT INTO asset.copy_location (name, owning_lib)
-  SELECT DISTINCT location, 1 FROM staging_materials
-  WHERE NOT EXISTS (
-    SELECT 1 FROM asset.copy_location
-    WHERE name = location
-  );
------------------------------------------------------------------------------
-
-Generate the circulation modifiers you need to represent your holdings:
-
-[source,sql]
------------------------------------------------------------------------------
-INSERT INTO config.circ_modifier (code, name, description, sip2_media_type)
-  SELECT DISTINCT item_type, item_type, item_type, '001'
-  FROM staging_materials
-  WHERE NOT EXISTS (
-    SELECT 1 FROM config.circ_modifier
-    WHERE item_type = code
-  );
------------------------------------------------------------------------------
-
-Generate the call number prefixes and suffixes you need to represent your
-holdings:
-
-[source,sql] ------------------------------------------------------------------------------- -INSERT INTO asset.call_number_prefix (owning_lib, label) - SELECT DISTINCT aou.id, callnum_prefix - FROM staging_materials sm - INNER JOIN actor.org_unit aou - ON aou.shortname = sm.owning_lib - WHERE NOT EXISTS ( - SELECT 1 FROM asset.call_number_prefix acnp - WHERE callnum_prefix = acnp.label - AND aou.id = acnp.owning_lib - ) AND callnum_prefix IS NOT NULL; - -INSERT INTO asset.call_number_suffix (owning_lib, label) - SELECT DISTINCT aou.id, callnum_suffix - FROM staging_materials sm - INNER JOIN actor.org_unit aou - ON aou.shortname = sm.owning_lib - WHERE NOT EXISTS ( - SELECT 1 FROM asset.call_number_suffix acns - WHERE callnum_suffix = acns.label - AND aou.id = acns.owning_lib - ) AND callnum_suffix IS NOT NULL; ------------------------------------------------------------------------------- - -Generate the call numbers for your holdings: - -[source,sql] ------------------------------------------------------------------------------- -INSERT INTO asset.call_number ( - creator, editor, record, owning_lib, label, prefix, suffix, label_class -) - SELECT DISTINCT 1, 1, bibkey, aou.id, callnum, acnp.id, acns.id, - CASE WHEN callnum_class = 'LC' THEN 1 - WHEN callnum_class = 'DEWEY' THEN 2 - END - FROM staging_materials sm - INNER JOIN actor.org_unit aou - ON aou.shortname = owning_lib - INNER JOIN asset.call_number_prefix acnp - ON COALESCE(acnp.label, '') = COALESCE(callnum_prefix, '') - INNER JOIN asset.call_number_suffix acns - ON COALESCE(acns.label, '') = COALESCE(callnum_suffix, '') -; ------------------------------------------------------------------------------- - -Generate the items for your holdings: - -[source,sql] ------------------------------------------------------------------------------- -INSERT INTO asset.copy ( - circ_lib, creator, editor, call_number, location, - loan_duration, fine_level, barcode -) - SELECT DISTINCT aou.id, 1, 1, acn.id, acl.id, 2, 
2, barcode - FROM staging_materials sm - INNER JOIN actor.org_unit aou - ON aou.shortname = sm.owning_lib - INNER JOIN asset.copy_location acl - ON acl.name = sm.location - INNER JOIN asset.call_number acn - ON acn.label = sm.callnum - WHERE acn.deleted IS FALSE -; ------------------------------------------------------------------------------- - -Generate the parts for your holdings. First, create the set of parts that are -required for each record based on your staging materials table: - -[source,sql] ------------------------------------------------------------------------------- -INSERT INTO biblio.monograph_part (record, label) - SELECT DISTINCT bibkey, part - FROM staging_materials sm - WHERE part IS NOT NULL AND NOT EXISTS ( - SELECT 1 FROM biblio.monograph_part bmp - WHERE sm.part = bmp.label - AND sm.bibkey = bmp.record - ); ------------------------------------------------------------------------------- - -Now map the parts for each record to the specific items that you added: - -[source,sql] ------------------------------------------------------------------------------- -INSERT INTO asset.copy_part_map (target_copy, part) - SELECT DISTINCT acp.id, bmp.id - FROM staging_materials sm - INNER JOIN asset.copy acp - ON acp.barcode = sm.barcode - INNER JOIN biblio.monograph_part bmp - ON bmp.record = sm.bibkey - WHERE part IS NOT NULL - AND part = bmp.label - AND acp.deleted IS FALSE - AND NOT EXISTS ( - SELECT 1 FROM asset.copy_part_map - WHERE target_copy = acp.id - AND part = bmp.id - ); ------------------------------------------------------------------------------- - -At this point, you have loaded your bibliographic records, call numbers, call -number prefixes and suffixes, items, and parts, and your records should be -visible to searches in the public catalog within the appropriate organization -unit scope. 
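
As an optional sanity check after these loads, you can compare the staged rows against the items that were actually created. The following query is not part of the migration itself; it is a hypothetical example that assumes the _staging_materials_ table and column names used above:

[source,sql]
------------------------------------------------------------------------------
-- Hypothetical check: every staged barcode should now have a matching,
-- undeleted item in asset.copy.
SELECT COUNT(*) AS missing_items
  FROM staging_materials sm
  WHERE NOT EXISTS (
    SELECT 1 FROM asset.copy acp
    WHERE acp.barcode = sm.barcode
      AND acp.deleted IS FALSE
  );
------------------------------------------------------------------------------

A result of zero means every staged row produced an item; a nonzero count points to rows worth re-examining, for example holdings whose call number did not match during the item load.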
diff --git a/docs-antora/modules/admin_initial_setup/pages/ordering_materials.adoc b/docs-antora/modules/admin_initial_setup/pages/ordering_materials.adoc deleted file mode 100644 index eac19dd257..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/ordering_materials.adoc +++ /dev/null @@ -1,232 +0,0 @@ -= Ordering materials = -:toc: - -== Introduction == - -Acquisitions allows you to order materials, track the expenditure of your -collections funds, track invoices and set up policies for manual claiming. In -this chapter, we describe how to use the most essential -functions of acquisitions in the Evergreen system. - -== When should libraries use acquisitions? == -* When you want to track spending of your collections budget. -* When you want to use Evergreen to place orders electronically with your - vendors. -* When you want to import large batches of records to quickly get your on-order - titles into the system. - -If your library simply wants to add on-order items to the catalog so that -patrons can view and place holds on titles that have not yet arrived, -acquisitions may be more than you need. Adding those on-order records via -cataloging is a simpler option that works well for this use case. - -Below are the basic administrative settings to be configured to get started -with acquisitions. At a minimum, a library must configure *Funding Sources*, -*Funds*, and *Providers* to use acquisitions. - -== Managing Funds == - -=== Funding Sources (Required) === -Funding sources allow you to specify the sources that contribute monies to your -fund(s). You can create as few or as many funding sources as you need. These -can be used to track exact amounts for accounts in your general ledger. - -Example funding sources might be: - -* A municipal allocation for your materials budget; -* A trust fund used for collections; -* A revolving account that is used to replace lost materials; -* Grant funds to be used for collections. 
- -Funding sources are not tied to fiscal or calendar years, so you can continue -to add money to the same funding source over multiple years, e.g. County -Funding. Alternatively, you can name funding sources by year, e.g. County -Funding 2010 and County Funding 2011, and apply credits each year to the -matching source. - -. To create a funding source, select *Administration -> Acquisitions Administration -> - Funding Sources*. Click the *New Funding Source* button. Give - the funding source a name, an owning library, and a code. You should also - identify the type of currency that is used for the fund. -. You must add money to the funding source before you can use it. Click the - hyperlinked name of the funding source and then click the *Apply Credit* - button. Enter the amount you wish to add. The *Note* field is optional. - -=== Funds (Required) === -Funds allow you to allocate credits toward specific purchases. They typically -are used to track spending and purchases for specific collections. Some -libraries may choose to define very broad funds for their collections (e.g. -children's materials, adult materials) while others may choose to define more -specific funds (e.g. adult non-fiction DVDs for BR1). - -If your library does not wish to track fund accounting, you can create one -large generic fund and use that fund for all of your purchases. - -. To create a fund, select *Administration -> Acquisitions Administration -> - Funds*. Click the *New Fund* button. Give the fund a name and code. -. The *Year* can either be the fiscal or calendar year for the fund. -. If you are a multi-branch library that will be ordering titles for multiple - branches, you should select the system as the owning *Org Unit*, even if this - fund will only be used for collections at a specific branch. If you are a - one-branch library or if your branches do their own ordering, you can select - the branch as the owning *Org Unit*. -. 
Select the *Currency Type* that will be used for this fund. -. You must select the *Active* checkbox to use the fund. -. Enter a *Balance Stop Percent*. The balance stop percent prevents you from - making purchases when only a specified amount of the fund remains. For example, - if you want to spend 95 percent of your funds, leaving a five percent balance - in the fund, then you would enter 95 in the field. When the fund reaches its - balance stop percent, it will appear in red when you apply funds to copies. -. Enter a *Balance Warning Percent*. The balance warning percent gives you a - warning that the fund is low. You can specify any percent. For example, if you - want to spend 90 percent of your funds and be warned when the fund has only 10 - percent of its balance remaining, then enter 90 in the field. When the fund - reaches its balance warning percent, it will appear in yellow when you apply - funds to copies. -. Check the *Propagate* box to propagate funds. When you propagate a fund, the - system will create a new fund for the following fiscal year with the same - parameters as your current fund. All of the settings transfer except for the - year and the amount of money in the fund. Propagation occurs during the fiscal - year close-out operation. -. Check the *Rollover* box if you want to roll over remaining encumbrances and - funds into the same fund next year. If you need the ability to roll over - encumbrances without rolling over funds, go to the *Library Settings Editor* - (*Administration -> Local Administration -> Library Settings Editor*) and set *Allow - funds to be rolled over without bringing the money along* to *True*. -. You must add money to the fund before you can begin using it. Click the - hyperlinked name of the fund. Click the *Create Allocation* button. Select a - *Funding Source* from which the allocation will be drawn and then enter an - amount for the allocation. The *Note* field is optional. 
- -=== Fund Tags (Optional) === -You can apply tags to funds so that you can group funds for easy reporting. For -example, suppose you have three funds for children’s materials: Children's Board Books, -Children's DVDs, and Children's CDs. Assign a fund tag of children's to each -fund. When you need to report on the amount that has been spent on all -children's materials, you can run a report on the fund tag to find total -expenditures on children's materials rather than reporting on each individual -fund. - -. To create a fund tag, select *Administration -> Acquisitions Administration -> - Fund Tags*. Click the *New Fund Tag* button. Select an owning library and - add the name for the fund tag. -. To apply a fund tag to a fund, select *Administration -> Acquisitions Administration -> - Funds*. Click on the hyperlinked name for the fund. Click the - *Tags* tab and then click the *Add Tag* button. Select the tag from the - dropdown menu. - -For convenience when propagating or rolling over a fund for a new fiscal year, -fund tags will be copied from the current fund to the new year's fund. - -== Ordering == - -=== Providers (Required) === -Providers are the vendors from whom you order titles. - -. To add a provider record, select *Administration -> Acquisitions Administration -> - Providers*. -. Enter information about the provider. At a minimum, you need to add a - *Provider Name*, *Code*, *Owner*, and *Currency*. You also need to select the - *Active* checkbox to use the provider. - -=== Distribution Formulas (Optional) === -If you are ordering for a multi-branch library system, distribution formulas -are a useful way to specify the number of items that should be distributed to -specific branches and item locations. - -. To create a distribution formula, select *Administration -> Acquisitions - Administration -> Distribution Formulas*. Click the *New Formula* button. Enter - the formula name and select the owning library. Ignore the *Skip Count* field. -. Click *New Entry*. 
Select an Owning Library from the drop-down menu. This - indicates the branch that will receive the items. -. Select a Shelving Location from the drop-down menu. -. In the Item Count field, enter the number of items that should be distributed - to that branch and copy location. You can enter the number or use the arrows on - the right side of the field. -. Keep adding entries until the distribution formula is complete. - -=== Helpful acquisitions Library Settings === -There are several acquisitions Library Settings available that will help with -acquisitions workflow. These settings can be found at *Administration -> Local -Administration -> Library Settings Editor*. - -* Default circulation modifier - Automatically applies a default circulation - modifier to all of your acquisitions items. Useful if you use a specific - circulation modifier for on-order items. -* Default copy location - Automatically applies a default item location (e.g. - On Order) to acquisitions items. -* Temporary barcode prefix - Applies a unique prefix to the barcode that is - automatically generated during the acquisitions process. -* Temporary call number prefix - Applies a unique prefix to the start of the - call number that is automatically generated during the acquisitions process. - -=== Preparing for order record loading === -If your library is planning to upload order records in a batch, you need to add -some information to your provider records so that Evergreen knows how to map -the item data contained in the order record. - -. Retrieve the record for the provider that has supplied the order records by - selecting *Administration -> Acquisitions Administration -> Providers*. Click on - the hyperlinked Provider name. -. In the top frame, add the MARC tag that contains your holdings data in the - *Holdings Tag* field (this tag can also be entered at the time you create the - provider record). -. 
To map the tag's subfields to the appropriate copy data, click the *Holding - Subfield* tab. Click the *New Holding Subfield* button and select the copy - data that you are mapping. Add the subfield that contains that data and click - *Save*. -+ -image::media/order_record_loading.png[] -+ -. If your vendor is sending other data in a MARC tag that needs to be mapped to -a field in acquisitions, you can do so by clicking the Attribute Definitions -tab. As an example, if you need to import the PO Name, you could set up an -attribute definition by adding an XPath similar to: -+ ------------------------------------------------------------------------------- -code => purchase_order -xpath => //*[@tag="962"]/*[@code="p"] -Is Identifier => false ------------------------------------------------------------------------------- -+ -where 962 is the holdings tag and p is the subfield that contains the PO Name. - -=== Preparing to send electronic orders from Evergreen === -If your library wants to transmit electronic order information to a vendor, you -will need to configure your server to use EDI. You need to install the EDI -translator and EDI scripts on your server by following the instructions in the -command line system administration manual. - -Configure your provider's EDI information by selecting *Administration -> -Acquisitions Administration -> EDI Accounts*. Click the *New Account* button. Give the -account a name in the *Label* box. - -. *Host* is the vendor-assigned FTP/SFTP/SSH hostname. -. *Username* is the vendor-assigned FTP/SFTP/SSH username. -. *Password* is the vendor-assigned FTP/SFTP/SSH password. -. *Account*: This field enables you to add a supplemental password for - entry to a remote system after login has been completed. This field is - optional for the ILS but may be required by your provider. -. *Owner* is the organizational unit that owns the EDI account. -. *Last Activity* is the date of last activity for the account. -. 
*Provider* is a link to the codes for the Provider record. -. *Path* is the path on the vendor’s server where Evergreen will deposit its - outgoing order files. -. *Incoming Directory* is the path on the vendor’s server where Evergreen - will retrieve incoming order responses and invoices. -. *Vendor Account Number* is the vendor-assigned account number. -. *Vendor Assigned Code* is usually a sub-account designation. It can be used - with or without the Vendor Account Number. - -You now need to add this *EDI Account* and the *SAN* code to the provider's record. - -. Select *Administration -> Acquisitions Administration -> Providers*. -. Click the hyperlinked Provider name. -. Select the account you just created in the *EDI Default* field. -. Add the vendor-provided SAN code to the *SAN* field. - -The last step is to add your library's SAN code to Evergreen. - -. Select *Administration -> Server Administration -> Organizational Units*. -. Select your library from the organizational hierarchy in the left pane. -. Click the *Addresses* tab and add your library's SAN code to the *SAN* field. diff --git a/docs-antora/modules/admin_initial_setup/pages/troubleshooting_tpac.adoc b/docs-antora/modules/admin_initial_setup/pages/troubleshooting_tpac.adoc deleted file mode 100644 index fa2530e0ff..0000000000 --- a/docs-antora/modules/admin_initial_setup/pages/troubleshooting_tpac.adoc +++ /dev/null @@ -1,19 +0,0 @@ -= Troubleshooting TPAC errors = -:toc: - -If there is a problem such as a TT syntax error, it generally shows up as an -ugly server failure page. If you check the Apache error logs, you will probably -find some solid clues about the reason for the failure. In the -following example, the error message identifies the file in which the problem -occurred as well as the relevant line numbers. 
- -Example error message in Apache error logs: - ----- -bash# grep "template error" /var/log/apache2/error_log -[Tue Dec 06 02:12:09 2011] [warn] [client 127.0.0.1] egweb: template error: - file error - parse error - opac/parts/record/summary.tt2 line 112-121: - unexpected token (!=)\n [% last_cn = 0;\n FOR copy_info IN - ctx.copies;\n callnum = copy_info.call_number_label;\n ----- - diff --git a/docs-antora/modules/api/_attributes.adoc b/docs-antora/modules/api/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/api/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/api/nav.adoc b/docs-antora/modules/api/nav.adoc deleted file mode 100644 index 2ce92424f3..0000000000 --- a/docs-antora/modules/api/nav.adoc +++ /dev/null @@ -1,5 +0,0 @@ -* xref:api:introduction.adoc[Getting Data from Evergreen] -** xref:development:data_supercat.adoc[Using Supercat] -** xref:development:data_unapi.adoc[Using UnAPI] -** xref:development:data_opensearch.adoc[Using Opensearch as a developer] - diff --git a/docs-antora/modules/api/pages/_attributes.adoc b/docs-antora/modules/api/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/api/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/api/pages/introduction.adoc b/docs-antora/modules/api/pages/introduction.adoc deleted file mode 100644 index 1eb3429a25..0000000000 --- a/docs-antora/modules/api/pages/introduction.adoc +++ /dev/null @@ -1,6 +0,0 @@ -= Introduction = - -You may be interested in re-using data from your Evergreen installation in -another application. This part describes several methods to get the data -you need. 
- diff --git a/docs-antora/modules/appendix/_attributes.adoc b/docs-antora/modules/appendix/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/appendix/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/appendix/nav.adoc b/docs-antora/modules/appendix/nav.adoc deleted file mode 100644 index 88de8ad056..0000000000 --- a/docs-antora/modules/appendix/nav.adoc +++ /dev/null @@ -1,4 +0,0 @@ -* xref:shared:attributions.adoc[Appendix A. Attributions] -* xref:shared:licensing.adoc[Appendix B. Licensing] -* xref:appendix:glossary.adoc[Glossary] -* xref:shared:index.adoc[Index] diff --git a/docs-antora/modules/appendix/pages/_attributes.adoc b/docs-antora/modules/appendix/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/appendix/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/appendix/pages/glossary.adoc b/docs-antora/modules/appendix/pages/glossary.adoc deleted file mode 100644 index a95e4bfe16..0000000000 --- a/docs-antora/modules/appendix/pages/glossary.adoc +++ /dev/null @@ -1,252 +0,0 @@ -[glossary] -Evergreen Glossary -================== - -xref:A[A] xref:B[B] xref:C[C] xref:D[D] xref:E[E] xref:F[F] xref:G[G] xref:H[H] xref:I[I] xref:J[J] xref:K[K] xref:L[L] xref:M[M] xref:N[N] xref:O[O] xref:P[P] xref:Q[Q] xref:R[R] xref:S[S] xref:T[T] xref:U[U] xref:V[V] xref:W[W] xref:X[X] xref:Y[Y] xref:Z[Z] - -[glossary] -[[A]]AACR2 (Anglo-American Cataloguing Rules, Second Edition):: - AACR2 is a set of rules for the descriptive cataloging of various types of resources. http://www.aacr2.org/ -Acquisitions:: - Processes related to ordering materials and managing expenditures. 
-Age Protection:: - Allows libraries to prevent holds on new books (on an item-by-item basis) from outside the owning library's branch or system for a designated amount of time. -Apache:: - Open-source web server software used to serve both static content and dynamic web pages in a secure and reliable way. More information is available at http://apache.org. -Authority Record:: - Records used to control the contents of MARC fields. -[[B]]Balance stop percent:: - A setting in acquisitions that prevents you from making purchases when only a specified amount of the fund remains. -Barcode:: - The code/number attached to the item. This is not the database ID. Barcodes are added to items to facilitate the checking in and out of an item. Barcodes can be changed as needed. Physical barcodes that can be placed on items can follow several different barcode symbologies. -Bibliographic record:: - The record that contains data about a work, such as title, author and copyright date. -Booking:: - Processes relating to reserving cataloged and non-bibliographic items. -Brick:: - A brick is a unit consisting of one or more servers. It refers to a set of servers with ejabberd, Apache, and all applicable Evergreen services. It is possible to run all the software on a single server, creating a “single server brick.” Typically, larger installations will have more than one such brick and, hence, be more robust. -Buckets:: - This is a container of items. See also Record Buckets and Item Buckets. -[[C]]Call number:: - An item's call number is a string of letters and/or numbers that works like map coordinates to describe where in a library a particular item "lives." -Catalog:: - The database of titles and objects. -Cataloging:: - The process of adding materials to be circulated to the system. -Check-in:: - The process of returning an item. -Check-out:: - The process of loaning an item to a patron. -Circulation:: - The process of loaning an item to an individual. 
-Circulating library:: - The library which has checked out the item. -Circulation library:: - The library which is the home of the item. -Circulation limit sets:: - Refines circulation policies by limiting the number of items that users can check out. -Circulation modifiers:: - Circulation modifiers pull together Loan Duration, Renewal Limit, Fine Level, Max Fine, and Profile Permission Group to create circulation rules for different types of materials. Circulation Modifiers are also used to determine Hold Policies. -Cloud Computing:: - The use of a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or computer. Terms such as Software as a Service (SaaS) refer to these kinds of systems. ILS vendors offering hosting, where they manage the servers used by the ILS and provide access via the internet, are an example of cloud computing. -Commit:: - To make changes to the software code permanent. In open source software development, the ability to commit is usually limited to a core group. -Community:: - Community in the open source world of software development and use refers to the users and developers who work in concert to communicate, collaborate, and develop the software. -Compiled:: - Compiled software has been translated into machine code for use. Compiled software usually targets a specific computer architecture. The code cannot be read by humans. -Consortium:: - A consortium is an organization of two or more individuals, companies, libraries, consortia, etc. formed to undertake an enterprise beyond the resources of any one member. -Consortial Library System (CLS):: - An ILS designed to run a consortium. A CLS is designed for resource sharing between all members of the consortium, and it provides a union catalog for all items in the consortium. 
-[[copy]]Copy:: - see <<item,Item>> -[[D]]Default Search Library:: - The default search library setting determines what library is searched from the advanced search screen and portal page by default. Manual selection of a search library will override it. One recommendation is to set the search library to the highest point you would normally want to search. -Distribution formulas:: - Used to specify the number of copies that should be distributed to specific branches and item locations in Acquisitions. -Due date:: - The due date is the day on or before which an item must be returned to the library in order to avoid being charged an overdue fine. -[[E]]ejabberd:: - ejabberd stands for Erlang Jabber Daemon. This is the software that runs <<XMPP,XMPP>>. ejabberd is used to exchange data between servers. -Electronic data interchange (EDI):: - Transmission of data between organizations using electronic means. This is used for Acquisitions. -Evergreen:: - Evergreen is an open source ILS designed to handle the processing of a geographically dispersed, resource-sharing library network. -[[F]]FIFO (First In First Out):: - In a FIFO environment, holds are filled in the order that they are placed. -FUD (Fear, Uncertainty, Doubt):: - FUD is a marketing strategy that tries to instill fear, uncertainty, and/or doubt about a competitor's product. -Fund tags:: - Tags used in acquisitions to allow you to group Funds. -Funding sources:: - Sources of the monies to fund acquisitions of materials. -Funds:: - Allocations of money used for purchases. -FRBR (Functional Requirements for Bibliographic Records):: - See https://www.loc.gov/cds/downloads/FRBR.PDF[Library of Congress FRBR documentation] -[[G]]Git:: - Git is version control software for tracking changes in the code. It is designed to work with multiple developers. -GNU:: - GNU is a recursive acronym for "GNU's Not Unix". GNU is an open source Unix-like operating system. 
-GNU GPL version 2 (GNU General Public License version 2):: - GNU GPL version 2 is the license under which Evergreen is licensed. GNU GPL version 2 is a copyleft license, which means that derivative works must be open source and distributed under the same license terms. See https://www.gnu.org/licenses/old-licenses/gpl-2.0.html for complete license information. -[[H]]Hatch:: - An additional program that is installed as an extension of your browser to extend printing functionality with Evergreen. -Hold:: - The exclusive right for a patron to check out a specific item. -Hold boundaries:: - Define which organizational units are available to fill specific holds. -Holdings import profile:: - Identifies the <<IIA,Import item attributes>> definition. -Holding subfield:: - Used in the acquisitions module to map subfields to the appropriate item data. -[[I]]ICL (Inter-Consortium Loans):: - Inter-Consortium Loans are like ILLs, except that the loan happens entirely within the consortium. -[[ILS]]ILS (Integrated Library System):: - The Integrated Library System is a set of applications which perform the business and technical aspects of library management, including but not limited to acquisitions, cataloging, circulation, and booking. -ILL (Inter-Library Loan):: - Inter-Library Loan is the process by which one library borrows materials for a patron from another library. -[[IIA]]Import item attributes:: - Used to map the data in your holdings tag to fields in the item record during a MARC import. -Insufficient quality fall-through profile:: - A back-up merge profile to be used for importing if an incoming record does not meet the standards of the minimum quality ratio. -ISBN (International Standard Book Number):: - The ISBN is a publisher product number that has been used in the book supply industry since 1968. A published book that is a separate product gets its own ISBN. ISBNs are either 10 digits or 13 digits long. 
They may contain information on the country of publication, the publisher, title, volume or edition of a title. -ISSN (International Standard Serial Number):: - The International Standard Serial Number is a unique 8-digit number assigned by the International Serials Data System to identify a specific serial title. -[[item]]Item:: - The actual item. -Item barcode:: - Item barcodes uniquely identify each specific item entered into the Catalog. -Item Buckets:: - This is a container of individual items. -Item Status:: - Item Status allows you to see the status of an item without having to go to the actual Title Record. Item status is an integral part of how Evergreen works. -[[J]][[jabber]]Jabber:: - The communications protocol used for client-server message passing within Evergreen. Now known as <<XMPP,XMPP>>, it was originally named "Jabber." -Juvenile flag:: - User setting used to specify if a user is a juvenile user for circulation purposes. -[[K]]KPAC (Kids' OPAC):: - Alternate version of the Template Toolkit OPAC that is kid-friendly. -[[L]]LaunchPad:: - Launchpad is an open source suite of tools that help people and teams to work together on software projects. Launchpad brings together bug reports, wishlist ideas, translations, and blueprints for future development of Evergreen. https://launchpad.net/evergreen -LCCN (Library of Congress Control Number):: - The LCCN is a system of numbering catalog records at the Library of Congress. -LMS (Library Management System):: - see <<ILS,ILS (Integrated Library System)>> -Loan duration:: - Loan duration (also sometimes referred to as "loan period") is the length of time a given type of material can circulate. -[[M]]MARC (Machine Readable Cataloging):: - The MARC formats are standards for the representation and communication of bibliographic and related information in machine-readable form. -MARC batch export:: - Mass exporting of MARC records out of a library system. -MARC batch import:: - Mass importing of MARC records into a library system. 
-MARCXML:: - Framework for working with MARC data in an XML environment. -Match score:: - Indicates the relative importance of that match point as Evergreen evaluates an incoming record against an existing record. -Minimum quality ratio:: - Used to set the acceptable level of quality for a record to be imported. -[[N]]Non-Cataloged:: - Items that have not been cataloged. -[[O]]OPAC (Online Public Access Catalog):: - An OPAC is an online interface to the database of a library's holdings, used to find resources in their collections. It is typically searchable by keyword, title, author, subject, or call number. The public view of the catalog. -OpenSRF (Open Scalable Request Framework):: - Acronym for Open Scalable Request Framework (pronounced 'open surf'). An enterprise-class service request framework. Its purpose is to serve as a robust message routing network upon which one may build complex, scalable applications. To that end, OpenSRF attempts to be invisible to the application developer, while providing transparent load balancing and failover with minimal overhead. -Organizational units (Org Unit):: - Organizational Units are the specific instances of the organization unit types that make up your library's hierarchy. -Organization unit type:: - The organization types in the hierarchy of a library system. -Overlay/merge profiles:: - During a MARC import this is used to identify which fields should be replaced, which should be preserved, and which should be added to the record. -Owning library:: - The library which has purchased a particular item and created the volume and item records. -[[P]]Parent organizational unit:: - An organizational unit one level above whose policies may be inherited by its child units. -Parts:: - Provide more granularity for copies, primarily to enable patrons to place holds on individual parts of a set of items. -Patron:: - A user of the ILS. Patrons in Evergreen can be both staff and public users. 
-Patron barcode / library card number:: - Patrons are uniquely identified by their library card barcode number. -Permission Groups:: - A grouping of permissions granted to a group of individuals, e.g. patrons, cataloging, circulation, administration. Permission Groups also set the depth and grantability of permissions. -Pickup library:: - Library designated as the location where requested material is to be picked up. -PostgreSQL:: - A popular open-source object-relational database management system that underpins Evergreen software. -Preferred Library:: - The library that is used to show items and URIs regardless of the library searched. It is recommended to set this to your Workstation library so that local copies always show up first in search results. -Print Templates:: - Templates that Evergreen uses to print various receipts and tables. -Printer Settings:: - Settings in Evergreen for selected printers. This is a Hatch functionality. -Propagate funds:: - Create a new fund for the following fiscal year with the same parameters as your current fund. -Providers:: - Vendors from whom you order your materials. Set in the Acquisitions module. -Purchase Order (PO):: - A document issued by a buyer to a vendor, indicating types, quantities, and prices of materials. -[[Q]]Quality metrics:: - Provide a mechanism for Evergreen to measure the quality of records and to make importing decisions based on quality. -[[R]]RDA (Resource Description & Access):: - RDA is a set of cataloging standards and guidelines based on FRBR and FRAD. RDA is the successor to AACR2. http://rdatoolkit.org/ -Record Bucket:: - This is a container of Title Records. -Record match sets:: - When importing records, this identifies how Evergreen should match incoming records to existing records in the system. -Recurring fine:: - Recurring Fine is the official term for daily or other regularly accruing overdue fines. -Register Patron:: - The process of adding a Patron account within Evergreen. 
-Rollover:: - Used to roll over remaining encumbrances and funds into the same fund the following year. -[[S]]SAN (Standard Address Number):: - SAN is an identification code for electronic communication within the publishing industry. SANs uniquely identify an address or location. -Shelving location:: - Shelving location is the area within the library where a given item is shelved. -SIP (Standard Interchange Protocol):: - SIP is a communications protocol used within Evergreen for transferring data to and from other third-party devices, such as RFID and barcode scanners that handle patron and library material information. Version 2.0 (also known as "SIP2") is the current standard. It was originally developed by the 3M Corporation. -[[SRU]]SRU (Search & Retrieve URL):: - Acronym for Search & Retrieve URL Service. SRU is a search protocol used in web search and retrieval. It expresses queries in Contextual Query Language (CQL) and transmits them as a URL, returning XML data as if it were a web page. -Staff client:: - The graphical user interface used by library workers to interact with the Evergreen system. Staff use the staff client to access administration, acquisitions, circulation, and cataloging functions. -Standing penalties:: - Serve as alerts and blocks when patron records have met certain criteria, commonly excessive overdue materials or fines; standing penalty blocks will prevent circulation and hold transactions. -Statistical categories:: - Allow libraries to associate locally interesting data with patrons and holdings. Also known as stat cats. -[[T]]Template Toolkit (TT):: - A template processing system written in Perl. -TPAC:: - Evergreen's Template Toolkit-based OPAC: the web-based public interface in Evergreen, written using functionality from the Template Toolkit. -[[U]]URI (Uniform Resource Identifier):: - A URI is a string of characters that identifies a logical or physical resource.
Examples are URLs and URNs. -URL (Uniform Resource Locator):: - This is the web address of a resource. -URN (Uniform Resource Name):: - This is a standard identifier for a resource. Examples of URNs are ISBN, ISSN, and UPC. -UPC (Universal Product Code):: - The UPC is a number uniquely assigned to an item by the manufacturer. -User Activity Type:: - Different types of activities users perform in Evergreen. Examples: logging in, verifying an account. -[[V]]Vandelay:: - The original name of the MARC Batch Import/Export tool. -[[W]]Wiki:: - The Evergreen Wiki can be found at https://wiki.evergreen-ils.org. The Evergreen Wiki is a knowledge base of information on Evergreen. -Workstation:: - The unique name associated with a specific computer and Org Unit. -[[X]]XML (eXtensible Markup Language):: - Acronym for eXtensible Markup Language, a subset of SGML. XML is a set of rules for encoding information in a way that is both human-readable and machine-readable. It is primarily used to define documents but can also be used to define arbitrary data structures. It was originally defined by the World Wide Web Consortium (W3C). -[[XMPP]]XMPP (Extensible Messaging and Presence Protocol):: - The open-standard communications protocol (based on XML) used for client-server message passing within Evergreen. It supports the concept of a consistent domain of message types that flow between software applications, possibly on different operating systems and architectures. More information is available at http://xmpp.org. - See Also: <>. -xpath:: - The XML Path Language, a query language based on a tree representation of an XML document. It is used to programmatically select nodes from an XML document and to do minor computation involving strings, numbers, and Boolean values. It allows you to identify parts of the XML document tree, to navigate around the tree, and to uniquely select nodes. The current version is "XPath 2.0". It was originally defined by the World Wide Web Consortium (W3C).
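The XPath entry above can be illustrated with a short sketch. This uses Python's standard `xml.etree.ElementTree`, which supports only a limited XPath subset (full XPath 2.0 requires a dedicated library); the element names and statuses are invented for the example.

```python
import xml.etree.ElementTree as ET

# A small, made-up XML tree to query against.
doc = ET.fromstring(
    "<library>"
    "<item status='available'><title>First</title></item>"
    "<item status='checked_out'><title>Second</title></item>"
    "<item status='available'><title>Third</title></item>"
    "</library>"
)

# Select nodes by navigating the tree and filtering on an attribute predicate.
available = doc.findall("item[@status='available']")
titles = [it.find("title").text for it in available]
print(titles)  # ['First', 'Third']
```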
-[[Y]]YAOUS:: - Yet Another Organization Unit Setting -[[Z]]Z39.50 :: - An international standard client/server protocol for communication between computer systems, primarily library and information related systems. - See Also: <> - diff --git a/docs-antora/modules/cataloging/_attributes.adoc b/docs-antora/modules/cataloging/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/cataloging/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records1.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records1.jpg deleted file mode 100644 index ae0962afbb..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records1.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records10.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records10.jpg deleted file mode 100644 index 089c9a1ed2..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records10.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records12.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records12.jpg deleted file mode 100644 index a8c0d70d4b..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records14.jpg 
b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records14.jpg deleted file mode 100644 index f192db2533..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records14.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records15.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records15.jpg deleted file mode 100644 index 06fbf2bd89..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records15.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records2.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records2.jpg deleted file mode 100644 index b022f873ae..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records2.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records3.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records3.jpg deleted file mode 100644 index 124712c72e..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records3.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records4.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records4.jpg deleted file mode 100644 index 4d556a1bfb..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records4.jpg and /dev/null differ diff --git 
a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records5.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records5.jpg deleted file mode 100644 index 3410561afa..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records5.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records6.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records6.jpg deleted file mode 100644 index b0dfc8a7a1..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records6.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records7.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records7.jpg deleted file mode 100644 index e26303ea81..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records7.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records8.jpg b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records8.jpg deleted file mode 100644 index c8cf321bc2..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records8.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/batch_import_profile.png b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/batch_import_profile.png deleted file mode 100644 index 748d36b285..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/batch_import_profile.png and /dev/null differ diff --git 
a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/marc_batch_import_acq_overlay.png b/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/marc_batch_import_acq_overlay.png deleted file mode 100644 index dfca5c8a9f..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/batch_importing_MARC/marc_batch_import_acq_overlay.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records1.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records1.jpg deleted file mode 100644 index ae0962afbb..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records1.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records10.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records10.jpg deleted file mode 100644 index 089c9a1ed2..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records10.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records12.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records12.jpg deleted file mode 100644 index a8c0d70d4b..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records14.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records14.jpg deleted file mode 100644 index f192db2533..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records14.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records15.jpg 
b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records15.jpg deleted file mode 100644 index 06fbf2bd89..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records15.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records2.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records2.jpg deleted file mode 100644 index b022f873ae..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records2.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records3.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records3.jpg deleted file mode 100644 index 124712c72e..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records3.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records4.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records4.jpg deleted file mode 100644 index 4d556a1bfb..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records4.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records5.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records5.jpg deleted file mode 100644 index 3410561afa..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records5.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records6.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records6.jpg deleted file mode 100644 index b0dfc8a7a1..0000000000 Binary files 
a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records6.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records7.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records7.jpg deleted file mode 100644 index e26303ea81..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records7.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records8.jpg b/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records8.jpg deleted file mode 100644 index c8cf321bc2..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records8.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Holdings_Editor_Defaults_Tab.png b/docs-antora/modules/cataloging/assets/images/media/Holdings_Editor_Defaults_Tab.png deleted file mode 100644 index 92d08f500a..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Holdings_Editor_Defaults_Tab.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Holdings_Editor_Hide_Display_Defaults.png b/docs-antora/modules/cataloging/assets/images/media/Holdings_Editor_Hide_Display_Defaults.png deleted file mode 100644 index 45c3d774a3..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Holdings_Editor_Hide_Display_Defaults.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Link_Checker1.jpg b/docs-antora/modules/cataloging/assets/images/media/Link_Checker1.jpg deleted file mode 100644 index b703b6336f..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Link_Checker1.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Link_Checker2.jpg 
b/docs-antora/modules/cataloging/assets/images/media/Link_Checker2.jpg deleted file mode 100644 index 6477f42090..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Link_Checker2.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Link_Checker6.jpg b/docs-antora/modules/cataloging/assets/images/media/Link_Checker6.jpg deleted file mode 100644 index e4222a1e7e..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Link_Checker6.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches1.jpg b/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches1.jpg deleted file mode 100644 index 55f9cb0ece..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches1.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches2.jpg b/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches2.jpg deleted file mode 100644 index 707a43df39..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches2.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches3.jpg b/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches3.jpg deleted file mode 100644 index f9a64356b6..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches3.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches4.jpg b/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches4.jpg deleted file mode 100644 index 6bce86ebb9..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Locate_Z39_50_Matches4.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/MARC_Tag_Tables_Detail.PNG 
b/docs-antora/modules/cataloging/assets/images/media/MARC_Tag_Tables_Detail.PNG deleted file mode 100644 index 21f192cc00..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/MARC_Tag_Tables_Detail.PNG and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/MARC_Tag_Tables_Grid.PNG b/docs-antora/modules/cataloging/assets/images/media/MARC_Tag_Tables_Grid.PNG deleted file mode 100644 index 28a2a72319..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/MARC_Tag_Tables_Grid.PNG and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import1.jpg b/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import1.jpg deleted file mode 100644 index 33e96b56e1..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import1.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import2.jpg b/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import2.jpg deleted file mode 100644 index 32da4092cc..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import2.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import3.jpg b/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import3.jpg deleted file mode 100644 index e6b563b620..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import3.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import4.jpg b/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import4.jpg deleted 
file mode 100644 index bb6049bb87..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import4.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import5.jpg b/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import5.jpg deleted file mode 100644 index 96c2169d28..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import5.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import6.jpg b/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import6.jpg deleted file mode 100644 index 37b9a8ed15..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import6.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/batch_import_profile.png b/docs-antora/modules/cataloging/assets/images/media/batch_import_profile.png deleted file mode 100644 index 748d36b285..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/batch_import_profile.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/conj10.jpg b/docs-antora/modules/cataloging/assets/images/media/conj10.jpg deleted file mode 100644 index 1365a92bd5..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/conj10.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/conj2.jpg b/docs-antora/modules/cataloging/assets/images/media/conj2.jpg deleted file mode 100644 index d5d05ce9fa..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/conj2.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/conj3.jpg 
b/docs-antora/modules/cataloging/assets/images/media/conj3.jpg deleted file mode 100644 index 75c8d02ff0..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/conj3.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/conj4.jpg b/docs-antora/modules/cataloging/assets/images/media/conj4.jpg deleted file mode 100644 index 9006f354fc..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/conj4.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/conj5.jpg b/docs-antora/modules/cataloging/assets/images/media/conj5.jpg deleted file mode 100644 index 7e3b8f7b35..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/conj5.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/conjoined_menu_markfor.png b/docs-antora/modules/cataloging/assets/images/media/conjoined_menu_markfor.png deleted file mode 100644 index eccec72bba..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/conjoined_menu_markfor.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/conjoined_opac.png b/docs-antora/modules/cataloging/assets/images/media/conjoined_opac.png deleted file mode 100644 index 9e1958f691..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/conjoined_opac.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-2.png deleted file mode 100644 index b602dfe619..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-1.png deleted file mode 100644 index 0945569bce..0000000000 Binary files 
a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-2.png deleted file mode 100644 index 6f7d7e0fab..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-3.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-3.png deleted file mode 100644 index 00906f2d1a..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-cat-3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-1.png deleted file mode 100644 index 226c5dca51..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-copy-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-copy-1.png deleted file mode 100644 index ec0d88c438..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-copy-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-copy-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-copy-2.png deleted file mode 100644 index f8cfa1e535..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-delete-copy-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-1.png deleted file mode 100644 index 9d2292c8ea..0000000000 Binary 
files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-2.png deleted file mode 100644 index 66797899d7..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-1.png deleted file mode 100644 index c4e5b75a58..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-2.png deleted file mode 100644 index db7c231cf8..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-3.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-3.png deleted file mode 100644 index 0b1b9b768c..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-edit-copy-3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-1.png deleted file mode 100644 index 4164c8bd83..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-2.png deleted file mode 100644 index 78979d509b..0000000000 
Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-3.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-3.png deleted file mode 100644 index 304e276150..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-new-3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-1.png deleted file mode 100644 index 107284cd3a..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-2.png deleted file mode 100644 index 4d476709bf..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-3.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-3.png deleted file mode 100644 index 66b1e230a3..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-4.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-4.png deleted file mode 100644 index 5e709531e4..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-4.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-5.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-5.png deleted file mode 100644 index 79dd011f8b..0000000000 
Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-pending-5.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-1.png deleted file mode 100644 index 3129703e85..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-2.png deleted file mode 100644 index c2a1d33d80..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-3.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-3.png deleted file mode 100644 index 606d788fe4..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-remove-3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-request-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-request-1.png deleted file mode 100644 index 26527f5b28..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-request-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-request-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-request-2.png deleted file mode 100644 index 2c8a4a9549..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-request-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-1.png deleted file mode 100644 index 
d1450410ec..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-2.png deleted file mode 100644 index 36c7d65c70..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-3.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-3.png deleted file mode 100644 index 211b3cd0fc..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-4.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-4.png deleted file mode 100644 index ecf534ddbf..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-share-4.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-1.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-1.png deleted file mode 100644 index 239d41000f..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-2.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-2.png deleted file mode 100644 index 0873cc5240..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-3.png b/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-3.png deleted file mode 100644 
index 75f116c846..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy-bucket-transfer-3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copy_edit_link_1.jpg b/docs-antora/modules/cataloging/assets/images/media/copy_edit_link_1.jpg deleted file mode 100644 index 46094241d1..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copy_edit_link_1.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copytags10.png b/docs-antora/modules/cataloging/assets/images/media/copytags10.png deleted file mode 100644 index 3e78846c9e..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copytags10.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copytags11.png b/docs-antora/modules/cataloging/assets/images/media/copytags11.png deleted file mode 100644 index ddbccd342d..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copytags11.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copytags7.PNG b/docs-antora/modules/cataloging/assets/images/media/copytags7.PNG deleted file mode 100644 index 313997d07b..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copytags7.PNG and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/copytags9.PNG b/docs-antora/modules/cataloging/assets/images/media/copytags9.PNG deleted file mode 100644 index 9fd71405cd..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/copytags9.PNG and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/ffrc1_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/ffrc1_2.12.jpg deleted file mode 100644 index 15a3d2a2ae..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/ffrc1_2.12.jpg and /dev/null differ diff --git 
a/docs-antora/modules/cataloging/assets/images/media/ffrc2_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/ffrc2_2.12.jpg deleted file mode 100644 index 5be5a3f7f1..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/ffrc2_2.12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/ffrc3_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/ffrc3_2.12.jpg deleted file mode 100644 index 3d049ba700..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/ffrc3_2.12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/item_tag_button.png b/docs-antora/modules/cataloging/assets/images/media/item_tag_button.png deleted file mode 100644 index 384e1ab7e3..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/item_tag_button.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/manage_item_tags.png b/docs-antora/modules/cataloging/assets/images/media/manage_item_tags.png deleted file mode 100644 index f3d67e1a09..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/manage_item_tags.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/manage_parts_menu.jpg b/docs-antora/modules/cataloging/assets/images/media/manage_parts_menu.jpg deleted file mode 100644 index 0982e3ea23..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/manage_parts_menu.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/manage_parts_opac.png b/docs-antora/modules/cataloging/assets/images/media/manage_parts_opac.png deleted file mode 100644 index 6e0b3d2b62..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/manage_parts_opac.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marc_batch_import_acq_overlay.png 
b/docs-antora/modules/cataloging/assets/images/media/marc_batch_import_acq_overlay.png deleted file mode 100644 index dfca5c8a9f..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marc_batch_import_acq_overlay.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marc_batch_import_popup.png b/docs-antora/modules/cataloging/assets/images/media/marc_batch_import_popup.png deleted file mode 100644 index d504fbe760..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marc_batch_import_popup.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marc_delete_record_3_3.png b/docs-antora/modules/cataloging/assets/images/media/marc_delete_record_3_3.png deleted file mode 100644 index 655d7a0d6c..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marc_delete_record_3_3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marcoverlay1.png b/docs-antora/modules/cataloging/assets/images/media/marcoverlay1.png deleted file mode 100644 index 558397987f..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marcoverlay1.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marcoverlay2.png b/docs-antora/modules/cataloging/assets/images/media/marcoverlay2.png deleted file mode 100644 index 3f124e30fc..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marcoverlay2.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marcoverlay3.png b/docs-antora/modules/cataloging/assets/images/media/marcoverlay3.png deleted file mode 100644 index 0965353a4f..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marcoverlay3.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marcoverlay4.png 
b/docs-antora/modules/cataloging/assets/images/media/marcoverlay4.png deleted file mode 100644 index 68af923002..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marcoverlay4.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marcoverlay5.png b/docs-antora/modules/cataloging/assets/images/media/marcoverlay5.png deleted file mode 100644 index 9271bcdcc8..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marcoverlay5.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marcoverlay6.png b/docs-antora/modules/cataloging/assets/images/media/marcoverlay6.png deleted file mode 100644 index 96cc4d4cca..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marcoverlay6.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/marcoverlay7.png b/docs-antora/modules/cataloging/assets/images/media/marcoverlay7.png deleted file mode 100644 index 69c105930b..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/marcoverlay7.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/merge_tracking.png b/docs-antora/modules/cataloging/assets/images/media/merge_tracking.png deleted file mode 100644 index fb6621b36e..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/merge_tracking.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/monograph_parts2.jpg b/docs-antora/modules/cataloging/assets/images/media/monograph_parts2.jpg deleted file mode 100644 index 0e43663a53..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/monograph_parts2.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/monograph_parts3.jpg b/docs-antora/modules/cataloging/assets/images/media/monograph_parts3.jpg deleted file mode 100644 index 
4fad88fc19..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/monograph_parts3.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/monograph_parts4.jpg b/docs-antora/modules/cataloging/assets/images/media/monograph_parts4.jpg deleted file mode 100644 index 08e5747734..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/monograph_parts4.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/monograph_parts5.jpg b/docs-antora/modules/cataloging/assets/images/media/monograph_parts5.jpg deleted file mode 100644 index a482ff2d8d..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/monograph_parts5.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/pcw1_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/pcw1_2.12.jpg deleted file mode 100644 index 55e9d8fb99..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/pcw1_2.12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/pcw2_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/pcw2_2.12.jpg deleted file mode 100644 index 89649c1b5a..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/pcw2_2.12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/pcw3_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/pcw3_2.12.jpg deleted file mode 100644 index 25a3c53e9b..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/pcw3_2.12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/pcw4_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/pcw4_2.12.jpg deleted file mode 100644 index cc3f29f49a..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/pcw4_2.12.jpg and /dev/null differ diff --git 
a/docs-antora/modules/cataloging/assets/images/media/pcw5_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/pcw5_2.12.jpg deleted file mode 100644 index 88a8687393..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/pcw5_2.12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/pcw6_2.12.jpg b/docs-antora/modules/cataloging/assets/images/media/pcw6_2.12.jpg deleted file mode 100644 index ae4d27d813..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/pcw6_2.12.jpg and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/remove_item_tag.png b/docs-antora/modules/cataloging/assets/images/media/remove_item_tag.png deleted file mode 100644 index 2323bc44e6..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/remove_item_tag.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/assets/images/media/request_from_item_status.png b/docs-antora/modules/cataloging/assets/images/media/request_from_item_status.png deleted file mode 100644 index a544b27bdd..0000000000 Binary files a/docs-antora/modules/cataloging/assets/images/media/request_from_item_status.png and /dev/null differ diff --git a/docs-antora/modules/cataloging/nav.adoc b/docs-antora/modules/cataloging/nav.adoc deleted file mode 100644 index 5ce9bc9b04..0000000000 --- a/docs-antora/modules/cataloging/nav.adoc +++ /dev/null @@ -1,20 +0,0 @@ -* xref:cataloging:introduction.adoc[Cataloging] -** xref:cataloging:copy-buckets_web_client.adoc[Item Buckets] -** xref:cataloging:item_tags_cataloging.adoc[Item Tags] -** xref:cataloging:MARC_Editor.adoc[Working with the MARC Editor] -** xref:cataloging:record_buckets.adoc[Record Buckets] -** xref:admin:staff_client-return_to_results_from_marc.adoc[Return to Search Results from MARC Record] -** xref:cataloging:batch_importing_MARC.adoc[Batch Importing MARC Records] -** 
xref:cataloging:overlay_record_3950_import.adoc[Overlay Existing Catalog Record via Z39.50 Import] -** xref:cataloging:z39.50_search_enhancements.adoc[Z39.50 Search Enhancements] -** xref:cataloging:monograph_parts.adoc[Monograph Parts] -** xref:cataloging:conjoined_items.adoc[Conjoined Items] -** xref:cataloging:cataloging_electronic_resources.adoc[Cataloging Electronic Resources — Finding Them in Catalog Searches] -** xref:cataloging:item_status.adoc[Using the Item Status interface] -** xref:cataloging:volcopy_editor.adoc[Using the Holdings Editor] -** xref:cataloging:MARC_batch_edit.adoc[MARC Batch Edit] -** xref:cataloging:authorities.adoc[Managing Authorities] -** xref:cataloging:link_checker.adoc[Link Checker] -** xref:admin:schema_bibliographic.adoc[Notes about the Bibliographic Schema in the Database] -** xref:admin:marc_templates.adoc[MARC Templates] - diff --git a/docs-antora/modules/cataloging/pages/MARC_Editor.adoc b/docs-antora/modules/cataloging/pages/MARC_Editor.adoc deleted file mode 100644 index 2ec31e0ed0..0000000000 --- a/docs-antora/modules/cataloging/pages/MARC_Editor.adoc +++ /dev/null @@ -1,185 +0,0 @@ -= Working with the MARC Editor = -:toc: - -== Editing MARC Records == - -. Retrieve the record. -+ -[TIP] -====== -You can retrieve records in many ways, including: - -* If you know its database ID, enter it into Cataloging > Retrieve Bib Record by ID. -* If you know its control number, enter it into Cataloging > Retrieve Bib Record by TCN. -* Search for it in the catalog. -* Click on a link from the Acquisitions or Serials modules. -====== -+ -. Click on the MARC Edit tab. -. The MARC record will display. -. Select viewing and editing options, if desired. -* Stack subfields to display each subfield on its own line. -* Flat-Text Editor switches to a plain-text (mnemonic) MARC format. This format can be useful when copying and pasting multiple lines. It also allows the use of tools like MarcEdit (http://marcedit.reeset.net/). 
Uncheck the box to switch back. - * Note that you can use a backslash character as a placeholder in the flat text editor's indicators and fixed-length fields. -* Add Item allows attaching items quickly with call number and barcode. When _Save_ is clicked, the copy editor will open. NOTE: Browser pop-up blockers will prevent this; please allow pop-ups. -. Make changes as desired. -* Right click into a tag field to add/remove rows or replace tags. -* To work with the data in a tag or indicator, click or _Tab_ into the required field. Right click to view valid -tags or indicators. -+ -[NOTE] -========== -You can navigate the MARC Editor using keyboard shortcuts. Click _Help_ to see the shortcut menu from -within the MARC Editor. -========== -+ -. When finished, click _Save_. The record will remain open in the editor. You can close the browser window or browser tab, or you can switch to -another view from the navigation near the top (for example, to view it as it appears in the OPAC, choose _OPAC View_). - -=== MARC Record Leader and MARC fixed field 008 === - -You can edit parts of the leader and the 008 field in the MARC Editor via the fixed field editor box displayed above -the MARC record. - -==== To edit the MARC record leader ==== - -. Retrieve and display the appropriate record in _MARC Edit_ view. - -. Click into any box displayed in the fixed field editor. - -. Press _Tab_ or use the mouse to move between fields. - -. Click _Save_. - -. The OPAC icon for the appropriate material type will display. - - -OPAC icons for text, moving pictures and sound rely on correct MARC coding in the leader, 007, and 008, as do OPAC -search filters such as publication date, item type, or target audience. - -==== MARC Fixed Field Editor Right-Click Context Menu Options ==== - -The MARC Fixed Field Editor provides suggested values for select fixed fields based on the record type being edited. 
Users can right-click on the value control for a fixed field and choose the appropriate value from the menu options. -The Evergreen database contains information from the Library of Congress’s MARC 21 format standards that includes possible values for select fixed fields. The right-click context menu options are available for fixed fields whose values are already stored in the database. For fixed fields that do not have possible values stored in the database, the user will receive the default web browser menu (such as cut, copy, paste, etc.). - -*To Access the MARC Fixed Field Editor Right-Click Context Menu Options:* - -. Within the bibliographic record that needs to be edited, select *MARC Edit*. -. Make sure that the Flat-Text Editor checkbox is not selected and that you are not using the Flat-Text Editor interface. -. Right-click on the value control for the fixed field that needs to be edited. -+ -image::media/ffrc1_2.12.jpg[Right click on the fixed field input labeled Form] -+ -. Select the appropriate value for the fixed field from the menu options. -+ -image::media/ffrc2_2.12.jpg[One of the options in the Form fixed field context menu is r - Regular print reproduction] -+ -. Continue editing the MARC record, as needed. Once you are finished editing the record, click *Save*. - -Changing the values in the fixed fields will also update the appropriate position in the Leader or 008 Field and other applicable fields (such as the 006 Field). - -image::media/ffrc3_2.12.jpg[Selecting r in the context menu resulted in an r being placed in the 008 field later in the MARC Record display] - -MARC Editor users retain the option of leaving the fixed field value blank or entering special values (such as # or | ). - -[NOTE] -It may be necessary for MARC Editor users to first correctly pad the fixed fields to their appropriate lengths before making further modifications to the fixed field values. 
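To make the fixed-field/008 relationship concrete, the sketch below shows the essence of what happens when a value is chosen from the context menu: the selected code is written into the character position that the fixed field occupies for that record type. This is an illustrative sketch only, not Evergreen code; the position used (23, Form of item for book-type records) is this example's assumption, since Evergreen looks positions up per record type in its own configuration.

```python
# Illustrative sketch only, not Evergreen code. Writing a fixed-field
# value means replacing one character position in the 008 string.
# Position 23 (Form of item, book-type records) is assumed for this example.

def set_fixed_field(field_008, position, value):
    """Return a copy of an 008 string with one character position replaced."""
    padded = field_008.ljust(40)  # the book 008 is 40 characters; pad first
    return padded[:position] + value + padded[position + 1:]

original = "200904s2020    xxu           000 0 eng d"
updated = set_fixed_field(original, 23, "r")  # Form = r (regular print repro)
print(updated[23])  # r
```

Note the padding step, which mirrors the caution above that fixed fields may need to be padded to their proper lengths before further edits.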
- - -*Administration* -The Evergreen database already contains information from the Library of Congress’s MARC 21 format standards that includes possible values for select fixed fields. Users may also add values to these and other fixed fields through the MARC Coded Value Maps interface. Once new values are added, the right-click context menu for the selected fixed field will display those values in the MARC Editor for any Record Type that utilizes that fixed field. -There are three relevant tables that contain the values that display in the fixed field context menu options: - -. *config.marc21_ff_pos_map* describes, for the given record type, where a fixed field is located, its start position, and its length. -. *config.coded_value_map* defines the set of valid values for many of the fixed fields and the translatable, human-friendly labels for them. -. *config.record_attr_definition* links together the information from the config.marc21_ff_pos_map and config.coded_value_map tables. - -=== Deleting MARC Records === -You can delete MARC records using the MARC Editor. - -==== To Delete a MARC record ==== - -. Retrieve and display the appropriate record in the MARC editor. -. Click on the _MARC Edit_ tab. -. Click the *Delete* button. -. In the modal window, click the *OK/Continue* button to remove the MARC record. - -image::media/marc_delete_record_3_3.png[The Delete button is located in the Marc Edit tab] - -=== MARC Tag-table Service === -The tag tables for the web staff client MARC editor are -stored in the database. The tag-table -service has the following features: - -- specifies whether (sub)fields are optional or mandatory -- specifies whether (sub)fields are repeatable or not -- a coded value map can be associated with a subfield to - establish a controlled vocabulary for that subfield -- MARC field and subfield definitions can be overridden - by institutions further down in the organizational unit - hierarchy. 
This allows, for example, a library to specify - definitions for local MARC tags. -- values supplied by the tag-table service are used to - populate values in context menus in the web staff client - MARC editor. - -MARC Tag Tables can be found under Administration -> Server Administration -> MARC Tag Tables. - -MARC Tag Tables Grid: - -image::media/MARC_Tag_Tables_Grid.PNG[Grid view of MARC Tag Tables] - -MARC Tag Tables Detail: - -image::media/MARC_Tag_Tables_Detail.PNG[Detail view of MARC Tag Tables] - -The initial seed data for the in-database tag table is -derived from the current tooltips XML file. - -== MARC 007 Field Physical Characteristics Wizard == - -The MARC 007 Field Physical Characteristics Wizard enables catalogers to interact with a database wizard that leads the user step-by-step through the MARC 007 field positions. The wizard displays the significance of the current position and provides dropdown lists of possible values for the various components of the MARC 007 field in a more user-friendly way. - -*To Access the MARC 007 Field Physical Characteristics Wizard for a Record that Does Not Already Contain the 007 Field (i.e. Creating the 007 Field from Scratch):* - -. Within the bibliographic record that needs to be edited, select *MARC Edit*. -. Make sure that the Flat-Text Editor checkbox is not selected and that you are not using the Flat-Text Editor interface. -. Right-click in the MARC field column. -+ -image::media/pcw1_2.12.jpg[] -+ -. Click *Add/Replace 007*. The 007 row will appear in the record. -. Click the chain link icon to the right of the field. -+ -image::media/pcw2_2.12.jpg[] -+ -. Click *Physical Characteristics Wizard*. - -The *MARC 007 Field Physical Characteristics Wizard* will open. - -*Using the Physical Characteristics Wizard:* - -As the user navigates through the wizard, each position will display its corresponding label that describes the significance of that position. 
Each position contains a selection of dropdown choices that list the possible values for that particular position. When the user makes a selection from the dropdown options, the value for that position will also change. - -The first value defines the *Category of Material*. Users select the Category of Material for the given record by choosing an option from the *Category of Material?* dropdown menu. The choices within the remaining character positions will be appropriate for the Category of Material selected. - -Once the Category of Material is selected, click *Next*. - -Evergreen will display the result of each selection in the preview above. The affected character will be in red. - -image::media/pcw3_2.12.jpg[] - -By clicking either the *Previous* or *Next* buttons, the user may step forward and backward, as needed, through the various positions in the 007 field. - -Once the user enters all of the applicable values for the 007 field and is ready to exit the wizard, click *Save*. - -image::media/pcw4_2.12.jpg[] - -All of the values selected will be stored and displayed within the 007 field of the bibliographic record. - -image::media/pcw5_2.12.jpg[] - -Continue editing the MARC record, as needed. Once the user is finished editing the record, click *Save*. - -image::media/pcw6_2.12.jpg[] - diff --git a/docs-antora/modules/cataloging/pages/MARC_batch_edit.adoc b/docs-antora/modules/cataloging/pages/MARC_batch_edit.adoc deleted file mode 100644 index b9435a6bab..0000000000 --- a/docs-antora/modules/cataloging/pages/MARC_batch_edit.adoc +++ /dev/null @@ -1,96 +0,0 @@ -= MARC Batch Edit = -:toc: - -== Introduction == - -This function is used to batch edit MARC records either adding a field, removing a field or changing the contents of a field. - -.What MARC Batch Edit Can and Can't Do -************************************** -MARC Batch Edit is a powerful tool, but it also has some limitations. 
-This tool can do the following tasks to a group of MARC records: - -* Remove all instances of a specific tag (e.g. remove all 992 tags) -* Remove all instances of a specific tag _if_ a particular subfield -has a particular value (e.g. remove all 650 fields in which the $2 -is _fast_) -* Remove all instances of a specific subfield (e.g. remove all 245$h) -* Remove all instances of a specific set of subfields -* Add a field -* Add a subfield to an existing field -* Replace data in a specific field or subfield - -It cannot do more advanced tasks, such as: - -* Swapping data from one field to another -* Deduplicating MARC records -* Complex logic based on existing data - -For more advanced projects, you may wish to export your records and -use a free tool such as http://marcedit.reeset.net/[MARCEdit] or -https://github.com/edsu/pymarc[PyMarc]. - -************************************** - -== Setting Up a Batch Edit Session == - -Record Source:: -This provides options for identifying the MARC records to edit: a record bucket, a CSV file, or record IDs. - -Go! (button):: -This button runs the action defined by the rule template(s). - -=== Action (Rule Type) === -Replace:: -Replaces the value in a MARC field for a batch of records. -Delete:: -Removes a MARC field and its contents from the batch of records. -Add:: -Use this to add a field and its contents to a batch of records. - -=== Other Template Fields === -MARC Tag:: -This is used to identify the field for adding, replacing, or deleting. -Subfield (optional):: -Indicates which subfield is being edited. -MARC Data:: -Use this to indicate the data to add, or the data to use when replacing the existing data. - -=== Advanced Matching Restrictions (Optional) === -Subfield -Regular Expression:: -Use a regular expression (PERL syntax) to identify the data to be removed or replaced. - -.Running a Template to Add, Delete, or Replace MARC data -. Click Cataloging->MARC Batch Edit -. Select *Record source* -. 
Select the appropriate bucket, load the CSV file, or enter record IDs, depending on the *Record source* selected -. Select the *Action Rule* -. Enter the *MARC Tag* with no indicators (e.g. 245) -. Enter the *subfields* with no spaces. Subfields are optional. Multiple subfields can be entered, such as _auz_. -. Enter the *MARC Data*, which is the value in the field -. Enter optional *Advanced Matching Restrictions* -.. Subfield -.. Regular Expression (using PERL syntax) -. Click *Go!* -. A results page will display, indicating the number of records successfully edited - -== Examples == - -=== Adding a new field to all records === - -. In the _action_ menu, choose _Add_. -. In _MARC Tag_, type the MARC tag number. -. Leave the _Subfields_ field blank. -. In _MARC Data_, type the field you would like to add. - -=== Delete a field if it contains a particular string === - -. In the _action_ menu, choose _Delete_. -. In _MARC Tag_, type the MARC tag number. -. Leave the _Subfields_ field blank. -. In _MARC Data_, type the contents of the field you would like to delete. -. In the _subfield_ field under _Advanced Matching Restriction_, type the subfield code where you expect to see the string. -. In _Regular Expression_, type the string you expect to see. - - diff --git a/docs-antora/modules/cataloging/pages/_attributes.adoc b/docs-antora/modules/cataloging/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/cataloging/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. 
-include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/cataloging/pages/authorities.adoc b/docs-antora/modules/cataloging/pages/authorities.adoc deleted file mode 100644 index ec8bdff8d7..0000000000 --- a/docs-antora/modules/cataloging/pages/authorities.adoc +++ /dev/null @@ -1,148 +0,0 @@ -= Managing Authorities = -:toc: - -== Introduction == - -This section describes how you can create, import, view, modify, merge, and delete authority records in Evergreen. - -== Creating Authorities == -Currently, to create a new authority record in Evergreen, as opposed to importing one, you -need to have a bib record open in the bib MARC editor. - -* For example, if you want to create a new author -authority, you need to have a bib record that has a bib 1xx or 7xx tag with the main entry filled out. -* Then you need to right-click on that 1xx or 7xx tag. In the context menu that shows up, select _Create -New Authority from this field_, then select either _Create Immediately_ or _Create and Edit..._. -* If you -choose _Create and Edit..._, after the authority MARC editor opens you need to click on the _Save_ button -to add the new authority record to your system. - - -[[importing_authority_records_from_the_staff_client]] -== Importing Authorities == -. Click *Cataloging -> MARC Batch Import/Export.* -. You may create a queue to better track this import project. If you do not create a new queue, it will automatically put your records into a default queue named *-*. -. Don't set a value for Holdings Import Profile, because this doesn't apply to authority records. -. Select a file of authority data and put it in the *File to Upload* field. -. Make sure all the settings are correct, then press *Upload.* -+ -The screen displays "Uploading... Processing..." to show that the records -are being transferred to the server, then displays a progress bar to show -the actual import progress. 
When the staff client displays the progress -bar, you can disconnect your staff client safely. Very large batches of -records might time out at this stage. - -. Evergreen will automatically assign a thesaurus based on the *Subj* fixed field, which is character 11 in the 008 field. -. Evergreen will also try to determine who edited the record (based on the MARC 905u field or the user performing the import) and set the edit date, which you can view -when you examine the record in the future. - -. Once the import is finished, the staff client displays the results of -the import process. You can manually display the import progress by -selecting the _Inspect Queue_ tab of the _MARC Batch Import/Export_ -interface and selecting the queue name. By default, the staff client does -not display records that were imported successfully; it only shows records -that conflicted with existing entries in the database. The screen shows -the overall status of the import process in the top right-hand corner, -with the Total and Imported number of records for the queue. - - -[TIP] -================= -If you are importing authorities from an external vendor and want to track this, you may wish to set a unique Record Source. This source will be visible in the MARC -Editor and in the 901$s field of the imported authority records. -================= - - -=== Setting up Authority Record Match Sets === -. Click *Cataloging -> MARC Batch Import/Export.* -. Click *Record Match Sets.* -. If you have sufficient privileges, you will be able to click on the *New Match Set*. If you are unable to do so, check that you have the ADMIN_IMPORT_MATCH_SET permission. -. Give your new set a descriptive name, an owning library, and a match set type of *authority.* -. Click on the blue hyperlinked name of the match set you just created to add criteria. -. You can match against MARC tag/subfield entries or against a record's normalized heading. 
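Normalized-heading matching works because both the incoming and the existing heading are reduced to the same canonical form before comparison. Evergreen's actual normalization routine lives in the database; the sketch below is only a hypothetical illustration of the idea, including a thesaurus tag that keeps similar terms from different thesauri apart (the function name and exact normalization steps are this example's assumptions):

```python
import re

def normalize_heading(heading, thesaurus):
    """Hypothetical normalization: strip punctuation, lowercase, collapse
    whitespace, and prefix the thesaurus code so that identical terms from
    different thesauri still produce distinct match keys."""
    text = re.sub(r"[^\w\s]", "", heading).lower()
    text = re.sub(r"\s+", " ", text).strip()
    return "%s|%s" % (thesaurus, text)

print(normalize_heading("Cooking (Seafood)", "a"))   # a|cooking seafood
print(normalize_heading("cooking   seafood.", "a"))  # a|cooking seafood
```

Because the two variant headings above reduce to the same key, they would match; the same term under a different thesaurus code would not.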
- -[NOTE] -================= -Evergreen's database stores normalized authority headings in a format that includes the thesaurus. This way, record match sets will not match terms from other thesauri, even if the term is very similar. -================= - -[TIP] -================= -Evergreen's internal identifier is in the 901c field. If you have previously exported authority records -- perhaps for an external vendor to do authority cleanup work -- and you want to import them back into your catalog, you may wish to include the 901c field in your match set. -================= - -== Viewing and Editing Authority Records by Database ID == - -The authority record retriever allows catalogers to retrieve a specific -authority record using its database ID. Catalogers can -find those IDs in subfield $0 of matching fields in -bibliographic records. - -To use the authority record retriever: - -. Click *Cataloging -> Retrieve Authority Record by ID*. -. Type in the ID number of the authority record you are -interested in. Don't include any prefixes, just the ID -number. -. Click *Submit*. -. View or edit the authority record as needed. - - -== Manage Authorities Interface == - -In Evergreen, to view, edit, merge, and delete authority records, you use the *Manage Authorities* interface -through the *Cataloging* menu. - - - -=== Searching for authorities === - -To search for authorities in your system, first select the *Cataloging* menu and then select *Manage Authorities*. -Then proceed to fill out the search form. - -. Type in your _Search Term_ -. Select an _Authority type_; types currently include: Author, Subject, Title, Topic -. Click on the _Submit_ button - - -The authority search results will include the following elements from left to right: - -* _Actions_ menu, which can be used to select actions that affect the corresponding authority record. 
Actions include: -_Edit_, _Mark for Merge_, _Delete_ -* Count of how many bibs are linked to the corresponding authority -* Main entry of the authority, i.e. the authority tag 1xx value -* _Control set_ value, with LoC being the default, but others can be added -* Authority Subject heading system/thesaurus, for example, a value of "a" means the authority originated from the Library of Congress - (http://www.loc.gov/marc/authority/ad008.html) - - -*Library of Congress list of thesaurus values:* - -* '' = Alternate no attempt to code -* a = Library of Congress Subject Headings -* b = LC subject headings for children's literature -* c = Medical Subject Headings -* d = National Agricultural Library subject authority file -* k = Canadian Subject Headings -* n = Not applicable -* r = Art and Architecture Thesaurus -* s = Sears List of Subject Headings -* v = Répertoire de vedettes-matière -* z = Other -* | = No attempt to code - - -==== Editing authority records ==== - -Editing an authority record (or merging two authority records) can cause its linked bibliographic records to also update. For example, -if you correct a spelling error in the 150 field of a subject authority record, the relevant 650 field in linked bibliographic records -will also be updated to reflect the correct spelling. - -[TIP] -================= -When a bib record is automatically updated as a result of the modification of a linked authority record, the bib record's "Last Edit Date/ -Time" and "Last Editing User" fields will be updated to match the time of the update and the editor of the authority record. If you'd -prefer that these fields not be automatically updated, you can set the _ingest.disable_authority_auto_update_bib_meta_ setting to true in the -Library Settings Editor.
-================= - diff --git a/docs-antora/modules/cataloging/pages/batch_importing_MARC.adoc b/docs-antora/modules/cataloging/pages/batch_importing_MARC.adoc deleted file mode 100644 index 42211c4f58..0000000000 --- a/docs-antora/modules/cataloging/pages/batch_importing_MARC.adoc +++ /dev/null @@ -1,396 +0,0 @@ -= Batch Importing MARC Records = -:toc: - -== Introduction == - -indexterm:[MARC records,importing,using the staff client] - -[[batchimport]] -The cataloging module includes an enhanced MARC Batch Import interface for -loading MARC (and MARCXML) records. In general, it can handle batches of up to 5,000 records -without a problem. This interface allows you to specify match points -between incoming and existing records, to specify MARC fields that should be -overlaid or preserved, and to only overlay records if the incoming record is -of higher quality than the existing record. Records are added to a queue where -you can apply filters to identify any errors that may have -occurred during import. You can print, email, or export your queue as a CSV file. - -== Permissions == - -To use match sets to import records, you will need the following permission: - -ADMIN_IMPORT_MATCH_SET - - -== Record Display Attributes == - -This feature enables you to specify the tags and subfields that will display in -records that appear in the import queue. - - -[[matchsets]] -== Record Match Sets == - -This feature enables you to create custom match points that you can use to -accurately match incoming records with existing catalog records. - -=== Creating a Match Set === - -In this example, to demonstrate matching on record attributes and MARC tags and -subfields, we will create a record match set that defines a match based on the -title of the record, in either the 240 or 245, and the fixed field, Lang. You -can add multiple record attributes and MARC tags to customize a record match -set. - - -. Click *Cataloging -> MARC Batch Import/Export*. - -. 
Create a new record match set. Click *Record Match Sets -> New Match Set*. - -. Enter a name for the record match set. - -. Select an *Owning Library* from the drop down menu. Staff with permissions -at this location will be able to use this record match set. - -. Select a *Match Set Type* from the drop down menu. You can create a match -set for authority records or bibliographic records. - -. Click *Save*. -+ -image::media/Batch_Importing_MARC_Records1.jpg[Batch_Importing_MARC_Records1] - -. The screen will refresh to list the record match set that you created. Click -the link to the record match set. - -. Create an expression that will define the match points for the incoming -record. You can choose from two areas to create a match: *Record Attribute* or -*MARC Tag and Subfield*. You can use the Boolean operators AND and OR to -combine these elements to create a match set. - -. Select a *Record Attribute* from the drop-down menu. - -. Enter a *Match Score.* The *Match Score* indicates the relative importance -of that match point as Evergreen evaluates an incoming record against an -existing record. You can enter any integer into this field. The number that -you enter is only important as it relates to other match points. Recommended -practice is to assign a match score of one (1) to the least important -match point and to assign scores in increasing powers of two (2, 4, 8, and so on) -to match points of increasing importance. - -. Check the *Negate?* box if you want to negate the match point. Checking -this box would be the equivalent of applying a Boolean operator of NOT to the -match point. -+ -image::media/Batch_Importing_MARC_Records2.jpg[Batch_Importing_MARC_Records2] - -. Click *Ok.* - -. Drag the completed match point under the appropriately-named Boolean -operator folder in the Expression tree. -+ -image::media/Batch_Importing_MARC_Records3.jpg[Batch_Importing_MARC_Records3] -+ -The match point will nest underneath the folder in the Expression tree.
-+ -image::media/Batch_Importing_MARC_Records4.jpg[Batch_Importing_MARC_Records4] - -. Enter another *Boolean Operator* to further refine your match set. - -. Click *Boolean Operator*. - -. Select the *OR* operator from the drop down menu. - -. Click *Ok*. - -. Drag the operator to the expression tree. -+ -image::media/Batch_Importing_MARC_Records5.jpg[Batch_Importing_MARC_Records5] - -. Click *MARC Tag and Subfield*. - -. Enter a *MARC tag* on which you want the records to match. - -. Enter a *subfield* on which you want the records to match. - -. Enter a *Match Score.* The *Match Score* indicates the relative importance -of that match point as Evergreen evaluates an incoming record against an -existing record. You can enter any integer into this field. The number that -you enter is only important as it relates to other match points. Recommended -practice is to assign a match score of one (1) to the least important -match point and to assign scores in increasing powers of two (2, 4, 8, and so on) -to match points of increasing importance. - -. Check the *Negate?* box if you want to negate the match point. Checking -this box would be the equivalent of applying a Boolean operator of NOT to the -match point. - -. Click *Ok.* -+ -image::media/Batch_Importing_MARC_Records6.jpg[Batch_Importing_MARC_Records6] - -. Drag the completed match point under the appropriately-named Boolean -operator folder in the Expression tree. The Expression -will build across the top of the screen. - -. Add additional MARC tags or record attributes to build the expression tree. - -. Click *Save Changes to Expression*. -+ -image::media/Batch_Importing_MARC_Records7.jpg[Batch_Importing_MARC_Records7] - -=== Replace Mode === - -Replace Mode enables you to replace an existing part of the expression tree -with a new record attribute, MARC tag, or Boolean operator. For example, if -the top of the tree is AND, in Replace Mode, you could change that to an OR. - -. Create a working match point.
- -. Click *Enter Replace Mode*. - -. Highlight the piece of the tree that you want to replace. - -. Drag the replacement piece over the highlighted piece. - -. Click *Exit Replace Mode*. - - -=== Quality Metrics === - -. Set the *Quality Metrics for this Match Set*. Quality metrics are used to -determine the overall quality of a record. Each metric is given a weight and -the total quality value for a record is equal to the sum of all metrics that -apply to that record. For example, a record that has been cataloged thoroughly -and contains accurate data would be more valuable than one of poor quality. You -may want to ensure that the incoming record is of the same or better quality -than the record that currently exists in your catalog; otherwise, you may want -the match to fail. The quality metric is optional. - -. You can create quality metrics based on the record attribute or the MARC Tag -and Subfield. - -. Click *Record Attribute.* - -. Select an attribute from the drop down menu. - -. Enter a value for the attribute. - -. Enter a match score. You can enter any integer into this field. The number -that you enter is only important as it relates to other quality values for the -current configuration. Higher scores indicate higher-quality -incoming records. As with the expression match scores, you can assign -quality values in increasing powers of two (2, 4, 8, and so on). - -. Click *Ok*. -+ -image::media/Batch_Importing_MARC_Records8.jpg[Batch_Importing_MARC_Records8] - -== Merge/Overlay Profiles == - -If Evergreen finds a match for an incoming record in the database, you need to identify which fields should be replaced, which should be preserved, and which should be added to the record. -Click the Merge/Overlay Profiles button to create a profile that contains this information. - -You can use these profiles when importing records through the MARC Batch Importer or Acquisitions Load MARC Order Records interface.
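The field-level behavior of these profiles can be sketched in a short Python model. This is an illustrative simplification only, not Evergreen's implementation: records are reduced to dictionaries mapping a MARC tag to a list of field values, and the four specification names mirror the profile options described in this section.

```python
# Illustrative model of merge/overlay profile semantics -- not Evergreen's
# actual implementation. Records are simplified to dicts mapping a MARC tag
# to a list of field values.

def apply_merge_profile(existing, incoming,
                        preserve=(), replace=(), add=(), remove=()):
    """Merge an incoming record into an existing one, tag by tag."""
    # Remove specification: strip these tags from the incoming record first.
    incoming = {tag: flds for tag, flds in incoming.items() if tag not in remove}
    merged = {tag: list(flds) for tag, flds in existing.items()}
    # Replace specification: existing fields replaced by the incoming ones.
    for tag in replace:
        if tag in incoming:
            merged[tag] = list(incoming[tag])
    # Add specification: incoming fields added in addition to any already there.
    for tag in add:
        if tag in incoming:
            merged[tag] = merged.get(tag, []) + list(incoming[tag])
    # Preserve specification: existing fields kept even if also targeted above.
    for tag in preserve:
        if tag in existing:
            merged[tag] = list(existing[tag])
    return merged

existing = {"245": ["Old title"], "500": ["Old note"], "650": ["Old subject"]}
incoming = {"245": ["New title"], "500": ["New note"], "856": ["http://example.org"]}
result = apply_merge_profile(existing, incoming,
                             replace=("245",), add=("856",), remove=("500",))
```

In this example the incoming 245 replaces the existing title, the incoming 856 is added alongside existing fields, and the incoming 500 is removed before the merge, so the existing note survives untouched.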
- -You can create a new profile by clicking the New Merge Profile button. Available options for handling the fields include: - -. _Preserve specification_ - fields in the existing record that should be preserved. - -. _Replace specification_ - fields in the existing record that should be replaced by those in the incoming record. - -. _Add specification_ - fields from the incoming record that should be added to the existing record (in addition to any already there). - -. _Remove specification_ - fields that should be removed from the incoming record. - -. _Update bib source_ - If this value is false, just the bibliographic data will be updated when you overlay a new MARC record. If it is true, then Evergreen will also update -the record's bib source to the one you select on import, the last edit date to the date the new record is imported, and the last editor to the person who imported the new -record. - -You can add multiple tags to the specification options, separating each tag with a comma. - - -== Import Item Attributes == -If you are importing items with your records, you will need to map the data in -your holdings tag to fields in the item record. Click the *Holdings Import -Profile* button to map this information. - -. Click the *New Definition* button to create a new mapping for the holdings tag. -. Add a *Name* for the definition. -. Use the *Tag* field to identify the MARC tag that contains your holdings - information. -. Add the subfields that contain specific item information to the appropriate - item field. -. At a minimum, you should add the subfields that identify the *Circulating -Library*, the *Owning Library*, the *Call Number* and the *Barcode*. - -NOTE: All fields (except for Name and Tag) can contain a MARC subfield code -(such as "a") or an XPATH query. You can also use the -related library settings to set defaults for some of these fields.
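The mapping a holdings import profile performs can be sketched as follows. This is a simplified model under stated assumptions: the 949 tag is just a common convention, the `profile` dictionary and record structure are hypothetical, and real profiles (configured in the staff client) may also use XPath queries.

```python
# Sketch of what a holdings import profile does: map subfields of a
# holdings tag (949 is a common convention, not a requirement) onto
# item-record fields. The profile mapping below is hypothetical.

profile = {
    "circulating_library": "a",
    "owning_library": "b",
    "call_number": "c",
    "barcode": "d",
}

def extract_items(record_fields, tag="949", mapping=profile):
    """Build one item dict per occurrence of the holdings tag in a record."""
    items = []
    for field in record_fields:
        if field["tag"] != tag:
            continue
        subfields = field["subfields"]  # e.g. {"a": "BR1", ...}
        items.append({name: subfields.get(code) for name, code in mapping.items()})
    return items

# One 949 holdings field carrying the four recommended minimum subfields:
fields = [{"tag": "949",
           "subfields": {"a": "BR1", "b": "BR1",
                         "c": "FIC TWAIN", "d": "31234000123456"}}]
items = extract_items(fields)
```

Each occurrence of the holdings tag produces one item, so a record with three 949 fields would yield three items.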
- -image::media/batch_import_profile.png[Partial Screenshot of a Holdings Import Profile] - -.Holdings Import Profile Fields -[options="header"] -|============================= -|Field | Recommended | Description -|Name | Yes | Name you will choose from the MARC Batch Import screen -|Tag | Yes | MARC Holdings Tag/Field (e.g. 949). Use the Tag field to -identify the MARC tag that contains your holdings information. -|Barcode | Yes | -|Call Number | Yes | -|Circulating Library | Yes | -|Owning Library | Yes | -|Alert Message || -|Circulate || -|Circulate As MARC Type || -|Circulation Modifier || -|Copy Number || -|Deposit || -|Deposit Amount || -|Holdable || -|OPAC Visible || -|Overlay Match ID || The copy ID of an existing item to overlay -|Parts Data || Of the format `PART LABEL 1\|PART LABEL 2`. -|Price || -|Private Note || -|Public Note || -|Reference || -|Shelving Location || -|Stat Cat Data || Of the format `CATEGORY 1\|VALUE 1\|\|CATEGORY 2\|VALUE 2`. -If you are overlaying existing items which already have stat cats -attached to them, the overlay process will keep those values unless the -incoming items contain updated values for matching categories. -|Status || -|============================= - - -== Import Records == - -The *Import Records* interface incorporates record match sets, quality metrics, -more merging options, and improved ways to manage your queue. In this example, -we will import a batch of records. One of the records in the queue will -have a matching record in the catalog that is of lower quality than the -incoming record. We will import the record according to the guidelines set by -our record match set, quality metrics, and merge/overlay choices that we will -select. - -. Select a *Record Type* from the drop down menu. - -. Create a queue to which you can upload your records, or add your records to -an existing queue. Queues are linked to match sets and a holdings import -profile.
You cannot change a queue's holdings import profile or record match set. - -. Select a *Record Match Set* from the drop down menu. - -. Select a *Holdings Import Profile* if you want to import holdings that are -attached to your records. - -. Select a *Record Source* from the drop down menu. - -. Select a *Merge Profile*. Merge profiles enable you to specify which tags -should be removed or preserved in incoming records. - -. Choose one of the following import options if you want to auto-import -records: - -.. *Merge on Single Match* - Using the Record Match Set, Evergreen will only -attempt to perform the merge/overlay action if only one match was found in the -catalog. - -.. *Merge on Best Match* - If more than one match is found in the catalog for a -given record, Evergreen will attempt to perform the merge/overlay action with -the best match as defined by the match score and quality metric. -+ -NOTE: Quality ratio affects only the *Merge on Single Match* and *Merge on Best -Match* options. - -. Enter a *Best/Single Match Minimum Quality Ratio.* This ratio is the incoming -record's quality score divided by the quality score of the best match that might -exist in the catalog. By default, Evergreen will assign any record a quality -score of 1 (one). If you want to ensure that the inbound record is only -imported when it has a higher quality than the best match, then you must enter -a ratio that is higher than 1. For example, if you want the incoming record to -have twice the quality of an existing record, then you should enter a 2 (two) -in this field. If you want to bypass all quality restrictions, enter a 0 (zero) -in this field.
This field is typically used for selecting a merge -profile that allows the user to import holdings attached to a lower quality -record without replacing the existing (target) record with the incoming record. -This field is optional. - -. Under *Copy Import Actions*, choose _Auto-overlay In-process Acquisitions -Copies_ if you want to overlay temporary copies that were created by the -Acquisitions module. The system will attempt to overlay copies that: - -* have associated lineitem details (that is, they were created by the acquisitions process), -* that lineitem detail has the same owning_lib as the incoming copy's owning_lib, and -* the current copy associated with that lineitem detail is _In process_. - -. *Browse* to find the appropriate file, and click *Upload*. The file will -be uploaded to a queue. The file can be in either MARC or MARCXML format. -+ -image::media/marc_batch_import_acq_overlay.png[Batch Importing MARC Records] - -. The screen will display records that have been uploaded to your queue. Above -the table there are three sections: - * *Queue Actions* lists common actions for this queue. _Export Non-Imported -Records_ will export a MARC file of records that failed to import, allowing -those records to be edited as needed and imported separately. (Those -records can be viewed by clicking the _Limit to Non-Imported Records_ -filter.) - * *Queue Summary* shows a brief summary of the records included in the queue. - * *Queue Filters* provides options for limiting which records display in the -table. -+ -image::media/Batch_Importing_MARC_Records15.jpg[Batch_Importing_MARC_Records15] - -. If Evergreen indicates that matching records exist, then click the -*Matches* link to view the matching records. Check the box adjacent to the -existing record that you want to merge with the incoming record. -+ -image::media/Batch_Importing_MARC_Records10.jpg[Batch_Importing_MARC_Records10] - -. Click *Back to Import Queue*. - -. 
Check the boxes of the records that you want to import, and click *Import -Selected Records*, or click *Import All Records*. - -. A pop up window will offer you the same import choices that were present on -the *Import Records* screen. You can choose one of the import options, or -click *Import*. -+ -image::media/marc_batch_import_popup.png[Batch Importing MARC Records Popup] - -. The screen will refresh. The *Queue Summary* indicates that the record was -imported. The *Import Time* column records the date that the record was -imported. Also, the *Imported As* column should now display the database ID (also known as the bib record number) for the imported record. -+ -image::media/Batch_Importing_MARC_Records12.jpg[Batch_Importing_MARC_Records12] - -. You can confirm that the record was imported by using the value of the *Imported As* column by selecting the menu *Cataloging* -> *Retrieve title by database ID* and using the supplied *Imported As* number. Alternatively, you can search the catalog to confirm that the record was imported. -+ -image::media/Batch_Importing_MARC_Records14.jpg[Batch_Importing_MARC_Records14] - - -== Default Values for Item Import == - -Evergreen now supports additional functionality for importing items through *Cataloging* -> *MARC Batch Import/Export*. When items are imported via a *Holdings Import Profile* in *Cataloging* -> *MARC Batch Import/Export*, Evergreen will create an item-level record for each copy. If an item barcode, call number, shelving location, or circulation modifier is not set in the embedded holdings, Evergreen will apply a default value based on the configured Library Settings. A default prefix can be applied to the auto-generated call numbers and item barcodes. 
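This default-value behavior can be sketched as follows. The sketch is illustrative only: the setting keys are shorthand modeled on the Vandelay library settings (not their real keys), and the sequence-number scheme for auto-generated values is an assumption, not Evergreen's actual generation logic.

```python
# Sketch of filling unset item fields with configured defaults during import.
# Setting keys and the seq-based generation scheme are illustrative assumptions.

settings = {
    "generate_default_barcodes": True,
    "default_barcode_prefix": "VAN",
    "generate_default_call_numbers": True,
    "default_call_number_prefix": "IMPORT-",
    "default_copy_location": "Stacks",
    "default_circ_modifier": "book",
}

def apply_item_defaults(item, settings, seq):
    """Fill unset fields on an imported item with configured defaults."""
    item = dict(item)
    if not item.get("barcode") and settings["generate_default_barcodes"]:
        item["barcode"] = f"{settings['default_barcode_prefix']}{seq}"
    if not item.get("call_number") and settings["generate_default_call_numbers"]:
        item["call_number"] = f"{settings['default_call_number_prefix']}{seq}"
    if not item.get("location"):
        item["location"] = settings["default_copy_location"]
    if not item.get("circ_modifier"):
        item["circ_modifier"] = settings["default_circ_modifier"]
    return item

# An item whose embedded holdings left these fields blank:
item = apply_item_defaults({"barcode": "", "call_number": None}, settings, seq=1)
# barcode -> "VAN1", call_number -> "IMPORT-1", location -> "Stacks"
```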
- -The following *Library Settings* can be configured to apply these default values to imported items: - -* *Vandelay: Generate Default Barcodes* —Auto-generate default item barcodes when no item barcode is present - -* *Vandelay: Default Barcode Prefix* —Apply this prefix to any auto-generated item barcodes - -* *Vandelay: Generate Default Call Numbers* —Auto-generate default item call numbers when no item call number is present - -* *Vandelay: Default Call Number Prefix* —Apply this prefix to any auto-generated item call numbers - -* *Vandelay: Default Copy Location* —Default copy location value for imported items - -* *Vandelay: Default Circulation Modifier* —Default circulation modifier value for imported items - diff --git a/docs-antora/modules/cataloging/pages/cataloging_electronic_resources.adoc b/docs-antora/modules/cataloging/pages/cataloging_electronic_resources.adoc deleted file mode 100644 index 9228b51377..0000000000 --- a/docs-antora/modules/cataloging/pages/cataloging_electronic_resources.adoc +++ /dev/null @@ -1,158 +0,0 @@ -= Cataloging Electronic Resources -- Finding Them in Catalog Searches = -:toc: -There are two ways to make electronic resources visible in the catalog without -adding items to the record: - -. Adding a Located URI to the record -. Attaching the record to a bib source that is transcendent - -The Located URI approach is useful for Evergreen sites where libraries have -access to different electronic resources. The transcendent bib source approach -is useful if all libraries have access to the same electronic resources. - -Another difference between the two approaches is that electronic resources with -Located URI's never appear in results where the search is limited to a specific -shelving location(s). In contrast, transcendent electronic resources will appear in -results limited to any shelving location. 
- -== Adding a Located URI to the Record == -A Located URI allows you to add the short name for the owning library to the 856 -field to indicate which organizational units should be able to find the -resource. The owning organizational unit can be a branch, system, or consortium. - -A global flag called _When enabled, Located URIs will provide visibility -behavior identical to copies_ will determine where these resources will appear -in search results. This flag is available through *Admin* -> *Server -Administration* -> *Global Flags*. - -If the _When enabled, Located URIs will provide visibility behavior identical -to copies_ flag is set to False (default behavior): - -* When the user's search scope is set at the owning organizational unit or to -a child of the owning organizational unit, the record will appear in search -results. -* When a logged-in user's preferred search library is set to the owning -organizational unit or to a child of that owning organizational unit, the record -will appear regardless of search scope. - -If the _When enabled, Located URIs will provide visibility behavior identical -to copies_ flag is set to True: - -* When the user's search scope is set at the owning organizational unit, at a -child of the owning organizational unit, or at a parent of the owning -organizational unit, the record will appear in search results. -* When a logged-in user's preferred search library is set to the owning -organizational unit, to a child of the owning organizational unit, or to a -parent (with the exception of the consortium) of the owning organizational unit, -the record will appear regardless of search scope. - - -To add a located URI to the record: - -. Open the record in _MARC Edit_ -. Add a subfield 9 to the 856 field of the record and enter the short name of -the organizational unit for the value. Make sure there is a 4 entered as the -first indicator and a 0 entered as the second indicator. 
-For example: -+ -'856 40 $u http://lwn.net $y Linux Weekly News $9 BR1' -+ -would make this item visible to people searching in a library scope of BR1 or to -logged-in users who have set BR1 as their preferred search library. -+ -[NOTE] -If multiple organizational units own the resource, you can enter more than one -subfield 9 to the 856 field or you can enter multiple 856 fields with a subfield -9 to the record -+ -. Save the record - -[NOTE] -When troubleshooting located URIs, check to make sure there are no spaces either -before or after the organizational unit short name. - -=== Located URI Example 1 === - -The _When enabled, Located URIs will provide visibility behavior identical to -copies_ flag is set to False (default behavior) - -The Record has two 856 fields: one with SYS1 in subfield 9 and the other with -BR4 in subfield 9 - -* Any user searching SYS1 or any of its children (BR1, BR2, SL1) will find the -record. These users will only see the URL belonging to SYS1. -* Any user searching BR4 will find the record. These users will only see the -URL belonging to BR4. -* A user searching SYS2 will NOT find the record because SYS2 is a parent of -an owning org unit, not a child. The same thing happens if the user is searching -the consortium. In this case, the system assumes the user is unlikely to -have access to this resource and therefore does not retrieve it. -* A logged-in user with a preferred search library of BR4 will find the record -at any search scope. This user will see the URL belonging to BR4. Because this -user previously identified a preference for using this library, the system -assumes the user is likely to have access to this resource. -* A logged-in user with a preferred search library of BR4 who is searching SYS1 -or any of its children will also retrieve the record. 
In this case, the user -will see both URLs, the one belonging to SYS1 because the search library matches -or is a child of the owning organizational unit and the one belonging to BR4 -because it matches or is a child of the preferred search library. The URL -belonging to the search library (if it is an exact match, not a child) will sort -to the top. - -=== Located URI Example 2 === - -The _When enabled, Located URIs will provide visibility behavior identical to -copies_ flag is set to True - -The Record has two 856 fields: one with SYS1 in subfield 9 and the other with -BR4 in subfield 9 - -* Any user searching SYS1 or any of its children (BR1, BR2, SL1) will find the -record. These users will only see the URL belonging to SYS1. -* Any user searching BR4 will find the record. These users will only see the -URL belonging to BR4. -* Any user searching the consortium will find the record. These users will see -both URLs in the record. In this case, the system sees this user as a potential -user of SYS2 or BR4 and therefore offers them the option of accessing the -resource through either URL. -* A user searching SYS2 will find the record because SYS2 is a parent of -an owning org unit. The user will see the URL belonging to BR4. Once again, -the system sees this user as a potential user of BR4 and therefore offers -them the option of accessing this resource. -* A user searching BR3 will NOT find the record because BR3 is neither a child -nor a parent of an owning organizational unit. -* A logged-in user with a preferred search library of BR4 who is searching BR3 -will find the record. This user will see the URL belonging to BR4. Because this -user previously identified a preference for using this library, the system -assumes the user is likely to have access to this resource. -* A logged-in user with a preferred search library of BR4 who is searching SYS1 -or any of its children will also retrieve the record. 
In this case, the user -will see both URLs, the one belonging to SYS1 because the search library matches -or is a child of the owning organizational unit and the one belonging to BR4 -because it matches or is a child of the preferred search library. The URL -belonging to the search library (if it is an exact match, not a child) will sort -to the top. - -== Using Transcendent Bib Sources for Electronic Resources == -Connecting a bib record to a transcendent bib source will make the record -visible in search results regardless of the user's search scope. - -To start, you need to create a transcendent bib source by adding it to -'config.bib_source' in the Evergreen database and setting the _transcendant_ -field to true. For example: - -+# INSERT INTO config.bib_source(quality, source, transcendant, can_have_copies) -VALUES (50, 'ebooks', TRUE, FALSE);+ - -[NOTE] -If you want to allow libraries to add copies to these records, set the -_can_have_copies_ field to _TRUE_. If you want to prevent libraries from adding -copies to these records, set the _can_have_copies_ field to _FALSE_. - -When adding or uploading bib records for electronic resources, set the -bibliographic source for the record to the newly-created transcendent -bibliographic source. Using the staff client, the bibliographic source can be -selected in the _MARC Batch Import_ interface when importing new, non-matching -records or in the _MARC Edit_ interface when editing existing records.
This feature will enable more precise cataloging. For example, catalogers will be able to indicate items that are printed back to back, are bilingual, are part of a bound volume, are part of a set, or are available as an e-reader pre-load. This feature will also help the user retrieve more relevant search results. For example, a librarian catalogs a multi-volume festschrift. She can create a bibliographic record for the festschrift and a record for each volume. She can link the items on each volume to the festschrift record so that a patron could search for a volume or the festschrift and retrieve information about both works. - -In the example below, a librarian has created a bibliographic record for two bestselling items. These books are available as physical copies in the library, and they are available as e-reader downloads. The librarian will link the copy of the Kindle to the bibliographic records that are available on the e-reader. - -== Using the Conjoined Items Feature == - -The Conjoined Items feature was designed so that you can link items between bibliographic records when you have the item in hand, or when the item is not physically present. Both processes are described here. The steps are fewer if you have the item in hand, but both processes accomplish the same task. This document also demonstrates the process to edit or delete links between items and bibliographic records. Finally, the permission a cataloger needs to use this feature is listed. - -.Scenario 1: I want to link an item to another bibliographic record, but I do not have the item in hand - -1. Retrieve the bibliographic record to which you would like to link an item. - -2. Click *Actions for this Record -> Mark as Target for Conjoined Items.* -+ -image::media/conjoined_menu_markfor.png[Menu: Mark as Target for Conjoined Items] - -3. A confirmation message will appear. Click *OK.* - -4. 
In a new tab, retrieve the bibliographic record with the item that -you want to link to the other record. - -5. Click *Actions for this Record -> Holdings Maintenance.* - -6. Select the copy that you want to link to the other bibliographic -record. Right-click, or click *Actions for Selected Rows -> Link as -Conjoined Items to Previously Marked Bib Record.* -+ -image::media/conj2.jpg[conj2] - -7. The *Manage Conjoined Items* interface opens in a new tab. This -interface enables you to confirm the success of the link, and to change -the peer type if desired. The *Result* column indicates that you -created a successful link between the item and the bib record. -+ -image::media/conj3.jpg[conj3] -+ -The default peer type, *Back-to-back*, was set as the peer type for our item. To change a peer type after the link has been created, right-click or click *Actions for Selected Items -> Change Peer Type*. A drop down menu will appear. Select the desired peer type, and click *OK.* -+ -image::media/conj4.jpg[conj4] - -8. The *Result* column will indicate that the *Peer Type* [has been] *Updated.* -+ -image::media/conj5.jpg[conj5] - -9. To confirm the link between the item and the desired bib record, -reload the tab containing the bib record to which you linked the item. -You should now see the copy linked in the copies table. -+ -image::media/conjoined_opac.png[Catalog Record showing Conjoined Item link] - - -.Scenario 2: I want to link an item to another bibliographic record, and I do have the item in hand - -1. Retrieve the bibliographic record to which you would like to add the item. - -2. Click *Actions for this Record -> Manage Conjoined Items.* -+ -image::media/conjoined_menu_markfor.png[Menu: Manage Conjoined Items] - -3. A note in the bottom left corner of the screen will confirm that the -record was targeted for linkage with conjoined items, and the *Manage -Conjoined Items* screen will appear. - -4. 
Select the peer type from the drop down menu, and scan in the barcode
of the item that you want to link to this record.

5. Click *Link to Bib (Submit).*
+
image::media/conj10.jpg[conj10]

6. The linked item will appear on the screen. The *Result* column indicates Success.

7. To confirm the linkage, click *Actions for this Record -> OPAC View.*

8. When the bibliographic record appears, click *Reload*. *Linked Titles*
will show the linked title and item.


.Scenario 3: I want to edit or break the link between a copy and a bibliographic record

1. Retrieve the bibliographic record that has a copy linked to it.

2. Click *Actions for this Record -> Manage Conjoined Items.*

3. Select the copy that you want to edit, and right-click or click
*Actions for Selected Items.*

4. Make any changes, and click *OK.*


UPDATE_COPY - Link items to bibliographic records
diff --git a/docs-antora/modules/cataloging/pages/copy-buckets_web_client.adoc b/docs-antora/modules/cataloging/pages/copy-buckets_web_client.adoc deleted file mode 100644 index 2f448e3ced..0000000000 --- a/docs-antora/modules/cataloging/pages/copy-buckets_web_client.adoc +++ /dev/null @@ -1,289 +0,0 @@
= Item Buckets =
:toc:

Item buckets are containers into which copy records can be placed so that batch actions can easily be performed on them. Copies stay in buckets until they are removed.

The _Item Bucket_ interface is accessed by going to *Cataloguing* -> *Copy Buckets*.

image::media/copy-bucket-2.png[Cataloguing Menu]

NOTE: The words _copy_ and _item_ are used interchangeably in Evergreen.

== Managing Item Buckets ==

=== Creating Item Buckets ===

Item buckets can be created in the _Item Bucket_ interface as well as on the fly when adding items to a bucket from
a catalogue search or from within the _Item Status_ interface. For information on creating buckets on the fly see _Adding Copies to a Bucket_ (needs section ID).

1. 
In the _Item Bucket_ interface, click *Buckets* in either the _Pending Copies_ or _Bucket View_ tab.
+
image::media/copy-bucket-new-1.png[Item Bucket Interface]
+
2. From the drop down menu select *New Bucket*.
+
image::media/copy-bucket-new-2.png[Item Bucket Interface]
+
3. Enter a _Name_ and a _Description_ (optional) for your bucket and click *Create Bucket*.
+
image::media/copy-bucket-new-3.png[Item Bucket Interface]
+
The bucket can also be set as _Publicly Visible_ at this time.

NOTE: The functionality for making buckets publicly visible does not appear to be in place at this time.

=== Editing Item Buckets ===

1. In the _Item Bucket_ interface click *Buckets* in either the _Pending Copies_ or _Bucket View_ tab.
+
image::media/copy-bucket-new-1.png[Item Bucket Interface]
+
2. From the drop down menu select the bucket you would like to edit. The bucket will load in the interface.
3. Click on *Buckets*.
4. From the drop down menu select *Edit Bucket*.
+
image::media/copy-bucket-edit-1.png[Item Bucket Interface]
+
5. Update the desired information and click *Apply Changes*.
+
image::media/copy-bucket-edit-2.png[Item Bucket Interface]

NOTE: The functionality for making buckets publicly visible does not appear to be in place at this time.

=== Sharing Item Buckets ===

==== Finding the Bucket ID ====

1. With the bucket open, look at the URL for the bucket ID. Share this ID with the staff member who needs access to this bucket.

image::media/copy-bucket-share-1.png[Bucket ID URL]

==== Opening a Shared Bucket ====

. In the _Item Bucket_ interface click *Buckets* in either the _Pending Copies_ or _Bucket View_ tab.
+
image::media/copy-bucket-new-1.png[Item Bucket Interface]
+
. From the drop down menu select *Shared Bucket*.
+
image::media/copy-bucket-share-2.png[Item Bucket Interface]
+
. Enter the bucket ID and click *Load Bucket*.
+
image::media/copy-bucket-share-3.png[Item Bucket Interface]
+
. 
The shared bucket will display and can be worked with the same as any bucket you own. -+ -image::media/copy-bucket-share-4.png[Item Bucket Interface] - -=== Deleting Item Buckets === - -1. In the _Item Bucket_ interface click *Buckets* in either the _Pending Copies_ or _Bucket View_ tab. -+ -image::media/copy-bucket-new-1.png[Item Bucket Interface] -+ -2. From the drop down menu select the bucket you would like to delete. The bucket will load in the interface. -3. Click on *Buckets*. -4. From the drop down menu select *Delete Bucket*. -+ -image::media/copy-bucket-delete-1.png[Item Bucket Interface] -+ -5. On the confirmation pop up click *Delete Bucket*. -6. Refresh your screen. - - -== Adding Copies to a Bucket == - -=== From the Item Bucket Interface === - -1. In the _Item Bucket_ interface click on the *Pending Copies* tab. -+ -image::media/copy-bucket-pending-1.png[Item Bucket Interface] -+ -2. Scan in all of the items you wish to add to the bucket. -+ -image::media/copy-bucket-pending-3.png[Item Bucket Interface] -+ -3. Click on *Buckets*. -4. From the drop down menu select the bucket you wish to add the items to. -Alternatively you can create a *New Bucket* (link back to Item Bucket Interface section of Creating Copy Buckets). -+ -image::media/copy-bucket-pending-2.png[Item Bucket Interface] -+ -5. Use the check boxes to select the item(s) you wish to add to the bucket. -6. Click *Actions*. -7. From the drop down menu select *Add To Bucket*. -+ -image::media/copy-bucket-pending-4.png[Item Bucket Interface] -+ -8. The number of items in the bucket, displayed beside the bucket name, will update as will the number on the *Bucket View* tab. -+ -image::media/copy-bucket-pending-5.png[Item Bucket Interface] - -NOTE: Once you have added your selected items to a bucket you can deselect them, select other items on your pending list, and add those items to a different bucket. - - -=== From a Catalogue Search === - -1. Retrieve the title through a catalogue search. -2. 
If it is not your default view, click on the *Holdings View* tab.
+
image::media/copy-bucket-cat-1.png[Holdings View]
+
3. Use the check boxes to select the item(s) you would like to add to the bucket.
4. Click *Actions*.
5. From the drop down menu select *Add Items to Bucket*.
+
image::media/copy-bucket-cat-2.png[Holdings View]
+
6. Enter a name for your bucket or select an existing bucket from the drop down menu.
7. Click *Add To New Bucket* or *Add To Selected Bucket*.
+
image::media/copy-bucket-cat-3.png[Item Bucket Interface]
+
8. Repeat steps 1 through 7 to add additional items.


=== From the Scan Item Interface ===

. Click on _Search_ -> _Search for Copies by Barcode_.
. Scan the barcode(s) of the item(s) you wish to add to the bucket.
. Make sure that the items you want to add are selected (i.e. that the checkbox on the left
side of the screen is checked).
. Right click on one of the selected items.
. Click _Add items to bucket_.
. Choose the existing bucket that you'd like to add to, or create a new bucket.


== Removing Copies from a Bucket ==

. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab.
+
image::media/copy-bucket-remove-1.png[Item Bucket Interface]
+
. Click on *Buckets*.
. From the drop down menu select the bucket containing the item(s) you would like to remove.
+
image::media/copy-bucket-remove-2.png[Item Bucket Interface]
+
. Use the check boxes to select the item(s) you wish to remove from the bucket.
. Click *Actions*.
. From the drop down menu select *Remove Selected Copies from Bucket*.
+
image::media/copy-bucket-remove-3.png[Item Bucket Interface]
+
. Your bucket will reload and the selected item(s) will no longer be in the bucket.

== Editing Copies in a Bucket ==

. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab.
+
image::media/copy-bucket-remove-1.png[Item Bucket Interface]
+
. Click on *Buckets*.
. 
From the drop down menu select the bucket containing the item(s) you would like to edit.
+
image::media/copy-bucket-remove-2.png[Item Bucket Interface]
+
. Use the check boxes to select the item(s) you wish to edit.
. Click *Actions*.
. From the drop down menu select *Edit Selected Copies*.
+
image::media/copy-bucket-edit-copy-1.png[Item Bucket Interface]
+
. The _Copy Editor_ will open in a new tab. Make your edits and then click *Save and Exit*.
+
image::media/copy-bucket-edit-copy-2.png[Item Bucket Interface]
+
. Your items have been updated.
+
image::media/copy-bucket-edit-copy-3.png[Item Bucket Interface]

== Deleting Copies from the Catalogue ==

. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab.
+
image::media/copy-bucket-remove-1.png[Item Bucket Interface]
+
. Click on *Buckets*.
. From the drop down menu select the bucket containing the item(s) you would like to delete from the catalogue.
+
image::media/copy-bucket-remove-2.png[Item Bucket Interface]
+
. Use the check boxes to select the item(s) you wish to delete.
. Click *Actions*.
. From the drop down menu select *Delete Selected Copies from Catalog*.
+
image::media/copy-bucket-delete-copy-1.png[Item Bucket Interface]
+
. On the confirmation pop up click *OK/Continue*.
+
image::media/copy-bucket-delete-copy-2.png[Item Bucket Interface]
+
. The items have been deleted from the catalogue.


== Placing Holds on Copies in a Bucket ==

. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab.
+
image::media/copy-bucket-remove-1.png[Item Bucket Interface]
+
. Click on *Buckets*.
. From the drop down menu select the bucket containing the item(s) you would like to place a hold on.
+
image::media/copy-bucket-remove-2.png[Item Bucket Interface]
+
. Use the check boxes to select the item(s) you wish to place a hold on.
. Click *Actions*.
. From the drop down menu select *Request Selected Copies*. 
-+ -image::media/copy-bucket-request-1.png[Item Bucket Interface] -+ -. Enter the barcode for the patron who the hold is for. By default the system enters the barcode of the account logged into the client. -+ -image::media/copy-bucket-request-2.png[Item Bucket Interface] -+ -. Select the correct _Pickup Library_. -. Select the correct _Hold Type_. (More explanation of the hold types needed here.) -. Click *OK*. -. The hold has been placed. - - -== Transferring Copies to Volumes == - -1. Retrieve the title through a catalogue search. -2. If it is not your default view click on the *Holdings View* tab. -+ -image::media/copy-bucket-cat-1.png[Holdings View] -+ -3. Use the check boxes to select the volume you would like to transfer the item(s) to. -4. Click *Actions*. -5. From the drop down menu select *Volume as Item Transfer Destination* -+ -image::media/copy-bucket-transfer-1.png[Holdings View] -+ -6. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab. -+ -image::media/copy-bucket-remove-1.png[Item Bucket Interface] -+ -7. Click on *Buckets*. -8. From the drop down menu select the bucket containing the item(s) you would like to transfer to the volume. -+ -image::media/copy-bucket-remove-2.png[Item Bucket Interface] -+ -9. Use the check boxes to select the item(s) you wish to transfer. -10. Click *Actions*. -11. From the drop down menu select *Transfer Selected Copies to Marked Volume*. -+ -image::media/copy-bucket-transfer-2.png[Item Bucket Interface] -+ -12. The item(s) is transferred. 
-+ -image::media/copy-bucket-transfer-3.png[Item Bucket Interface] - - - - - - diff --git a/docs-antora/modules/cataloging/pages/holdings_templates.adoc b/docs-antora/modules/cataloging/pages/holdings_templates.adoc deleted file mode 100644 index 377fe94c74..0000000000 --- a/docs-antora/modules/cataloging/pages/holdings_templates.adoc +++ /dev/null @@ -1,27 +0,0 @@ -= Working with holdings templates = -:toc: - -Setting up holdings templates can save a lot of time when creating items, and they -also improve consistency and accuracy. Any time you find yourself creating multiple -items with the same item-level data, you may wish to create a holdings template -to automate that process. - -== Creating a new holdings template == - -* Open _Administration_ -> _Local Administration_ -> _Holdings Template Editor_. -* Select the desired template attributes by moving through the fields in the -editor. The attributes you've changed will appear in green. If you want to -start this process over, you can click the _Clear_ button in the top right -corner of the screen. -* Type a name for your template into the box labeled _Template_ at the top -of the screen. -* Press the _Save_ button. - -== Using a holdings template == - -Whenever you see the holdings editor, you can use data from your templates. - -* In the _Template_ menu, choose the template you wish to use. -* Click _Apply_. -* Make any other necessary changes. - diff --git a/docs-antora/modules/cataloging/pages/introduction.adoc b/docs-antora/modules/cataloging/pages/introduction.adoc deleted file mode 100644 index e2feb18836..0000000000 --- a/docs-antora/modules/cataloging/pages/introduction.adoc +++ /dev/null @@ -1,3 +0,0 @@ -= Introduction = -:toc: -This part describes cataloging in Evergreen. 
diff --git a/docs-antora/modules/cataloging/pages/item_status.adoc b/docs-antora/modules/cataloging/pages/item_status.adoc deleted file mode 100644 index 2bebee8b76..0000000000 --- a/docs-antora/modules/cataloging/pages/item_status.adoc +++ /dev/null @@ -1,89 +0,0 @@
= Using the Item Status interface =
:toc:
indexterm:[copies]
indexterm:[items]

The Item Status interface is a powerful tool that can give you a lot of information
about specific items in your catalog.

== Accessing the Item Status interface ==

There are three ways to access the item status interface:

=== Through the Search menu ===

. Click *Search -> Search for Copies by Barcode*.
. Scan your barcode.

=== Through the Circulation menu ===

. Click *Circulation -> Item Status*.
. Scan your barcode.

=== From the OPAC view ===

. Click *Search -> Search the Catalog*.
. Find a bibliographic record that you are interested in.
. Make sure you are on the _OPAC View_ tab of that record.
. Locate the _BARCODE_ column in the holdings section.
. Click _view_ next to the barcode of the item you're interested
in.


== Specific fields ==

=== Active date ===
indexterm:[active date]
indexterm:[copies,activating]
indexterm:[items,activating]

This date is automatically added by Evergreen the first time
an item receives a status that is considered active (i.e. the
first date on which patrons could access the copy). While your
consortium may customize which statuses are considered active
and which are not, statuses like _Available_ and _On holds shelf_
are typically considered active, and statuses like _In process_ or
_On order_ are typically not.

== Printing spine labels ==

indexterm:[spine labels]
indexterm:[printing, spine labels]
indexterm:[item labels]
indexterm:[printing, item labels]
indexterm:[pocket labels]

Before printing spine labels, you will want to install Hatch
or turn off print headers and footers in your browser. 
- -include::admin:partial$turn-off-print-headers-firefox.adoc[] - -include::admin:partial$turn-off-print-headers-chrome.adoc[] - -=== Creating spine labels === - -To create spine and item labels for an item (or group of items): - -. Click *Circulation -> Item Status*. -. Scan your barcode(s). -. Select all the items you'd like to print labels for. -. Right-click on the items, or click the Actions drop-down menu. -. Under _Show_, click on _Print Labels_. -. Take a look at the Label Preview area. -. When you are satisfied with your labels, click the _Print_ button. - -== Request Items Action == - -To place requests from the Item Status interface, select one or more items in List View and select *Actions -> Request Items*. This action can also be invoked for a single item from Item Status Detail View. - -Starting in 3.4, this action has an Honor User Preferences checkbox which does the following for the selected user when checked: - -* Changes the Pickup Library selection to match the user's Default Hold Pickup Location -* Honor the user's Holds Notification settings (including Default Phone Number, etc.) - -Also beginning with 3.4, a Title Hold option has been added to the Hold Type menu. This will create one title-level hold request for each unique title associated with the items that were selected when Request Items was invoked. - -image::media/request_from_item_status.png[Request from Item Status] - -Success and Failure toasts have also been added based on what happens after the Request Items interface has closed. - diff --git a/docs-antora/modules/cataloging/pages/item_tags_cataloging.adoc b/docs-antora/modules/cataloging/pages/item_tags_cataloging.adoc deleted file mode 100644 index f44c553ef7..0000000000 --- a/docs-antora/modules/cataloging/pages/item_tags_cataloging.adoc +++ /dev/null @@ -1,89 +0,0 @@ -= Item Tags = -:toc: - -indexterm:[copy tags] - -Item Tags allow staff to apply custom, pre-defined labels or tags to items. 
Item tags are visible in the public catalog and are searchable in both the staff client and public catalog based on configuration. This feature was designed to be used for Digital Bookplates to attach donation or memorial information to items, but may be used for broader purposes to tag items.

Item tags can be created ahead of time in the Administration module (see the Administration section of this documentation for more information) and then applied to items, or they can be created on the fly during the cataloging process.

== Adding Existing Item Tags to Items ==

Item Tags can be added to existing items or to new items as they are cataloged. To add an item tag:

. In the _Holdings Editor_, click on *Item Tags*. A dialog box called _Manage Item Tags_ will appear.

image::media/item_tag_button.png[Location of Item Tag Button]

. Select the *Tag Type* from the drop down menu and start typing in the Tag field to bring up tag suggestions from the existing item tags. Select the tag and click *Add Tag*, then click *OK*.
.. If you are cataloging a new item, make any other changes to the item record.
. Click *Save & Exit*. The item tag will now appear in the catalog.

image::media/manage_item_tags.png[Assigning an Item Tag]

image::media/copytags7.PNG[Item Tags in the OPAC]

== Creating and Applying an Item Tag During Cataloging ==

Item tags can be created in the Holdings Editor on the fly while cataloging or viewing an item:

. In the _Holdings Editor_, click on *Item Tags*. A dialog box called _Manage Item Tags_ will appear.
. Select the *Tag Type* from the drop down menu and type in the new Tag you want to apply to the item. Click *Add Tag*, then click *OK*. The new tag will be created and attached to the item. It will be owned by the organization unit your workstation is registered to. The tag can be modified under *Admin->Local Administration->Item Tags*.


== Removing Item Tags from Items ==

To remove an item tag from an item:

. 
In the Holdings Editor, click on *Item Tags*. A dialog box called _Manage Item Tags_ will appear. -. Click *Remove* next to the tag you would like to remove, and click *OK*. -. Click *Save & Exit*. The item tag will now be removed from the catalog. - -image::media/remove_item_tag.png[Removing an Item Tag] - - -== Adding Item Tags to Items in Batch == - -Item tags can be added to multiple items in batch using _Item Buckets_. After adding the items to a Item Bucket: - -. Go to *Cataloging->Item Buckets->Bucket View* and select the bucket from the Buckets drop down menu. -. Select the items to which you want to add the item tag and go to *Actions->Apply Tags* or right-click and select *Apply Tags*. The _Apply Item Tags_ dialog box will appear. -. Select the *Tag Type* and enter the *Tag*. Click *Add Tag*, then click *OK*. The item tag will now be attached to the items. - -image::media/copytags9.PNG[Item Bucket View] - -NOTE: It is not possible to remove tags using the Item Bucket interface. - -== Searching Item Tags == - -Item Tags can be searched in the public catalog if searching has been enabled via Library Settings. Item Tags can be searched in the Basic and Advanced Search interfaces by selecting Digital Bookplate as the search field. Specific item tags can also be searched using a Keyword search and a specific search syntax. - -=== Digital Bookplate Search Field === - -*Basic Search* - -image::media/copytags10.png[Digital Bookplates Search Field Location in Basic Search] - -*Advanced Search* - -image::media/copytags11.png[Digital Bookplates Search Field Location in Advanced Search] - - -=== Keyword Search === - -Item Tags can also be searched by using a Keyword search in the Basic and Advanced search interfaces. 
Searches need to be constructed using the following syntax:

----
copy_tag(item tag type code, search term)
----

For example:

----
copy_tag(bookplate, friends of the library)
----

It is also possible to conduct a wildcard search across all item tag types:

----
copy_tag(*, smith)
----

diff --git a/docs-antora/modules/cataloging/pages/link_checker.adoc b/docs-antora/modules/cataloging/pages/link_checker.adoc deleted file mode 100644 index e229c988bd..0000000000 --- a/docs-antora/modules/cataloging/pages/link_checker.adoc +++ /dev/null @@ -1,78 +0,0 @@
= Link Checker =
:toc:

The Link Checker enables you to verify the validity of URLs stored in MARC records.
The ability to verify URLs would benefit locations with large electronic resource collections.

== Search for URLs ==

Search for MARC records that contain URLs that you want to verify.

. Click *Cataloging* -> *Link Checker*.
. Click *New Link Checker Session*.
. Create a session name. Note that each session must have a unique name.
. Select a search scope from the drop down menu. Records that would be retrieved by searching
Example Branch 1 (BR1) in an OPAC search would also be retrieved here. For example,
a record that describes an electronic resource with a URL in the 856 $u and an org unit code,
such as BR1, in the 856 $9, would be retrieved by a search of relevant keywords. Also, records
that contain a URL without the $9 subfield, but also have physical copies at BR1, would be
retrieved. Note that you can skip this step if you enter the org unit code of the location
that you want to search in the *Search* field.
. Enter search terms to retrieve records with URLs that you want to verify. You can also add
a location filter, such as BR1.
. You may further limit your search by selecting a saved search. Saved searches are filters made
up of specific criteria, such as shelving location or audience. Adding a saved search to your
keyword search will narrow your search for records with URLs. 
This step is optional.
. Enter tags and subfields that contain URLs in the appropriate boxes. Click *Add* after you enter
the data in the fields. You can add multiple tags and subfields by repeating this process. Evergreen
will search for records that match your search terms, and then, from the set that it retrieves, it
will extract any URLs from all of the tag/subfield locations you have specified for the session.
. To view and manually verify the URLs that Evergreen retrieves, leave the *Process Immediately* checkbox
unchecked. If you want Evergreen to automatically verify the URLs that it retrieves, then check the box to *Process Immediately*.
. Click *Begin* to process your search.

image::media/Link_Checker1.jpg[Link_Checker1]


== View Your Results ==

If you do not click *Process Immediately*, then you must select the links that you want to verify, and click
*Verify Selected URLs*. If you click *Process Immediately*, then you skip this step, and Evergreen
jumps directly to the results of the verification attempts as seen in the next step.

image::media/Link_Checker2.jpg[Link_Checker2]

Evergreen displays the results of the verification attempts, including the tags that you searched,
the URLs that Evergreen retrieved, the Bib Record ID, the request and result time, and the result code and text.

image::media/Link_Checker6.jpg[Link_Checker6]

== Manage Your Sessions ==

=== Edit Columns ===

You can use the *Column Picker* to add and remove columns on any of the *Link Checker* interfaces.
To access the *Column Picker*, right click on any of the column headings. The columns are saved to your user account.


=== Clone Sessions ===

You can clone sessions that you run frequently or that have frequently-used parameters that
need only minor adjustments to create new searches. To clone a session:

. Click *Cataloging* -> *Link Checker*.
. In the Session ID column, click *Clone*. 
A copy of the parameters of that search will appear. - - -=== View Verification Attempts === - -To view the results of a verification attempt after you have closed the session, click *Cataloging* -> *Link Checker*. -Your link checker sessions appear in a list. To view the results of a session, click the *Open* link in the Session ID column. - -Click *Filter* to refine the results on this page. To add a filter: - -. Select a column from the first drop down menu. -. Select an operator from the second drop down menu. -. A third field will appear. Enter the appropriate text. -. Click *Apply* to apply the filter to your current results. Click *Save Filters* to save the filter to your user account for later use. - diff --git a/docs-antora/modules/cataloging/pages/monograph_parts.adoc b/docs-antora/modules/cataloging/pages/monograph_parts.adoc deleted file mode 100644 index 1b341c9b94..0000000000 --- a/docs-antora/modules/cataloging/pages/monograph_parts.adoc +++ /dev/null @@ -1,97 +0,0 @@ -= Monograph Parts = -:toc: - -*Monograph Parts* enables you to differentiate between parts of -monographs or other multi-part items. This feature enables catalogers -to describe items more precisely by labeling the parts of an item. For -example, catalogers might identify the parts of a monograph or the discs -of a DVD set. This feature also allows patrons more flexibility when -placing holds on multi-part items. A patron could place a hold on a -specific disc of a DVD set if they want to access a specific season or -episode rather than an entire series. - -Four new permissions are used by this functionality: - -* CREATE_MONOGRAPH_PART -* UPDATE_MONOGRAPH_PART -* DELETE_MONOGRAPH_PART -* MAP_MONOGRAPH_PART - -These permissions should be assigned at the consortial level to those -groups or users that will make use of the features described below. - - -== Add a Monograph Part to an Existing Record == - -To add a monograph part to an existing record in the catalog: - -. Retrieve a record. 

. Click the *Manage Parts* tab.
+
image::media/manage_parts_menu.jpg[Menu: Manage Parts]

. Click the *New Monograph Part* button.

. Enter the *label* that you want to appear to the user in the catalog,
and click *Save*. This will create a list of monograph parts from which
you can choose when you create holdings.
+
image::media/monograph_parts2.jpg[monograph_parts2]

. Add holdings. To add holdings to your workstation
library, click the *Add Holdings* button in the *Record Summary* area above the tabs.
+
To add holdings to your workstation library or other libraries,
click the *Holdings View* tab, right-click the appropriate
library, and choose *Add -> Call numbers and Items*.
+
image::media/monograph_parts3.jpg[monograph_parts3]

. The Holdings Editor opens. Enter the number of call numbers
that you want to add to the catalog and the call number description.

. Enter the number of items and barcode(s) of each item.

. Choose the part label from the *Part* drop down menu.
+
image::media/monograph_parts4.jpg[monograph_parts4]

. Apply a template to the items, or edit fields in the *Working Items* section below.
+
image::media/monograph_parts5.jpg[monograph_parts5]

. Click *Store Selected* when those items are ready.

. Review your completed items on the "Completed Items" tab.

. When all items have been stored and reviewed, click "Save & Exit".
+
NOTE: If you are only making one set of changes, you can simply click
*Save & Exit* and skip the *Store Selected* stage.

. The *Holdings View* tab now shows the new part information. These fields
also appear in the OPAC View.
+
image::media/manage_parts_opac.png[Catalog Record showing items with part details]

== Monograph Part Merging ==

The monograph part list for a bibliographic record may, over time, diverge from
the prescribed format, resulting in multiple labels for what are essentially the
same item. 
For instance, ++Vol.{nbsp}1++ may have variants -like ++V.1++, ++Vol{nbsp}1++, or ++{nbsp}Vol.{nbsp}1++ (leading -space). Merging parts will allow cataloging staff to collapse the variants into -one value. - -In the Monograph Parts display: - -. Click the checkbox for all items you wish to merge including the one you wish -to prevail when done. -. Click on the ``Merge Selected'' button. A pop-up window will list the selected -items in a monospaced font, with blanks represented by a middle-dot character -for more visibility. -. Click on the item you wish to prevail. - -The undesired part labels will be deleted, and any items that previously used -those labels will now use the prevailing label diff --git a/docs-antora/modules/cataloging/pages/overlay_record_3950_import.adoc b/docs-antora/modules/cataloging/pages/overlay_record_3950_import.adoc deleted file mode 100644 index 44a384de1b..0000000000 --- a/docs-antora/modules/cataloging/pages/overlay_record_3950_import.adoc +++ /dev/null @@ -1,55 +0,0 @@ -= Overlay Existing Catalog Record via Z39.50 Import = -:toc: - -This feature enables you to replace a catalog record with a record obtained through a Z39.50 search. No new permissions or administrative settings are needed to use this feature. - -*To Overlay an Existing Record via Z39.50 Import:* - -1) Click *Cataloging -> Import Record from Z39.50* - -2) Select at least one *Service* in addition to the *Local Catalog* in the *Service and Credentials* window in the top right panel. - -3) Enter search terms in the *Query* window in the top left panel. - -4) Click *Search*. - -image::media/Overlay_Existing_Record_via_Z39_50_Import1.jpg[] - -5) The results will appear in the lower window. - -6) Select the record in the local catalog that you wish to overlay. - -7) Click *Mark Local Result as Overlay Target* - - -image::media/Overlay_Existing_Record_via_Z39_50_Import2.jpg[] - - -8) A confirmation message appears. Click *OK*. 

9) Select the record that you want to use to replace the existing catalog record.

10) Click *Overlay.*


image::media/Overlay_Existing_Record_via_Z39_50_Import3.jpg[]


11) The record that you selected will open in the MARC Editor. Make any desired changes to the record, and click *Overlay Record*.

image::media/Overlay_Existing_Record_via_Z39_50_Import4.jpg[]


12) The catalog record that you want to overlay will appear in a new window. Review the MARC record to verify that you are overlaying the correct catalog record.

13) If the correct record appears, click *Overlay*.


image::media/Overlay_Existing_Record_via_Z39_50_Import5.jpg[]

14) A confirmation message will appear to confirm that you have overlaid the record. Click *OK*.

15) The screen will refresh in the OPAC View to show that the record has been overlaid.


image::media/Overlay_Existing_Record_via_Z39_50_Import6.jpg[]
diff --git a/docs-antora/modules/cataloging/pages/record_buckets.adoc b/docs-antora/modules/cataloging/pages/record_buckets.adoc deleted file mode 100644 index 8b46717cd6..0000000000 --- a/docs-antora/modules/cataloging/pages/record_buckets.adoc +++ /dev/null @@ -1,129 +0,0 @@
= Record Buckets =
:toc:

== Introduction ==

Record buckets are containers for MARC records. Once records are in a bucket, you can take
various types of actions, including:

* Editing all the records at once using the MARC Batch Editor.
* Deleting all the records in the bucket.
* Merging all the records in the bucket.
* Downloading the MARC files for all records in the bucket, so you can edit them in another
program like http://marcedit.reeset.net[MARCEdit].

== Creating Record Buckets ==

. Click on _Cataloging_ -> _Record Buckets_.
. On the _Buckets_ menu, click _New Bucket_.
. Give the bucket a name and (optionally) a description.

== Adding Records to a Bucket ==

=== From the Record Bucket Interface ===
. Click on _Cataloging_ -> _Record Buckets_.
. 
On the _Buckets_ menu, choose the bucket that you'd like to add records to. -. Go to the _Record Query_ tab. -. Enter your query into the _Record Query_ box. -. Select the records you would like to add. -. On the _Actions_ menu, click _Add to Bucket_. - -.Advanced record queries -**** - -The _Record Query_ tab allows some advanced search functionality through the use of search keys, -which can be combined with one another. - -.Record Bucket search keys -[options="header"] -|=================== -|Search key |Abbreviated version |Usage example |Description -|author: |au: |au:Anzaldua |An author, creator, or contributor -|available: | |available:yes |Limits to available items. There is no way to limit to _unavailable_ items. -|keyword: |kw: |kw:Schirmer |A keyword -|lang: | |lang:Spanish |A language -|series: |se: |se:avatar last airbender |A series title -|site: | |site:LIB3 |The shortname of the library/system/consortium you'd like to search -|subject: |su: |su:open source software |A subject -|subject\|geographic: | |subject\|geographic:Uruguay |A geographic subject -|title: |ti: |ti:Harry Potter |Title proper or alternate title -|title\|proper: | |title\|proper:Harry Potter |Title proper taken from 245 -|=================== - -You can combine these in the same query, e.g. `ti:borderlands au:anzaldua available:yes`. However, with the exception of the _lang_ search key, -you should not repeat the same search key within a query. - -**** - -[TIP] -You can use the same boolean operator symbols that are used in the OPAC (_||_ for boolean OR, _&&_ for boolean AND, and _-_ for boolean NOT). - - -== Bibliographic Record Merging and Overlay == - -Catalogers can merge or overlay records in record buckets or using records obtained from a Z39.50 service. - -=== Merge Records in Record Buckets === - -1. Click *Cataloging>Record Buckets*. -2. Create and/or select a record bucket. -3. Select the records that you want to merge, and click *Actions>Merge Selected Records*.
- -image::media/marcoverlay1.png[] - -4. The Merge Selected Records interface appears. -5. The records to be merged appear on the right side of the screen. Click *Use as Lead Record* to select a lead record from those that need to be merged. - -image::media/marcoverlay2.png[] - -6. Select a merge profile from the drop down box. - -image::media/marcoverlay3.png[] - -7. After you select the profile, you can preview the changes that will be made to the record. - -image::media/marcoverlay4.png[] - -8. You can change the merge profile at any time; after doing so, the result of the merge will be recalculated. The merge result will also be recalculated after editing the lead record, changing which record is to be used as lead, or removing a record from consideration. -9. When you are satisfied that you have selected the correct merge profile, click the *Merge* button in the bottom right corner. -10. Note that merge profiles that contain a preserve field specification are not available to be chosen in this interface, as they would have the effect of reversing which bibliographic record is considered the target of the merge. - -=== Track Record Merges === - -When two or more bib records are merged in a record bucket, all records involved are stamped with a new merge_date value. For any bib record, this field indicates the last time it was involved in a merge. At the same time, all subordinate records (i.e., those deleted as a product of the merge) are stamped with a merged_to value indicating which bib record the source record was merged with. - -In the browser client bib record display, a warning alert now appears along the top of the page (below the Deleted alert) indicating when a record was merged and which record it was merged with, rendered as a link to the target record. - -image::media/merge_tracking.png[merge message with date] - -=== Merge Records Using Z39.50 === - -1. Search for a record in the catalog that you want to overlay. -2.
Select the record, and click *MARC View*. -3. Select *Mark for: Overlay Target*. - -image::media/marcoverlay5.png[] - -4. Click *Cataloging>Import Record from Z39.50*. -5. Search for the lead record that you want to overlay within the Z39.50 interface. -6. Select the desired record, and click *Overlay*. - -image::media/marcoverlay6.png[] - -7. The record that you have targeted to be overlaid, and the new record, appear side by side. - -image::media/marcoverlay7.png[] - -8. You can edit the lead record before you overlay the target. To edit the record, click the *Edit Z39.50 Record* button above the lead record. -9. The MARC editor will appear. You can make your changes in the MARC editor, or you can select the *Flat Text Editor* to make changes. After you have edited the record, click *Modify* in the top right corner, and then *Use Edits* in the bottom right corner. Note that the record you are editing is the version from the Z39.50 server, not including any changes that would be made as a result of applying the selected merge profile. -10. You will return to the side-by-side comparison of the records and can then proceed with the overlay. -11. Once you are satisfied with the record that you want to overlay, select a merge profile from the drop down box, *Choose merge profile*. -12. Click *Overlay*. The overlay will occur, and you will be taken back to the Z39.50 interface. -13. Note that the staff client remembers the last merge overlay profile that you selected, so the next time that you open the interface, it will default to that profile. Simply change the profile to make a different selection. -14. Also note that when the merge profile is applied, the Z39.50 record acts as the target of the merge. For example, if your merge profile adds 650 fields, those 650 fields are brought over from the record that already exists in the Evergreen database (i.e., the one that you are overlaying from Z39.50). -15.
Also note that merge profiles that contain a preserve field specification are not available to be chosen in this interface, as they would have the effect of reversing which bibliographic record is considered the target of the merge. - -=== New Admin Settings === - -1. Go to *Admin>Local Administration>Library Settings Editor>Upload Default Merge Profile (Z39.50 and Record Buckets)*. -2. Select a default merge profile, and click *Update Setting*. The merge profiles that appear in this drop down box are those that are created in *MARC Batch Import/Export*. Note that catalogers will only see merge profiles that are allowed by their org unit and permissions. diff --git a/docs-antora/modules/cataloging/pages/specific_variable_fields.adoc b/docs-antora/modules/cataloging/pages/specific_variable_fields.adoc deleted file mode 100644 index d059fec61b..0000000000 --- a/docs-antora/modules/cataloging/pages/specific_variable_fields.adoc +++ /dev/null @@ -1,7 +0,0 @@ -= Specific fields = -:toc: - -== 264 == - -The Public Catalog displays tag 264 information for Publisher, Producer, Distributor, Manufacturer, -and Copyright within a full bib record's summary. diff --git a/docs-antora/modules/cataloging/pages/volcopy_editor.adoc b/docs-antora/modules/cataloging/pages/volcopy_editor.adoc deleted file mode 100644 index 261833335f..0000000000 --- a/docs-antora/modules/cataloging/pages/volcopy_editor.adoc +++ /dev/null @@ -1,82 +0,0 @@ -= Using the Holdings Editor = -:toc: -indexterm:[copies,editing] -indexterm:[items,editing] -indexterm:[call numbers,editing] -indexterm:[volumes,editing] -indexterm:[holdings editor] -[[holdings_editor]] - -The Holdings Editor is the tool where you can edit all holdings data. - -== Specific fields == - -=== Acquisitions Cost === -indexterm:[acquisitions cost] - -This field is populated with the invoiced cost of the originating acquisition. -This field will be empty until its originating acquisition is connected to an -invoice.
- -=== Item Number === -indexterm:[copy number] -indexterm:[item number] - -If you have multiple copies of the same item, you may want to -assign them item numbers to help distinguish them. If you do -not include an item number in this field, Evergreen will assign your -item a default item number of 1. - -== Accessing the Holdings Editor by barcode == - -. Click *Search -> Search for Items by Barcode*. -. Scan your barcode. -. Right-click on the entry in the grid. -. Click *Edit -> Call Numbers and Items* on the actions menu that appears. - -== Accessing the Holdings Editor from a catalog record == - -The bibliographic record detail page displays library holdings, including the call number, shelving location, and item barcode. Within the -staff client, the holdings list displays a column next to the item barcode(s) containing two links, *view* and *edit*. - -image::media/copy_edit_link_1.jpg[Copy Edit Link] - -Clicking on the *view* link opens the *Item Status* screen for that specific item. - -Clicking on the *edit* link opens the *Holdings Editor* screen for that specific item. - -The *edit* link will only be exposed next to copies when the user has the *UPDATE_COPY* permission at the copy's owning or circulating library. - -== Hiding Fields in the Holdings Editor == - - -A user may hide specific fields in the holdings editor if these fields are not used for cataloging in their organization. Hiding fields that are not used by your organization helps to reduce confusion among staff and also declutters the holdings editor screen. - -To hide one or more fields from the holdings editor: - -. Retrieve the record. -+ -[NOTE] -=================================================================================== -You can retrieve records in many ways, including: - -* If you know its database ID, enter it into Cataloging > Retrieve Bib Record by ID. - -* If you know its control number, enter it into Cataloging > Retrieve Bib Record by TCN. - -* Searching in the catalog.
- -* Clicking on a link from the Acquisitions or Serials modules. -=================================================================================== -+ -. Select the *Add Holdings* button. The *Holdings Editor* will display. - -. In the Holdings Editor, select the *Defaults* tab. -+ -image::media/Holdings_Editor_Defaults_Tab.png[Holdings editor defaults tab] -+ -. On the Defaults tab, uncheck the boxes for the field(s) that you wish to hide. It is not necessary to save this screen; changes are saved automatically. -+ -image::media/Holdings_Editor_Hide_Display_Defaults.png[Holdings editor display defaults with deselected fields] -+ -. Select the *Edit* tab; the deselected fields no longer appear on the holdings editor. diff --git a/docs-antora/modules/cataloging/pages/z39.50_search_enhancements.adoc b/docs-antora/modules/cataloging/pages/z39.50_search_enhancements.adoc deleted file mode 100644 index 97a139c1e4..0000000000 --- a/docs-antora/modules/cataloging/pages/z39.50_search_enhancements.adoc +++ /dev/null @@ -1,102 +0,0 @@ -= Z39.50 Search Enhancements = -:toc: - -*Abstract* - -In Evergreen version 2.5, you will be able to search multiple Z39.50 sources simultaneously from record buckets. Using this feature, you can match records from Z39.50 sources to catalog records in your bucket and import the Z39.50 records via Vandelay. - - -*Administration* - -The following administrative interfaces will enable you to configure Z39.50 search parameters. - - - -*Z39.50 Index Field Maps* - -Click *Administration* -> *Server Administration* -> *Z39.50 Index Field Maps* to map bib record indexes (metabib fields and record attributes) in your catalog records to Z39.50 search attributes. Metabib fields are typically free-form fields found in the body of a catalog record, while record attributes typically have only one value and are often found in the leader. - -You can map a metabib field or a record attribute to a Z39.50 attribute or a Z39.50 attribute type.
To map a specific field in your catalog record to a specific field in a chosen Z39.50 source, you should map to a Z39.50 attribute. For example, if you want the Personal Author in your catalog record to map to the Author field when searching the Library of Congress, then you should do the following: - -. Click *New* or double-click to edit an existing map. - -. Select the *Metabib Field* from the drop down menu. - -. Select the appropriate source and field from the *Z39.50 Attribute* drop down menu. - -. Click *Save*. - - -Alternatively, if you want the Personal Author in your catalog record to map to the generic author field of any Z39.50 source, then you should do the following: - -. Click *New* or double-click to edit an existing map. - -. Select the *Metabib Field* from the drop down menu. - -. Select the appropriate heading from the *Z39.50 Attribute Type* drop down menu. - -. Click *Save*. - - - -*Z39.50 servers* - -Click *Admin* -> *Server Admin* -> *Z39.50 Servers* to input your Z39.50 server. Click the hyperlinked name of any server to view the Z39.50 search attribute types and settings. These settings describe how the search values (from a metabib field or record attribute) are translated into Z39.50 searches. - - - - -*Apply Quality Sets to Z39.50 Sources* - -From this interface, you can rank the quality of incoming search results according to the match set that you have established and their Z39.50 point of origin. By applying a quality score, you tell Evergreen to merge the highest-quality records into the catalog. - -. Click *Cataloging* -> *MARC Batch Import/Export*. - -. Click *Record Match Sets*. Match Sets specify the MARC attributes, tags, and subfields that you want Evergreen to use to identify matches between catalog and incoming records. - -. Rank the quality of the records from Z39.50 sources by adding quality metrics for the match set.
Click *MARC Tag and Subfield*, enter the 901z tag and subfield, specify the Z39.50 source, and enter a quality metric. Source quality increases as the numeric quality increases. - -image::media/Locate_Z39_50_Matches4.jpg[Locate_Z39.50_Matches4] - - - -*Org Unit Settings* - -Org Unit settings can be set for your local branch, your system, or your consortium. To access these settings, click *Administration* -> *Local Administration* -> *Library Settings Editor* -> *Maximum Parallel Z39.50 Batch Searches*. - -Two new settings control the Z39.50 search enhancements. - -. Maximum Parallel Z39.50 Batch Searches - This setting enables you to set the maximum number of Z39.50 searches that can be in-flight at any given time when performing batch Z39.50 searches. The default value is five (5), which means that Evergreen will perform 5 searches at a given time regardless of the number of sources selected. The searches will be divided between the sources selected. Thus, if you maintain this default and perform a search using two Z39.50 sources, Evergreen will conduct five searches, shared between the two sources. - -. Maximum Z39.50 Batch Search Results - This setting enables you to set the maximum number of search results to retrieve and queue for each record and Z39.50 source during batch Z39.50 searches. The default value is five (5). - - - -*Matching Records in Buckets with Records from Z39.50 Sources* - -. Add records to a bucket. - -. Click *Bucket Actions* -> *Locate Z39.50 Matches*. A pop-up window will appear. - -. Select a *Z39.50 Server(s)*. - -. Select a *Z39.50 Search Index(es)*. Note that selecting multiple checkboxes will AND the search indexes. - -. Select a Vandelay queue from the drop down menu to which you will add your results, or create a queue by typing its name in the empty field. - -. Select a *Match Set*. The Match Set is configured in Vandelay and, in this instance, will only be used to compare the Z39.50 results with the records in your bucket. -.
Click *Perform Search*. - -image::media/Locate_Z39_50_Matches1.jpg[Locate_Z39.50_Matches1] - -. Status information will appear, including the number of records in the bucket that were searched, the matches that were found, and the progress of the search. When the search is complete, click *Open Queue*. - -image::media/Locate_Z39_50_Matches2.jpg[Locate_Z39.50_Matches2] - -. The Vandelay Queue will display. Matching records are identified in the *Matches* column. From this interface, import records according to your normal procedure. To merge the incoming records with the catalog records, select either merge option from the drop down menu, click *Merge on Best Match*, and then click *Import*. - -image::media/Locate_Z39_50_Matches3.jpg[Locate_Z39.50_Matches3] - -. The records from the Z39.50 search will merge with the catalog records. NOTE: A new column has been added to this interface to identify the Z39.50 source. When records are imported to the Vandelay queue via a record bucket, Evergreen tags the Z39.50 source and enters the data into the $901z.
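As an aside, the sharing of batch searches among sources described under _Maximum Parallel Z39.50 Batch Searches_ above (five searches divided between the selected sources) can be sketched as follows. This is an illustrative model only, not Evergreen source code; the function name and the round-robin split are assumptions.

```python
# Illustrative sketch only -- not Evergreen source code. Models how a cap of
# "Maximum Parallel Z39.50 Batch Searches" (default 5) could be shared among
# the selected Z39.50 sources; the round-robin division is an assumption.

def divide_searches(max_parallel, sources):
    """Distribute up to max_parallel concurrent search slots across sources."""
    slots = {source: 0 for source in sources}
    for i in range(max_parallel):
        # Hand out slots one at a time, cycling through the selected sources.
        slots[sources[i % len(sources)]] += 1
    return slots

# With the default of five and two sources, five searches are shared
# between the two sources:
print(divide_searches(5, ["source_a", "source_b"]))  # {'source_a': 3, 'source_b': 2}
```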
- diff --git a/docs-antora/modules/circulation/_attributes.adoc b/docs-antora/modules/circulation/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/circulation/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/Display_Hold_Types_on_Pull_Lists1.jpg b/docs-antora/modules/circulation/assets/images/advanced_holds/Display_Hold_Types_on_Pull_Lists1.jpg deleted file mode 100644 index 26d7952b47..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/Display_Hold_Types_on_Pull_Lists1.jpg and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location1.png b/docs-antora/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location1.png deleted file mode 100644 index a76b1b2a99..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location2.jpg b/docs-antora/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location2.jpg deleted file mode 100644 index 605dd896e5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location2.jpg and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-1.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-1.png deleted file mode 100644 index 34c2a0ad55..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-1.png and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-2.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-2.png deleted file mode 100644 index 9fb2a77726..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-3.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-3.png deleted file mode 100644 index 1f75803846..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-3.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-4.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-4.png deleted file mode 100644 index eef6c646e5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-clearing-4.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-1.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-1.png deleted file mode 100644 index 76eb32fc8d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-10.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-10.JPG deleted file mode 100644 index f2fffc8136..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-10.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-10.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-10.png deleted file mode 100644 index fab2a8be54..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-10.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-11.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-11.JPG deleted file mode 100644 index 8479b79cda..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-11.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-11.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-11.png deleted file mode 100644 index 89beec91a0..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-11.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-12.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-12.JPG deleted file mode 100644 index 130c370ae1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-12.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-12.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-12.png deleted file mode 100644 index 9265a57634..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-12.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-13.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-13.JPG deleted file mode 100644 index 030ceee289..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-13.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-13.png 
b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-13.png deleted file mode 100644 index 2765f756e3..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-13.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-14.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-14.JPG deleted file mode 100644 index 78fde73166..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-14.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-14.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-14.png deleted file mode 100644 index ee8649480d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-14.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-15.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-15.JPG deleted file mode 100644 index e7e5865f3c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-15.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-15.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-15.png deleted file mode 100644 index 20dd0b454b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-15.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-16.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-16.png deleted file mode 100644 index 2db6c18230..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-16.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-17.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-17.png deleted file mode 100644 index 5d0e175b55..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-17.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-18.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-18.png deleted file mode 100644 index 2ffe43a89d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-18.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-19.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-19.png deleted file mode 100644 index c74c47ed6e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-19.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-2.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-2.JPG deleted file mode 100644 index 382d799e54..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-2.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-2.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-2.png deleted file mode 100644 index 4b7af26bcb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-3.png 
b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-3.png deleted file mode 100644 index 16ca2674f6..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-3.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-4.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-4.JPG deleted file mode 100644 index e0fdf1fedb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-4.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-4.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-4.png deleted file mode 100644 index 34e758d6eb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-4.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-5.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-5.png deleted file mode 100644 index 96b836f3fe..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-5.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-5_and_6.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-5_and_6.JPG deleted file mode 100644 index 4522ab6fe0..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-5_and_6.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-6.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-6.png deleted file mode 100644 index f7ce7ebd7f..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-6.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-7.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-7.JPG deleted file mode 100644 index e8237f3886..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-7.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-7.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-7.png deleted file mode 100644 index 12f7ffef38..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-7.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-8.JPG b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-8.JPG deleted file mode 100644 index 738cfd41e0..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-8.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-8.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-8.png deleted file mode 100644 index bbbff642ba..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-8.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-9.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-9.png deleted file mode 100644 index 006b105c88..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-managing-9.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-1.png 
b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-1.png deleted file mode 100644 index 6d6b147fc5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-2.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-2.png deleted file mode 100644 index c64fcdb83a..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-3.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-3.png deleted file mode 100644 index bfb8ee201d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-3.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-4.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-4.png deleted file mode 100644 index fd9201b872..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-notifications-4.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-1.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-1.png deleted file mode 100644 index 5f60630e8b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-10.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-10.png deleted file mode 100644 index 46fe88a112..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-10.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-11.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-11.png deleted file mode 100644 index cd1a10201e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-11.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-2.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-2.png deleted file mode 100644 index 71daec5c6c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-3.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-3.png deleted file mode 100644 index 16ca2674f6..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-3.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-4.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-4.png deleted file mode 100644 index 7b0c2e7423..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-4.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-5.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-5.png deleted file mode 100644 index 7ef50cb46f..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-5.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-6.png 
b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-6.png deleted file mode 100644 index d8e7c3d5f9..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-6.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-7.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-7.png deleted file mode 100644 index aa0ec912a7..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-7.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-8.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-8.png deleted file mode 100644 index c354a9d4d9..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-8.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-9.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-9.png deleted file mode 100644 index 41fe57efda..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-placing-9.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-1.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-1.png deleted file mode 100644 index 1f325e2713..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-2.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-2.png deleted file mode 100644 index c4e385115d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-2.png and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-3.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-3.png deleted file mode 100644 index 494febb84e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-3.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-4.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-4.png deleted file mode 100644 index 5db02f7f17..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-4.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-5.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-5.png deleted file mode 100644 index 74829a091b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-5.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-6.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-6.png deleted file mode 100644 index b022065760..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-6.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-7.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-7.png deleted file mode 100644 index 37651dceb9..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-7.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-9.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-9.png deleted file mode 100644 index 365db4eb99..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/advanced_holds/holds-pull-9.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_options.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_options.png deleted file mode 100644 index cd79a155f1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_options.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_options_adv.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_options_adv.png deleted file mode 100644 index 2f6dbac2e6..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_options_adv.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_searchresults.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_searchresults.png deleted file mode 100644 index 5bbae023ba..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_searchresults.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_success.png b/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_success.png deleted file mode 100644 index 0db924f696..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/holds_title_success.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/advanced_holds/place-another-hold-1.png b/docs-antora/modules/circulation/assets/images/advanced_holds/place-another-hold-1.png deleted file mode 100644 index 130ec10964..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/advanced_holds/place-another-hold-1.png and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/Check_In-Cancel_Transit.png b/docs-antora/modules/circulation/assets/images/media/Check_In-Cancel_Transit.png deleted file mode 100644 index cf9cdfa404..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/Check_In-Cancel_Transit.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/Display_Hold_Types_on_Pull_Lists1.jpg b/docs-antora/modules/circulation/assets/images/media/Display_Hold_Types_on_Pull_Lists1.jpg deleted file mode 100644 index 26d7952b47..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/Display_Hold_Types_on_Pull_Lists1.jpg and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/PlaceHold-0.JPG b/docs-antora/modules/circulation/assets/images/media/PlaceHold-0.JPG deleted file mode 100644 index dc11dbd92d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/PlaceHold-0.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/PlaceHold-1.JPG b/docs-antora/modules/circulation/assets/images/media/PlaceHold-1.JPG deleted file mode 100644 index efe099354d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/PlaceHold-1.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/PlaceHold-2.JPG b/docs-antora/modules/circulation/assets/images/media/PlaceHold-2.JPG deleted file mode 100644 index 44d70374de..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/PlaceHold-2.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/PlaceHold-3.JPG b/docs-antora/modules/circulation/assets/images/media/PlaceHold-3.JPG deleted file mode 100644 index a49ffc2ba1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/PlaceHold-3.JPG and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/PlaceHold-4.JPG b/docs-antora/modules/circulation/assets/images/media/PlaceHold-4.JPG deleted file mode 100644 index 23d63a69bd..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/PlaceHold-4.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/PlaceHold-5.JPG b/docs-antora/modules/circulation/assets/images/media/PlaceHold-5.JPG deleted file mode 100644 index f1a48d22fc..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/PlaceHold-5.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/PlaceHold-6.JPG b/docs-antora/modules/circulation/assets/images/media/PlaceHold-6.JPG deleted file mode 100644 index a5ae4f55fe..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/PlaceHold-6.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices1.jpg b/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices1.jpg deleted file mode 100644 index 07392f60e0..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices1.jpg and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices2.jpg b/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices2.jpg deleted file mode 100644 index 8ff0d82ce8..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices2.jpg and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices3.jpg b/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices3.jpg deleted file mode 100644 index 69d9ab0c7b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/Triggered_Events_and_Notices3.jpg and /dev/null 
differ diff --git a/docs-antora/modules/circulation/assets/images/media/backdate_checkin_web_client.png b/docs-antora/modules/circulation/assets/images/media/backdate_checkin_web_client.png deleted file mode 100644 index 6784318ec1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/backdate_checkin_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/backdate_post_checkin_web_client.png b/docs-antora/modules/circulation/assets/images/media/backdate_post_checkin_web_client.png deleted file mode 100644 index ca98a613bd..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/backdate_post_checkin_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/backdate_post_date_web_client.png b/docs-antora/modules/circulation/assets/images/media/backdate_post_date_web_client.png deleted file mode 100644 index aea169d1ca..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/backdate_post_date_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/backdate_red_web_client.png b/docs-antora/modules/circulation/assets/images/media/backdate_red_web_client.png deleted file mode 100644 index dfa4bd01ea..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/backdate_red_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/booking-capture-1_web_client.png b/docs-antora/modules/circulation/assets/images/media/booking-capture-1_web_client.png deleted file mode 100644 index 032f440da1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/booking-capture-1_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/booking-capture-2_web_client.png b/docs-antora/modules/circulation/assets/images/media/booking-capture-2_web_client.png deleted 
file mode 100644 index 3288f96937..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/booking-capture-2_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/booking-capture-3.png b/docs-antora/modules/circulation/assets/images/media/booking-capture-3.png deleted file mode 100644 index 2de6c8a5eb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/booking-capture-3.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/check_in_menu_web_client.png b/docs-antora/modules/circulation/assets/images/media/check_in_menu_web_client.png deleted file mode 100644 index 0b52f67b61..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/check_in_menu_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/checkin_barcode_web_client.png b/docs-antora/modules/circulation/assets/images/media/checkin_barcode_web_client.png deleted file mode 100644 index e2156ae538..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/checkin_barcode_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/checkinmodifiers-with-inventory2.png b/docs-antora/modules/circulation/assets/images/media/checkinmodifiers-with-inventory2.png deleted file mode 100644 index 753010d8d6..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/checkinmodifiers-with-inventory2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/checkout_item_barcode_web_client.png b/docs-antora/modules/circulation/assets/images/media/checkout_item_barcode_web_client.png deleted file mode 100644 index 358844e1dc..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/checkout_item_barcode_web_client.png and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/checkout_menu_web_client.png b/docs-antora/modules/circulation/assets/images/media/checkout_menu_web_client.png deleted file mode 100644 index 0b52f67b61..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/checkout_menu_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-11_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-11_web_client.png deleted file mode 100644 index 02d07f0bfb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-11_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-12.JPG b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-12.JPG deleted file mode 100644 index 7f690425fb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-12.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-16.JPG b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-16.JPG deleted file mode 100644 index 6762ac4563..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-16.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-18_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-18_web_client.png deleted file mode 100644 index bb5dcdf299..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-18_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-19_web_client.png 
b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-19_web_client.png deleted file mode 100644 index 4c27f0122b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-19_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-1a_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-1a_web_client.png deleted file mode 100644 index 944dc0eb4d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-1a_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-1b_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-1b_web_client.png deleted file mode 100644 index bc24f1327c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-1b_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-20.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-20.png deleted file mode 100644 index dfdd9161c5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-20.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-23_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-23_web_client.png deleted file mode 100644 index 8b113e5969..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-23_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-24_web_client.png 
b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-24_web_client.png deleted file mode 100644 index e8c4199a1c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-24_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-25_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-25_web_client.png deleted file mode 100644 index 88786c23ed..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-25_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-26_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-26_web_client.png deleted file mode 100644 index 45aa347ae3..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-26_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-2_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-2_web_client.png deleted file mode 100644 index e47c0f0022..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-2_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-4.JPG b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-4.JPG deleted file mode 100644 index ef38851158..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-4.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-5.JPG 
b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-5.JPG deleted file mode 100644 index da9e3d7204..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-5.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-6.JPG b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-6.JPG deleted file mode 100644 index 326fa707a0..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-6.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-8.JPG b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-8.JPG deleted file mode 100644 index 9fe45c782e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-8.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-9_web_client.png b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-9_web_client.png deleted file mode 100644 index 78844b983e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records-9_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_13.JPG b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_13.JPG deleted file mode 100644 index 7ef41992c6..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_13.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_14.JPG b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_14.JPG deleted file mode 100644 index 6a1b64a9c4..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_14.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_15.JPG b/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_15.JPG deleted file mode 100644 index f5e0dd659b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/circulation_patron_records_15.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/claimed_date_web_client.png b/docs-antora/modules/circulation/assets/images/media/claimed_date_web_client.png deleted file mode 100644 index 0de9b1bdc4..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/claimed_date_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/cr_section_web_client.png b/docs-antora/modules/circulation/assets/images/media/cr_section_web_client.png deleted file mode 100644 index bdf80b9f38..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/cr_section_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/custom_hold_pickup_location1.png b/docs-antora/modules/circulation/assets/images/media/custom_hold_pickup_location1.png deleted file mode 100644 index a76b1b2a99..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/custom_hold_pickup_location1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/custom_hold_pickup_location2.jpg b/docs-antora/modules/circulation/assets/images/media/custom_hold_pickup_location2.jpg deleted file mode 100644 index 605dd896e5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/custom_hold_pickup_location2.jpg and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/due_date_display_web_client.png 
b/docs-antora/modules/circulation/assets/images/media/due_date_display_web_client.png deleted file mode 100644 index e0e2eff76f..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/due_date_display_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/edit_due_date_action_web_client.png b/docs-antora/modules/circulation/assets/images/media/edit_due_date_action_web_client.png deleted file mode 100644 index 5f622efe4c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/edit_due_date_action_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/ereceipts1_web_client.PNG b/docs-antora/modules/circulation/assets/images/media/ereceipts1_web_client.PNG deleted file mode 100644 index 6db93af0f7..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/ereceipts1_web_client.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/ereceipts2_web_client.PNG b/docs-antora/modules/circulation/assets/images/media/ereceipts2_web_client.PNG deleted file mode 100644 index 981b1c8d8d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/ereceipts2_web_client.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/ereceipts3_web_client.PNG b/docs-antora/modules/circulation/assets/images/media/ereceipts3_web_client.PNG deleted file mode 100644 index 84e70f6e5c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/ereceipts3_web_client.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/ereceipts4_web_client.PNG b/docs-antora/modules/circulation/assets/images/media/ereceipts4_web_client.PNG deleted file mode 100644 index 8f94d97cf5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/ereceipts4_web_client.PNG and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/ereceipts5_web_client.PNG b/docs-antora/modules/circulation/assets/images/media/ereceipts5_web_client.PNG deleted file mode 100644 index fd2ea05e88..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/ereceipts5_web_client.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/ereceipts6_web_client.PNG b/docs-antora/modules/circulation/assets/images/media/ereceipts6_web_client.PNG deleted file mode 100644 index 74de1e59d5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/ereceipts6_web_client.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-clearing-1.png b/docs-antora/modules/circulation/assets/images/media/holds-clearing-1.png deleted file mode 100644 index 34c2a0ad55..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-clearing-1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-clearing-2.png b/docs-antora/modules/circulation/assets/images/media/holds-clearing-2.png deleted file mode 100644 index 9fb2a77726..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-clearing-2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-clearing-3.png b/docs-antora/modules/circulation/assets/images/media/holds-clearing-3.png deleted file mode 100644 index 1f75803846..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-clearing-3.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-clearing-4.png b/docs-antora/modules/circulation/assets/images/media/holds-clearing-4.png deleted file mode 100644 index eef6c646e5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-clearing-4.png and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/holds-managing-1.png b/docs-antora/modules/circulation/assets/images/media/holds-managing-1.png deleted file mode 100644 index 76eb32fc8d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-10.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-10.JPG deleted file mode 100644 index f2fffc8136..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-10.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-11.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-11.JPG deleted file mode 100644 index 8479b79cda..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-11.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-12.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-12.JPG deleted file mode 100644 index 130c370ae1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-12.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-13.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-13.JPG deleted file mode 100644 index 030ceee289..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-13.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-14.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-14.JPG deleted file mode 100644 index 78fde73166..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-14.JPG and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/holds-managing-15.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-15.JPG deleted file mode 100644 index e7e5865f3c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-15.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-16.png b/docs-antora/modules/circulation/assets/images/media/holds-managing-16.png deleted file mode 100644 index 2db6c18230..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-16.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-17.png b/docs-antora/modules/circulation/assets/images/media/holds-managing-17.png deleted file mode 100644 index 5d0e175b55..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-17.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-18.png b/docs-antora/modules/circulation/assets/images/media/holds-managing-18.png deleted file mode 100644 index 2ffe43a89d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-18.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-19.png b/docs-antora/modules/circulation/assets/images/media/holds-managing-19.png deleted file mode 100644 index c74c47ed6e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-19.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-2.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-2.JPG deleted file mode 100644 index 382d799e54..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-2.JPG and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/holds-managing-4.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-4.JPG deleted file mode 100644 index e0fdf1fedb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-4.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-5_and_6.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-5_and_6.JPG deleted file mode 100644 index 4522ab6fe0..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-5_and_6.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-7.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-7.JPG deleted file mode 100644 index e8237f3886..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-7.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-8.JPG b/docs-antora/modules/circulation/assets/images/media/holds-managing-8.JPG deleted file mode 100644 index 738cfd41e0..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-8.JPG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-managing-9.png b/docs-antora/modules/circulation/assets/images/media/holds-managing-9.png deleted file mode 100644 index 006b105c88..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-managing-9.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-notifications-1.png b/docs-antora/modules/circulation/assets/images/media/holds-notifications-1.png deleted file mode 100644 index 6d6b147fc5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-notifications-1.png and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/holds-notifications-2.png b/docs-antora/modules/circulation/assets/images/media/holds-notifications-2.png deleted file mode 100644 index c64fcdb83a..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-notifications-2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-pull-1.png b/docs-antora/modules/circulation/assets/images/media/holds-pull-1.png deleted file mode 100644 index 1f325e2713..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-pull-1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-pull-2.png b/docs-antora/modules/circulation/assets/images/media/holds-pull-2.png deleted file mode 100644 index c4e385115d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-pull-2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-pull-4.png b/docs-antora/modules/circulation/assets/images/media/holds-pull-4.png deleted file mode 100644 index 5db02f7f17..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-pull-4.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-pull-5.png b/docs-antora/modules/circulation/assets/images/media/holds-pull-5.png deleted file mode 100644 index 74829a091b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-pull-5.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-pull-6.png b/docs-antora/modules/circulation/assets/images/media/holds-pull-6.png deleted file mode 100644 index b022065760..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-pull-6.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-pull-7.png 
b/docs-antora/modules/circulation/assets/images/media/holds-pull-7.png deleted file mode 100644 index 37651dceb9..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-pull-7.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds-pull-9.png b/docs-antora/modules/circulation/assets/images/media/holds-pull-9.png deleted file mode 100644 index 365db4eb99..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds-pull-9.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds_title_options.png b/docs-antora/modules/circulation/assets/images/media/holds_title_options.png deleted file mode 100644 index cd79a155f1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds_title_options.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds_title_options_adv.png b/docs-antora/modules/circulation/assets/images/media/holds_title_options_adv.png deleted file mode 100644 index 2f6dbac2e6..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds_title_options_adv.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds_title_searchresults.png b/docs-antora/modules/circulation/assets/images/media/holds_title_searchresults.png deleted file mode 100644 index 5bbae023ba..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds_title_searchresults.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/holds_title_success.png b/docs-antora/modules/circulation/assets/images/media/holds_title_success.png deleted file mode 100644 index 0db924f696..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/holds_title_success.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/in_house_use_non_cat.png 
b/docs-antora/modules/circulation/assets/images/media/in_house_use_non_cat.png deleted file mode 100644 index fd2b2cea60..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/in_house_use_non_cat.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/in_house_use_web_client.png b/docs-antora/modules/circulation/assets/images/media/in_house_use_web_client.png deleted file mode 100644 index 0851df52d1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/in_house_use_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/item_status_altview_web_client.png b/docs-antora/modules/circulation/assets/images/media/item_status_altview_web_client.png deleted file mode 100644 index 624810c9cd..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/item_status_altview_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/item_status_barcode_web_client.png b/docs-antora/modules/circulation/assets/images/media/item_status_barcode_web_client.png deleted file mode 100644 index 1dced5021f..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/item_status_barcode_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/item_status_list_view_web_client.png b/docs-antora/modules/circulation/assets/images/media/item_status_list_view_web_client.png deleted file mode 100644 index 7ddb694653..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/item_status_list_view_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/item_status_menu_web_client.png b/docs-antora/modules/circulation/assets/images/media/item_status_menu_web_client.png deleted file mode 100644 index 109c331f4d..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/media/item_status_menu_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/items_out_click_web_client.png b/docs-antora/modules/circulation/assets/images/media/items_out_click_web_client.png deleted file mode 100644 index 0320f9d159..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/items_out_click_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/last_few_circs_action_web_client.png b/docs-antora/modules/circulation/assets/images/media/last_few_circs_action_web_client.png deleted file mode 100644 index b4f698129d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/last_few_circs_action_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/last_few_circs_display_web_client.png b/docs-antora/modules/circulation/assets/images/media/last_few_circs_display_web_client.png deleted file mode 100644 index 4c6e8b3475..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/last_few_circs_display_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/long_overdue1.png b/docs-antora/modules/circulation/assets/images/media/long_overdue1.png deleted file mode 100644 index 3e66d054e1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/long_overdue1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/long_overdue2.png b/docs-antora/modules/circulation/assets/images/media/long_overdue2.png deleted file mode 100644 index fe770a418e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/long_overdue2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/lost_section_web_client.png 
b/docs-antora/modules/circulation/assets/images/media/lost_section_web_client.png deleted file mode 100644 index a21edae3c8..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/lost_section_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/mark_claims_returned_web_client.png b/docs-antora/modules/circulation/assets/images/media/mark_claims_returned_web_client.png deleted file mode 100644 index 1fd625c00c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/mark_claims_returned_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/mark_lost_web_client.png b/docs-antora/modules/circulation/assets/images/media/mark_lost_web_client.png deleted file mode 100644 index dc4fa0bf17..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/mark_lost_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_checkin.png b/docs-antora/modules/circulation/assets/images/media/offline_checkin.png deleted file mode 100644 index 152b41c752..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_checkin.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_checkout.png b/docs-antora/modules/circulation/assets/images/media/offline_checkout.png deleted file mode 100644 index a8d2d4b2bd..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_checkout.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_clear_pending.png b/docs-antora/modules/circulation/assets/images/media/offline_clear_pending.png deleted file mode 100644 index f06014d13b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_clear_pending.png and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/offline_exceptions.png b/docs-antora/modules/circulation/assets/images/media/offline_exceptions.png deleted file mode 100644 index 006cfcfc4f..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_exceptions.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_homepage_loggedin.png b/docs-antora/modules/circulation/assets/images/media/offline_homepage_loggedin.png deleted file mode 100644 index d961918ef9..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_homepage_loggedin.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_homepage_loggedout.png b/docs-antora/modules/circulation/assets/images/media/offline_homepage_loggedout.png deleted file mode 100644 index 29ef316270..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_homepage_loggedout.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_inhouse.png b/docs-antora/modules/circulation/assets/images/media/offline_inhouse.png deleted file mode 100644 index c2958ba3d0..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_inhouse.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_logout_warning.png b/docs-antora/modules/circulation/assets/images/media/offline_logout_warning.png deleted file mode 100644 index 482bde6cdc..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_logout_warning.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_patron_blocked.png b/docs-antora/modules/circulation/assets/images/media/offline_patron_blocked.png deleted file mode 100644 index 627bedcae4..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/media/offline_patron_blocked.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_patron_registration.png b/docs-antora/modules/circulation/assets/images/media/offline_patron_registration.png deleted file mode 100644 index f1d46b98d8..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_patron_registration.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_pending_xacts.png b/docs-antora/modules/circulation/assets/images/media/offline_pending_xacts.png deleted file mode 100644 index 61610325be..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_pending_xacts.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_processing_complete.png b/docs-antora/modules/circulation/assets/images/media/offline_processing_complete.png deleted file mode 100644 index 9cbc24d755..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_processing_complete.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_renew.png b/docs-antora/modules/circulation/assets/images/media/offline_renew.png deleted file mode 100644 index b0f6a717f8..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_renew.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_session_list.png b/docs-antora/modules/circulation/assets/images/media/offline_session_list.png deleted file mode 100644 index 5caaceab37..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_session_list.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/offline_unprocessed.png b/docs-antora/modules/circulation/assets/images/media/offline_unprocessed.png deleted file mode 100644 
index 6cd479d8e3..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/offline_unprocessed.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/overdue_checkin_web_client.png b/docs-antora/modules/circulation/assets/images/media/overdue_checkin_web_client.png deleted file mode 100644 index aaa2f6352c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/overdue_checkin_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/patron_self_registration2.jpg b/docs-antora/modules/circulation/assets/images/media/patron_self_registration2.jpg deleted file mode 100644 index 51da802c7e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/patron_self_registration2.jpg and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/patron_summary_checkouts_web_client.png b/docs-antora/modules/circulation/assets/images/media/patron_summary_checkouts_web_client.png deleted file mode 100644 index ea5f6dc6aa..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/patron_summary_checkouts_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/place-another-hold-1.png b/docs-antora/modules/circulation/assets/images/media/place-another-hold-1.png deleted file mode 100644 index 130ec10964..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/place-another-hold-1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/precat_web_client.png b/docs-antora/modules/circulation/assets/images/media/precat_web_client.png deleted file mode 100644 index 24628c61e1..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/precat_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/record_in_house_action_web_client.png 
b/docs-antora/modules/circulation/assets/images/media/record_in_house_action_web_client.png deleted file mode 100644 index 30605db899..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/record_in_house_action_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/renew_action_web_client.png b/docs-antora/modules/circulation/assets/images/media/renew_action_web_client.png deleted file mode 100644 index 0d177f20ad..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/renew_action_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/renew_item_calendar_web_client.png b/docs-antora/modules/circulation/assets/images/media/renew_item_calendar_web_client.png deleted file mode 100644 index 5a2e06fdc4..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/renew_item_calendar_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/renew_item_web_client.png b/docs-antora/modules/circulation/assets/images/media/renew_item_web_client.png deleted file mode 100644 index a81d2d682d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/renew_item_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/retrieve_patron_web_client.png b/docs-antora/modules/circulation/assets/images/media/retrieve_patron_web_client.png deleted file mode 100644 index d1ed320ec4..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/retrieve_patron_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self-check-admin-login.png b/docs-antora/modules/circulation/assets/images/media/self-check-admin-login.png deleted file mode 100644 index ed0ec9f587..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self-check-admin-login.png 
and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_1.png b/docs-antora/modules/circulation/assets/images/media/self_check_check_out_1.png deleted file mode 100644 index ed2220cdb8..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_2.png b/docs-antora/modules/circulation/assets/images/media/self_check_check_out_2.png deleted file mode 100644 index 40fd9b8537..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_3.png b/docs-antora/modules/circulation/assets/images/media/self_check_check_out_3.png deleted file mode 100644 index 79a418af0e..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_3.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_4.png b/docs-antora/modules/circulation/assets/images/media/self_check_check_out_4.png deleted file mode 100644 index 4092994542..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_4.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_5.png b/docs-antora/modules/circulation/assets/images/media/self_check_check_out_5.png deleted file mode 100644 index 01b50c134a..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_5.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_6.png b/docs-antora/modules/circulation/assets/images/media/self_check_check_out_6.png deleted file mode 100644 index 230ed0da1f..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/media/self_check_check_out_6.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_error_1.png b/docs-antora/modules/circulation/assets/images/media/self_check_error_1.png deleted file mode 100644 index e6df645953..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_error_1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_view_fines_1.png b/docs-antora/modules/circulation/assets/images/media/self_check_view_fines_1.png deleted file mode 100644 index 106b392ef2..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_view_fines_1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_view_fines_2.png b/docs-antora/modules/circulation/assets/images/media/self_check_view_fines_2.png deleted file mode 100644 index 17201dc0e6..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_view_fines_2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_view_holds_1.png b/docs-antora/modules/circulation/assets/images/media/self_check_view_holds_1.png deleted file mode 100644 index 3ad4c354b9..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_view_holds_1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_view_holds_2.png b/docs-antora/modules/circulation/assets/images/media/self_check_view_holds_2.png deleted file mode 100644 index 41b89738c8..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_view_holds_2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_view_items_out_1.png b/docs-antora/modules/circulation/assets/images/media/self_check_view_items_out_1.png 
deleted file mode 100644 index d239f4d1c9..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_view_items_out_1.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/self_check_view_items_out_2.png b/docs-antora/modules/circulation/assets/images/media/self_check_view_items_out_2.png deleted file mode 100644 index 5323ba360b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/self_check_view_items_out_2.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/specify_due_date1_web_client.png b/docs-antora/modules/circulation/assets/images/media/specify_due_date1_web_client.png deleted file mode 100644 index f28b921c1b..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/specify_due_date1_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/staff-penalties-1_web_client.png b/docs-antora/modules/circulation/assets/images/media/staff-penalties-1_web_client.png deleted file mode 100644 index 35b11cc8eb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/staff-penalties-1_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/staff-penalties-2_web_client.png b/docs-antora/modules/circulation/assets/images/media/staff-penalties-2_web_client.png deleted file mode 100644 index cd13b6f463..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/staff-penalties-2_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/staff-penalties-3_web_client.png b/docs-antora/modules/circulation/assets/images/media/staff-penalties-3_web_client.png deleted file mode 100644 index 3214e2550c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/staff-penalties-3_web_client.png and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/staff-penalties-4_web_client.png b/docs-antora/modules/circulation/assets/images/media/staff-penalties-4_web_client.png deleted file mode 100644 index f9e7520746..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/staff-penalties-4_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/staff-penalties-5_web_client.png b/docs-antora/modules/circulation/assets/images/media/staff-penalties-5_web_client.png deleted file mode 100644 index c1079c7568..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/staff-penalties-5_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/staff-penalties-6_web_client.png b/docs-antora/modules/circulation/assets/images/media/staff-penalties-6_web_client.png deleted file mode 100644 index e90b4f9b52..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/staff-penalties-6_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/staff-penalties-7_web_client.png b/docs-antora/modules/circulation/assets/images/media/staff-penalties-7_web_client.png deleted file mode 100644 index 9fbc019a8d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/staff-penalties-7_web_client.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket1.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket1.PNG deleted file mode 100644 index 39bb4a214c..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket1.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket10.png b/docs-antora/modules/circulation/assets/images/media/userbucket10.png deleted file mode 100644 index da96d10ff3..0000000000 Binary files 
a/docs-antora/modules/circulation/assets/images/media/userbucket10.png and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket11.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket11.PNG deleted file mode 100644 index b2120937ac..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket11.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket12.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket12.PNG deleted file mode 100644 index 33b3a077ce..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket12.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket2.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket2.PNG deleted file mode 100644 index 54d5dc7334..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket2.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket3.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket3.PNG deleted file mode 100644 index 033cc397cb..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket3.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket4.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket4.PNG deleted file mode 100644 index dd0a893625..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket4.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket7.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket7.PNG deleted file mode 100644 index 8770491fb5..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket7.PNG and /dev/null differ diff --git 
a/docs-antora/modules/circulation/assets/images/media/userbucket8.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket8.PNG deleted file mode 100644 index e2e7bc787d..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket8.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/assets/images/media/userbucket9.PNG b/docs-antora/modules/circulation/assets/images/media/userbucket9.PNG deleted file mode 100644 index 32d0d34603..0000000000 Binary files a/docs-antora/modules/circulation/assets/images/media/userbucket9.PNG and /dev/null differ diff --git a/docs-antora/modules/circulation/nav.adoc b/docs-antora/modules/circulation/nav.adoc
deleted file mode 100644
index 92b03a56e3..0000000000
--- a/docs-antora/modules/circulation/nav.adoc
+++ /dev/null
@@ -1,10 +0,0 @@
-* xref:circulation:introduction.adoc[Circulation]
-** xref:circulation:circulating_items_web_client.adoc[Circulating Items]
-** xref:circulation:basic_holds.adoc[Holds Management]
-** xref:circulation:booking.adoc[Booking Module]
-** xref:circulation:circulation_patron_records_web_client.adoc[Circulation - Patron Record]
-** xref:admin:patron_self_registration.adoc[Patron Self-Registration Administration]
-** xref:circulation:triggered_events.adoc[Triggered Events and Notices]
-** xref:circulation:offline_circ_webclient.adoc[Offline Circulation]
-** xref:circulation:self_check.adoc[Self Checkout]
-
diff --git a/docs-antora/modules/circulation/pages/README b/docs-antora/modules/circulation/pages/README
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/docs-antora/modules/circulation/pages/_attributes.adoc b/docs-antora/modules/circulation/pages/_attributes.adoc
deleted file mode 100644
index fb982443d7..0000000000
--- a/docs-antora/modules/circulation/pages/_attributes.adoc
+++ /dev/null
@@ -1,2 +0,0 @@
-:moduledir: ..
-include::{moduledir}/_attributes.adoc[]
diff --git a/docs-antora/modules/circulation/pages/basic_holds.adoc b/docs-antora/modules/circulation/pages/basic_holds.adoc
deleted file mode 100644
index 7e24a89b92..0000000000
--- a/docs-antora/modules/circulation/pages/basic_holds.adoc
+++ /dev/null
@@ -1,485 +0,0 @@
-= Holds Management =
-:toc:
-
-== Placing Holds ==
-
-Holds can be placed by staff in the _Staff Client_ and by patrons in the OPAC. In this chapter we demonstrate placing holds in the _Staff Client_.
-
-== Holds Levels ==
-
-Evergreen has different levels of holds. Library staff can place holds at all levels, while patrons can place only title-level and parts-level holds. The chart below summarizes the levels of holds.
-
-|==============================
-|*Hold level* |*Abbreviation* |*When to use* |*How to use* |*Who can use* |*Hold tied to*
-|Title |T |Patron wants the first available copy of a title |Staff or patron clicks _Place Hold_ next to the title. |Patron or staff |Holdings attached to a single MARC (title) record
-|Parts |P |Patron wants a particular part of a title (e.g. volume or disk number) |Staff or patron selects the part on the create/edit hold screen when setting hold notification options. |Patron or staff |Holdings with identical parts attached to a single MARC (title) record
-|Volume |V |Patron or staff wants any title associated with a particular call number |In the staff client, click _Volume Hold_ under _Holdable?_ |Staff only |Holdings attached to a single call number (volume)
-|Copy |C |Patron or staff wants a specific copy of an item |In the staff client, click _Copy Hold_ under _Holdable?_ |Staff only |A specific copy (barcode)
-|==============================
-
-
-== Title Level Hold ==
-
-[TIP]
-====================
-A default hold expiration date will be displayed if the library has set up a default holds expiration period in its library settings. Uncaptured holds will not be targeted after the expiration date.
- -If you select the _Suspend this Hold_ checkbox, the hold will be suspended and not be captured until you activate it. -==================== - -. To place a title level hold, retrieve the title record on the catalog and click the _Place Hold_ link beside the title on the search results list, or click the _Place Hold_ link on the title summary screen. -+ -image::media/holds_title_searchresults.png[Search Results with Place Hold link] -+ -. Scan or type patron's barcode into the _Place hold for patron by -barcode_ box, or choose _Place this hold for me_. -. If this title contains multiple parts, you can specify which part to -request. If you do not select a part, the hold will target any of the -other copies on this record, that is, those with no parts attached. -Those copies are usually the complete set, containing all the parts. -. Edit patron hold notification and expiration date fields as required. -Be sure to choose a valid _Pickup location_. -. Click _Submit_. -+ -image::media/holds_title_options.png[Place Holds screen with Basic Options] -+ -. A confirmation screen appears with the message "Hold was successfully placed". -+ -image::media/holds_title_success.png[Place Holds confirmation screen] - -*Advanced Hold Options* - -Clicking the *Advanced Hold Options* link will take you into the -metarecord level hold feature, where you can select multiple formats -and/or languages, if available. - -Selecting multiple formats will not place all of these formats on hold. -For example, selecting CD Audiobook and Book implies that either the CD -format or the book format is the acceptable format to fill the hold. If -no format is selected, then any of the available formats may be used to -fill the hold. The same holds true for selecting multiple languages. 
- -image::media/holds_title_options_adv.png[Place Hold screen with Advanced Options] - - -== Patron Search from Place Hold == -Patron Search from Place Hold allows staff members, when placing a hold on behalf of a patron in the web staff client, to search for patrons by name and other searchable patron information, rather than relying on the barcode alone. - - -=== To use Patron Search From Place Holds === -1. After performing a search in the catalog, staff will retrieve a bibliographic record. -2. Click *Place Hold* either in the search results or within the detailed bibliographic record. The Place Hold Screen will appear. Note: this feature also appears when placing volume level holds and copy level holds. -+ -image::media/PlaceHold-0.JPG[] -+ -3. Next to _Place Hold for patron by barcode_, click on *Patron Search*. Please note that Patron Search will only appear in this interface when using the web-based staff client. It will not appear in the patron-facing OPAC. -+ -image::media/PlaceHold-1.JPG[] -+ -4. A dialog box will appear with the patron search interface used elsewhere in the staff client. By default, the search scopes to your workstation org unit, and you can search by patron last name, first name, and middle name. -+ -image::media/PlaceHold-2.JPG[] -+ -Clicking the *arrow icon* to the right of _Clear Form_ expands or condenses the display of searchable fields, which includes other patron information. -+ -image::media/PlaceHold-3.JPG[] -+ -5. To search for a patron, fill out the relevant search fields, and click *Search* or press ENTER on your keyboard. Results will appear in the Patron Search Results area in the lower half of the screen. -+ -image::media/PlaceHold-4.JPG[] -+ -6. Click the row of the desired patron account, and click *Select*. -+ -image::media/PlaceHold-5.JPG[] -+ -7. The dialog box will close and the selected patron's barcode will appear next to _Place Hold for patron by barcode_.
This will cause the patron's hold notification preferences to appear in the relevant fields in the bottom half of the screen. Changes to the Hold Notification preferences can be made before clicking *Submit* to finish placing a hold for the patron. -+ -image::media/PlaceHold-6.JPG[] - -== Parts Level Hold == - -. To place a parts level hold, retrieve a record with parts-level items -attached to the title, such as a multi-disc DVD, an annual travel guide, -or a multi-volume book set. -. Place the hold as you would for a title-level hold, including patron -barcode, notification details, and a valid pickup location. -. Select the applicable part from the _Select a Part_ dropdown menu. -. Click _Submit_. -+ -image::media/holds_title_options.png[Place Holds screen with Basic Options] -+ -[TIP] -=============== -Requested parts are listed in the _Holdable Part_ column in hold records. Use the _Column Picker_ to display it when the hold record is displayed. -=============== - -== Placing Holds in Patron Records == - -. Holds can be placed from patron records too. In the patron record, on the _Holds_ screen, click the _Place Hold_ button in the top left corner. - -. The catalog is displayed in the _Holds_ screen to search for the title on which you want to place a hold. - -. Search for the title and click the _Place Hold_ link. - -. The patron's account information is retrieved automatically. Set up the notification and expiration date fields. Click _Place Hold_ and confirm your action in the pop-up window. - -. You may continue to search for more titles. Once you are done, click the _Holds_ button at the top to go back to the _Holds_ screen. Click the _Refresh_ button to display your newly placed holds. - -=== Placing Multiple Holds on Same Title === - -After a successful hold placement, staff have the option to place another hold on the same title by clicking the link _Place another hold for this title_.
This returns to the hold screen, where a different patron's information can be entered. - -image::media/place-another-hold-1.png[place-another-hold-1] - -This feature can be useful for book groups or new items where a list of waiting patrons needs to be transferred into the system. - - -== Managing Holds == - -Holds can be cancelled at any time by staff or patrons. Before a hold is captured, staff or patrons can suspend it or set it as inactive for a period of time without losing its queue position, activate a suspended hold, and change the -notification method, phone number, pick-up location (for multi-branch libraries only), expiration date, or activation date. Once a hold is captured, staff can change the pickup location and extend the hold shelf -time if required. - -Staff can edit holds in either the patron record or the title record. Patrons can edit their holds in their account on the OPAC. - -[TIP] -============== -If you use the column picker to change the holds display in one area of the staff client (e.g. the patron record), it will change the display for all parts of the staff client that deal with holds, including the title record holds -display, the holds shelf display, and the pull list display. -============== - - -[#actions_for_selected_holds] -=== Actions for Selected Holds === - -. Retrieve the patron record and go to the _Holds_ screen. -. Highlight the hold record, then select _Actions_. -+ -image::media/holds-managing-1.png[holds-managing-1] -+ -. Manage the hold by choosing an action from the list. -.. If you want to cancel the hold, click _Cancel Hold_ from the menu. You are prompted to select a reason and add a note if required. To finish, click _Apply_. -+ -image::media/holds-managing-2.JPG[holds-managing-2] -+ -[NOTE] -============= -A captured hold with a status of _On Hold Shelf_ can be cancelled by either staff or patrons, but the status of the item will not change until staff check it in. -============= -..
If you want to suspend a hold or activate a suspended hold, click the appropriate action on the list. You will be prompted to confirm your action. Suspended holds have a _No_ value in the _Active?_ column. -+ -[NOTE] -=============== -Suspended holds will not be filled, but their hold positions will be kept. They will automatically become active on the activation date if there is an activation date in the record. Without an activation date, the holds will remain inactive until staff or a patron activates them manually. -=============== - -.. You may edit the _Activation Date_ and _Expiration Date_ by using the corresponding action on the _Actions_ dropdown menu. You will be prompted to enter the new date. Use the calendar widget to choose a date, then click _Apply_. Use the _Clear_ button to unset the date. -+ -image::media/holds-managing-4.JPG[holds-managing-4] -+ - -.. Hold shelf expire time is automatically recorded in the hold record when a hold is filled. You may edit this time by using _Edit Shelf Expire Time_ on the _Actions_ dropdown menu. You will be prompted to enter the new date. Use the calendar widget to choose a date, then click _Apply_. - -.. If you want to enable or disable phone notification or change the phone number, click _Edit Notification Settings_. You will be prompted to enter the new phone number. Make sure you enter a valid and complete phone number. The phone number is used for this hold only and can be different from the one in the patron account; it has no impact on the patron account. If you leave it blank, no phone number will be printed on the hold slip. If you want to enable or disable email notification for the hold, check _Send Emails_ on the prompt screen. -+ -image::media/holds-managing-5_and_6.JPG[holds-managing-5_and_6] -+ - -.. The pickup location can be changed by clicking _Edit Pickup Library_. Click the dropdown list of all libraries and choose the new pickup location. Click _Submit_.
-+ -image::media/holds-managing-7.JPG[holds-managing-7] -+ -[NOTE] -============== -Staff can change the pickup location for holds with in-transit status. The item will be sent in transit to the new destination. Staff cannot change the pickup location once an item is on the holds shelf. -============== - -.. The item's physical condition is recorded in the copy record as _Good_ or _Mediocre_ in the _Quality_ field. You may request that your holds be filled with copies of good quality only. Click _Set Desired Copy Quality_ on the -_Actions_ list. Make your choice in the pop-up window. -+ -image::media/holds-managing-8.JPG[holds-managing-8] - - -=== Transferring Holds === - -. Holds on one title can be transferred to another with the hold request -time preserved. To do so, you need to find the destination title and -click _Mark for:_ -> _Title Hold Transfer_. -+ -image::media/holds-managing-9.png[holds-managing-9] -+ -. Select the hold you want to transfer. Click _Actions_ -> _Transfer to Marked Title_. -+ -image::media/holds-managing-10.JPG[holds-managing-10] - -=== Cancelled Holds === - -. Cancelled holds can be displayed. Click the _Recently Cancelled Holds_ button on the _Holds_ screen. -+ -image::media/holds-managing-11.JPG[holds-managing-11] -+ -. You can un-cancel holds. -+ -image::media/holds-managing-12.JPG[holds-managing-12] -+ -Depending on your library's settings, the hold request time may be reset when a hold is un-cancelled. - - -=== Viewing Details & Adding Notes to Holds === - -. You can view details of a hold by selecting a hold, then clicking the _Detail View_ button on the _Holds_ screen. -+ -image::media/holds-managing-13.JPG[holds-managing-13] -+ -. You may add a note to a hold in the _Detail View_. -+ -image::media/holds-managing-14.JPG[holds-managing-14] -+ -. Notes can be printed on the hold slip if the _Print on slip?_ checkbox -is selected. Enter the message, then click _OK_.
-+ -image::media/holds-managing-15.JPG[holds-managing-15] - - -=== Displaying Queue Position === - -Using the Column Picker, you can display _Queue Position_ and _Total number of Holds_. - -image::media/holds-managing-16.png[holds-managing-16] - - -=== Managing Holds in Title Records === - -. Retrieve and display the title record in the catalog. -. Click _Actions_ -> _View Holds_. -+ -image::media/holds-managing-17.png[holds-managing-17] -+ -. All holds on this title to be picked up at your library are displayed. Use the _Pickup Library_ to view holds to be picked up at other libraries. -+ -image::media/holds-managing-18.png[holds-managing-18] -+ -. Highlight the hold you want to edit. Choose an action from the -_Actions_ menu. For more information see the -xref:#actions_for_selected_holds[Actions for Selected Holds] section. For -example, you can retrieve the hold requestor’s account by selecting -_Retrieve Patron_ from this menu. -+ -image::media/holds-managing-19.png[holds-managing-19] - - -=== Retargeting Holds === - -Holds need to be retargeted whenever a new item is added to a record, or after some types of item status changes, for instance when an item is changed from _On Order_ to _In Process_. The system does not automatically recognize the newly added items as available to fill holds. - -. View the holds for the item. - -. Highlight all the holds for the record, which have a status of _Waiting for Copy_. If there are a lot of holds, it may be helpful to sort the holds by _Status_. - -. Click on the head of the status column. - -. Under _Actions_, select _Find Another Target_. - -. A window will open asking if you are sure you would like to reset the holds for these items. - -. Click _Yes_. Nothing may appear to happen, or if you are retargeting a lot of holds at once, your screen may go blank or seem to freeze for a moment while the holds are retargeted. - -. When the screen refreshes, the holds will be retargeted. 
The system will now recognize the new items as available for holds. - - -=== Pulling & Capturing Holds === - -==== Holds Pull List ==== - -There are usually four statuses a hold may have: _Waiting for Copy_, _Waiting for Capture_, _In Transit_ and _Ready for Pickup_. - -. *Waiting-for-copy*: all holdable copies are checked out or not available. - -. *Waiting-for-capture*: an available copy is assigned to the hold. The item shows up on the _Holds Pull List_ waiting for staff to search the shelf and capture the hold. - -. *In Transit*: holds are captured at a non-pickup branch and on the way to the pick-up location. - -. *Ready-for-pick-up*: holds are captured and items are on the _Hold Shelf_ waiting for patrons to pick up. Besides capturing holds when checking in items, Evergreen matches holds with available items in your library at regular -intervals. Once a matching copy is found, the item’s barcode number is assigned to the hold and the item is put on the _Holds Pull List_. Staff can print the _Holds Pull List_ and search for the items on shelves. - -. To retrieve your _Holds Pull List_, select _Circulation_ -> _Pull List for Hold Requests_. -+ -image::media/holds-pull-1.png[holds-pull-1] -+ -. The _Holds Pull List_ is displayed. You may re-sort it by clicking the column labels, e.g. _Title_. You can also add fields to the display by using the column picker. -+ -image::media/holds-pull-2.png[holds-pull-2] -+ -[NOTE] -=========== -Column adjustments will only affect the screen display and the CSV download for the holds pull list. It will not affect the printable holds pull list. -=========== - -. The following options are available for printing the pull list: - -* _Print Full Pull List_ prints _Title_, _Author_, _Shelving Location_, _Call Number_ and _Item Barcode_. This method uses less paper than the alternate strategy. - -* _Print Full Pull List (Alternate Strategy)_ prints the same fields as the above option but also includes a patron barcode. 
This list will also first sort by copy location, as ordered under _Admin_ -> _Local Administration_ -> _Copy Location Order_. - -* _Download CSV_ – This option is available from the _List Actions_ button (adjacent to the _Page "#"_ button) and saves all fields in the screen display to a CSV file. This file can then be opened in Excel or another spreadsheet program. This option provides more flexibility in identifying fields that should be printed. -+ -image::media/holds-pull-4.png[holds-pull-4] -+ -With the CSV option, if you are including barcodes in the holds pull list, you will need to take the following steps to make the barcodes display properly: in Excel, select the entire barcode column, right-click and select _Format Cells_, click _Number_ as the category, and then reduce the number of decimal places to 0. - -. You may perform hold management tasks by using the _Actions_ dropdown list. - -The _Holds Pull List_ is updated constantly. Once an item on the list is no longer available or a hold on the list is captured, the item will disappear from the list. The _Holds Pull List_ should be printed at least once a day. - -==== Capturing Holds ==== - -Holds can be captured when a checked-out item is returned (checked in) or an item on the _Holds Pull List_ is retrieved and captured. When a hold is captured, the hold slip will be printed and, if the patron has chosen to be notified by email, the email notification will be sent out. The item should be put on the hold shelf. - -. To capture a hold, select _Circulation_ -> _Capture Holds_ (or press -_Shift-F2_). -+ -image::media/holds-pull-5.png[holds-pull-5] -+ -. Scan or type the barcode and click _Submit_. -+ -image::media/holds-pull-6.png[holds-pull-6] -+ -. The following hold slip is automatically printed. If your workstation -is not set up for silent printing (via Hatch), then a print window will appear. -+ -.
If the item should be sent to another location, a hold transit slip -will be printed. If your workstation is not set up for silent printing -(via Hatch), then another print window will appear. -+ -[TIP] -=============== -If a patron has an _OPAC/Staff Client Holds Alias_ in his/her account, it will be used on the hold slip instead of the patron's name. Holds can also be captured on the _Circulation_ -> _Check In Items_ screen, where you have more control over automatic slip printing. -=============== - - -=== Handling Missing and Damaged Items === - -If an item on the holds pull list is missing or damaged, you can change its status directly from the holds pull list. - -. From the _Holds Pull List_, right-click on the item and select either _Mark Item Missing_ or _Mark Item Damaged_. -+ -image::media/holds-pull-9.png[holds-pull-9] -+ -. Evergreen will update the status of the item and will immediately retarget the hold. - - -=== Holds Notification Methods === - -. In Evergreen, patrons can set up their default holds notification method in the _Account Preferences_ area of _My Account_. Staff cannot set these preferences for patrons; the patrons must do it when they are logged into the public catalog. -+ -image::media/holds-notifications-1.png[holds-notifications-1] -+ -. Patrons with a default notification preference for phone will see their phone number at the time they place a hold. The checkboxes for email and phone notification will also automatically be checked (if an email or phone number has been assigned to the account). -+ -image::media/holds-notifications-2.png[holds-notifications-2] -+ -. The patron can remove these checkmarks at the time they place the hold, or they can enter a different phone number if they prefer to be contacted at a different number. The patron cannot change their e-mail address at this time. - -.
When the hold becomes available, the holds slip will display the patron's e-mail address only if the patron selected the _Notify by Email by default when a hold is ready for pickup?_ checkbox. It will display a phone number only if the patron selected the _Notify by Phone by default when a hold is ready for pickup?_ checkbox. - -[NOTE] -If the patron changes their contact telephone number when placing the hold, this phone number will display on the holds slip. It will not necessarily be the same phone number contained in the patron's record. - - -=== Clearing Shelf-Expired Holds === - -. Items with _Ready-for-Pickup_ status are on the _Holds Shelf_. The _Holds Shelf_ interface can help you manage these items. To see the holds shelf list, select _Circulation_ -> _Holds Shelf_. -+ -image::media/holds-clearing-1.png[holds-clearing-1] -+ -. The _Holds Shelf_ is displayed. Note that the _Actions_ menu is available, as in the patron record. -+ -You can cancel stale holds here. -+ -image::media/holds-clearing-2.png[holds-clearing-2] -+ -. Use the column picker to add and remove fields from this display. Two fields you may want to display are _Shelf Expire Time_ and _Shelf Time_. -+ -image::media/holds-clearing-3.png[holds-clearing-3] -+ -. Click the _Show Clearable Holds_ button to list expired holds, wrong-shelf holds, and canceled holds only. Expired holds are holds that expired before today's date. -+ -image::media/holds-clearing-4.png[holds-clearing-4] -+ -. Click the _Print Full List_ button if you need a printed list. To format the printout, customize the *Holds Shelf* receipt template. This can be done in _Administration_ -> _Workstation_ -> _Print Templates_. - -. The _Clear These Holds_ button becomes enabled when viewing clearable -holds. Click it and the expired holds will be canceled. - -. Bring items down from the hold shelf and check them in.
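The shelf expire time that drives this list is simply the hold's capture time plus the library's configured _Default hold shelf expire interval_. A minimal sketch of that calculation, assuming a day-based interval (the function and argument names here are invented for illustration and are not Evergreen's actual API):

```python
from datetime import datetime, timedelta

# Illustrative only: Evergreen derives the shelf expire time from the
# "Default hold shelf expire interval" library setting; these names are
# invented for this sketch.
def shelf_expire_time(captured_at, interval_days):
    """Return the capture time plus the configured shelf interval."""
    return captured_at + timedelta(days=interval_days)

captured = datetime(2020, 9, 4, 14, 30)
print(shelf_expire_time(captured, 5).isoformat())  # 2020-09-09T14:30:00
```

Any hold whose computed expire time falls before today would then appear among the clearable holds.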
- -[IMPORTANT] -============= -If you cancel a ready-for-pickup hold, you must check in the item to make it available for circulation or trigger the next hold in line. -============= - -Hold shelf expire time is inserted when a hold achieves on-hold-shelf status. It is calculated based on the interval entered in _Local Admin_ -> _Library Settings_ -> _Default hold shelf expire interval_. - -[NOTE] -=========== -The clear-hold-shelf function cancels shelf-expired holds only. It does not include holds canceled by patrons. Staff need to trace these items manually according to the hold slip date. -=========== - - -== Alternate Hold Pick up Location == - -*Abstract* - -This feature enables libraries to configure an alternate hold pick up -location. The alternate pick up location will appear in the staff -client to inform library staff that a patron has a hold waiting at that -location. In the stock Evergreen code, the default alternate location -is called "Behind Desk". - -*Configuration* - -The alternate pick up location is disabled in Evergreen by default. It -can be enabled by setting *Holds: Behind Desk Pickup Supported* to -'True' in the Library Settings Editor. - -Libraries can also choose to give patrons the ability to opt in to picking up holds at the alternate location through their OPAC account. To add this option, set the *OPAC/Patron Visible* field in the User Setting Type *Hold is behind Circ Desk* to 'True'. The User Setting Types can be found under *Administration -> Server Administration -> User Setting Types*. - -*Display* - -When enabled, the alternate pick up location will be displayed under the -Holds button in the patron account. - -image::media/custom_hold_pickup_location1.png[Custom Hold Pickup Location] - - -If configured, patrons will see the option to opt in to the alternate location in the _Account Preferences_ section of their OPAC Account.
- -image::media/custom_hold_pickup_location2.jpg[OPAC Account] - - -== Display Hold Types on Pull Lists == - -This feature ensures that the hold type can be displayed on all hold interfaces. - -You will find the following changes to the hold type indicator: - -. The hold type indicator will display by default on all XUL-based hold -interfaces. XUL-based hold interfaces are those that number the items on the -interface. This can be overridden by saving column configurations that remove -the _Type_ column. -. The hold type indicator will display by default on the HTML-based pull list. -To access, click _Circulation_ -> _Pull List for Hold Requests_ -> _Print Full -Pull List (Alternate Strategy)_. -. The hold type indicator can be added to the Simplified Pull List. To access, -click _Circulation_ -> _Pull List for Hold Requests_ -> _Simplified Pull List -Interface_. - -To add the hold type indicator to the simplified pull list, click _Simplified -Pull List Interface_, and right click on any of the column headers. The Column -Picker appears in a pop up window. Click the box adjacent to _Hold Type_, and -Click _Save_. The _Simplified Pull List Interface_ will now include the hold -type each time that you log into the staff client. - -image::media/Display_Hold_Types_on_Pull_Lists1.jpg[Display_Hold_Types_on_Pull_Lists1] diff --git a/docs-antora/modules/circulation/pages/booking.adoc b/docs-antora/modules/circulation/pages/booking.adoc deleted file mode 100644 index c3a2453e4d..0000000000 --- a/docs-antora/modules/circulation/pages/booking.adoc +++ /dev/null @@ -1,180 +0,0 @@ -= Booking Module = -:toc: - -== Creating a Booking Reservation == - -indexterm:[scheduling,resources using the booking module] -indexterm:[booking,reserving a resource] -indexterm:[booking,creating a reservation] -indexterm:[reserving a bookable resource] - -[NOTE] -The "Create a booking reservation" screen uses your library's timezone. 
If you create a reservation at a library -in a different timezone, Evergreen will alert you and provide the time in both your timezone and the other library's -timezone. - -Only staff members may create reservations. A reservation can be started from a patron record or from a booking resource. -If you do not know a catalogued booking item's barcode, you can start by searching the catalogue. - -=== To create a reservation from a patron record === - -. Retrieve the patron's record. -. Select Other -> Booking -> Create Reservations. This takes you to the Create Reservations Screen. -. If you want to create a reservation that lasts less than a day (such as for a study room), select _Single-day reservation_ -as the reservation type. If your reservation will last several days (such as for a video camera needed for a class project), -select _Multiple-day reservation_. -. In the area labeled "Reservation details", select the _Choose resource by barcode_ tab if you know the specific barcode -of a resource you'd like to reserve. Otherwise, select the _Choose resource by type_ tab. -. A schedule grid will display on the bottom part of the screen. -. If necessary, adjust the day or days that are displayed. You can also make other adjustments using the _Schedule settings_ -tab. -. For non-catalogued resources, patrons may wish to specify certain attributes. The _Attributes_ tab allows you to do this. -For example, if a patron is booking a laptop, they can choose between PC and Mac laptops if they need to. -. When you have found the days or times that work best, you can proceed with creating the reservation by doing one -of the following: -** Double click the appropriate row in the grid. -** Use the tab and space keys to select the appropriate rows, -then press Shift+F10 to open the actions menu. Select -"Create Reservation". -** Select the appropriate rows in the grid, then right click -to open the actions menu. Select "Create Reservation".
-** Select the appropriate rows in the grid, then select the -actions button. Select "Create Reservation". -. Adjust the values in this screen as necessary. -. Select the "Confirm reservation" button. -. The screen will refresh, and the new reservation will appear in the schedule. - - -=== Search the catalogue to create a reservation === - -If you would like to reserve a catalogued item but do not know the item barcode, you may start with a catalogue search. - -. Select Cataloguing -> Search the Catalogue to search for the item you wish to reserve. You may search by any -bibliographic information. -. Select the _Holdings View_ tab. -. Right-click on the row that you want to reserve. Select _Book Item Now_. This takes you to the Create Reservations Screen. -. If you want to create a reservation that lasts less than a day (such as for a study room), select _Single-day reservation_ -as the reservation type. If your reservation will last several days (such as for a video camera needed for a class project), -select _Multiple-day reservation_. -. A schedule grid will display on the bottom part of the screen. -. If necessary, adjust the day or days that are displayed. You can also make other adjustments using the _Schedule settings_ -tab. -. When you have found the days or times that work best, you can proceed with creating the reservation by doing one -of the following: -** Double click the appropriate row in the grid. -** Use the tab and space keys to select the appropriate rows, -then press Shift+F10 to open the actions menu. Select -"Create Reservation". -** Select the appropriate rows in the grid, then right click -to open the actions menu. Select "Create Reservation". -** Select the appropriate rows in the grid, then select the -actions button. Select "Create Reservation". -. Enter the patron's barcode. -. Adjust the values in this screen as necessary. -. Select the "Confirm reservation" button. -.
The screen will refresh, and the new reservation will appear in the schedule. - - -[NOTE] -Reservations on catalogued items can also be created on the Item Status (F5) screen. Select the item, then Actions -> Book Item Now. - -== Reservation Pull List == - -indexterm:[booking,pull list] -indexterm:[pull list,booking] - -The reservation pull list can be generated dynamically in the Staff Client. - -. To create a pull list, select Booking -> Pull List. - -. You can decide how many days in advance you would like to pull reserved items. Enter the number of days in the box -adjacent to Generate list for this many days hence. For example, if you would like to pull items that are needed today, -you can enter 1 in the box, and you will retrieve items that need to be pulled today. - -. The pull list will appear. Select the actions button, then _Print_ to print the pull list. - -== Capturing Items for Reservations == - -indexterm:[booking,capturing reservations] - -Depending on your library's workflow, reservations may need to be captured before they are ready to be picked up by the patron. - -[CAUTION] -Always capture reservations in the Booking Module. The Check In function in Circulation does not work the same way as Capture Resources. - -1) In the staff client, select Booking -> Capture Resources. - -image::media/booking-capture-1_web_client.png[] - -2) Scan or type the item barcode, then click Capture. - -image::media/booking-capture-2_web_client.png[] - -3) The message Capture succeeded will appear to the right. Information about the item will appear below the message. Click the Print button to print a slip for the reservation. - -image::media/booking-capture-3.png[] - - -== Picking Up Reservations == - -indexterm:[booking,picking up reservations] -indexterm:[booking,checkout] -indexterm:[checkout,booking resources] - -[CAUTION] -Always use the dedicated Booking Module interfaces for tasks related to reservations.
Items that have been captured for a -reservation cannot be checked out using the Check Out interface, even if the patron is the reservation recipient. - -1) Ready-for-pickup reservations can be listed from Other -> Booking -> Pick Up Reservations within a patron record or Booking -> Pick Up Reservations. - -2) Scan the patron barcode if using Booking -> Pick Up Reservations. - -3) The reservation(s) available for pickup will display. Select those you want to pick up and double click them. - -4) The screen will refresh to show that the patron has picked up the reservation(s). - - -== Returning Reservations == - -indexterm:[booking,returning reservations] -indexterm:[booking,checkin] -indexterm:[checkin,booking resources] - -[CAUTION] -When a reserved item is brought back, staff must use the Booking Module to return the reservation. - -1) To return reservations, select Booking -> Return Reservations - -2) You can return the item by patron or item barcode. Here we choose Resource to return by item barcode. Scan or enter the barcode, and click Go. - -3) A pop up box will tell you that the item was returned. Click OK on the prompt. - -4) If we select Patron on the above screen, after scanning the patron's barcode, reservations currently out to that patron are displayed. Highlight the reservations you want to return, and double click them. - -5) The screen will refresh to show any resources that remain out and the reservations that have been returned. - -[NOTE] -Reservations can be returned from within patron records by selecting Other -> Booking -> Return Reservations - -== Cancelling a Reservation == - -indexterm:[booking,canceling reservations] - -A reservation can be cancelled in a patron's record or reservation creation screen. - -=== Cancel a reservation from the patron record === - -1) Retrieve the patron's record. - -2) Select Other -> Booking -> Manage Reservations. - -3) The existing reservations will appear at the bottom of the screen. 
-
-4) Highlight the reservation that you want to cancel. Select the Actions menu, then select _Cancel Selected_.
-
-5) A pop-up window will confirm the cancellation. Click OK on the prompt.
-
-6) The screen will refresh, and the cancelled reservation(s) will disappear.
-
-
-
-
diff --git a/docs-antora/modules/circulation/pages/circulating_items_web_client.adoc b/docs-antora/modules/circulation/pages/circulating_items_web_client.adoc
deleted file mode 100644
index d1d7e0eeab..0000000000
--- a/docs-antora/modules/circulation/pages/circulating_items_web_client.adoc
+++ /dev/null
@@ -1,472 +0,0 @@
-= Circulating Items =
-:toc:
-
-== Check Out ==
-
-=== Regular Items ===
-
-1) To check out an item, click *Check Out Items* from the Circulation and Patrons toolbar, or select *Circulation* -> *Check Out*.
-
-image::media/checkout_menu_web_client.png[]
-
-2) Scan or enter the patron's barcode, clicking *Submit* if entering it manually. If scanning, the barcode is submitted automatically.
-
-image::media/retrieve_patron_web_client.png[]
-
-3) Scan or enter the item barcode, clicking *Submit* if entering it manually.
-
-image::media/checkout_item_barcode_web_client.png[]
-
-4) The due date is now displayed.
-
-image::media/due_date_display_web_client.png[]
-
-5) When all items are scanned, click the *Done* button to generate a slip receipt, or to exit the patron record if you are not printing slip receipts.
-
-=== Pre-cataloged Items ===
-
-1) Go to the patron's *Check Out* screen by clicking *Circulation* -> *Check Out Items*.
-
-2) Scan the item barcode.
-
-3) At the prompt, enter the required information and click *Precat Checkout*.
-
-image::media/precat_web_client.png[]
-
-[TIP]
-On check-in, Evergreen will prompt staff to re-route the item to cataloging.
-
-[NOTE]
-This screen does not respond to the enter key or carriage return provided
-by a barcode scanner when the cursor is in the ISBN field.
This behavior -prevents pre-cataloged items from being checked out before you are done -entering all the desired information. - -[NOTE] -This requires the _CREATE_PRECAT_ permission. All form elements in the -dialog other than the Cancel button will be disabled if the current user -lacks the CREATE_PRECAT permission. - -=== Due Dates === - -Circulation periods are pre-set. When items are checked out, due dates are automatically calculated and inserted into circulation records if the *Specific Due Date* checkbox is not selected on the Check Out screen. The *Specific Due Date* checkbox allows you to set a different due date to override the pre-set loan period. - -Before you scan the item, select the *Specific Due Date* checkbox. Enter the date in yyyy-mm-dd format. This date applies to all items until you change the date, de-select the *Specific Due Date* checkbox, or quit the patron record. - -image::media/specify_due_date1_web_client.png[] - - -=== Email Checkout Receipts === - -This feature allows patrons to receive checkout receipts through email at the circulation desk and in the Evergreen self-checkout interface. Patrons need to opt in to receive email receipts by default and must have an email address associated with their account. Opt in can be staff mediated at the time of account creation or in existing accounts. Patrons can also opt in directly in their OPAC account or through patron self-registration. This feature does not affect the behavior of checkouts from SIP2 devices. - -==== Staff Client Check Out ==== - -When a patron has opted to receive email checkout receipts by default, an envelope icon representing email will appear next to the receipt options in the Check Out screen. A printer icon representing a physical receipt appears if the patron has not opted in to the default email receipts. 
- -image::media/ereceipts5_web_client.PNG[] - -Staff can click *Quick Receipt* and the default checkout receipt option will be triggered—an email will be sent or the receipt will print out. The Quick Receipt option allows staff to stay in the patron account after completing the transaction. Alternatively, staff can click *Done* to trigger the default checkout receipt and close out the patron account. By clicking on the arrow next to the Quick Receipt or Done buttons, staff can select which receipt option to use, regardless of the selected default. The email receipt option will be disabled if the patron account does not have an email address. - -==== Self Checkout ==== - -In the Self Checkout interface, patrons will have the option to select a print or email checkout receipt, or no receipt. The radio button for the patron's default receipt option will be selected automatically in the interface. Patrons can select a different receipt option if desired. The email receipt radio button will be disabled if there is no email address associated with the patron's account. - -image::media/ereceipts6_web_client.PNG[] - -==== Opt In ==== - -*Staff Mediated Opt In At Registration* - -Patrons can be opted in to receive email checkout receipts by default by library staff upon the creation of their library account. Within the patron registration form, there is a new option below the Email Address field to select _Email checkout receipts by default?_. Select this option if the patron wants email checkout receipts to be their default. Save any changes. - -image::media/ereceipts1_web_client.PNG[] - -*Staff Mediated Opt In After Registration* - -Staff can also select email checkout receipts as the default option in a patron account after initial registration. Within the patron account go to *Edit* and select _Email checkout receipts by default?_. Make sure the patron also has an email address associated with their account. Save any changes. 
- -image::media/ereceipts2_web_client.PNG[] - -*Patron Opt In – Self-Registration Form* - -If your library offers patrons the ability to request a library card through the patron self-registration form, they can select email checkout receipts by default in the initial self-registration form: - -image::media/ereceipts3_web_client.PNG[] - -*Patron Opt In - OPAC Account* - -Patrons can also opt in to receive email checkout receipts by default directly in their OPAC account. After logging in, patrons can go to *Account Preferences->Notification Preferences* and enable _Email checkout receipts by default?_ and click *Save*. - -image::media/ereceipts4_web_client.PNG[] - - -==== Email Checkout Receipt Configuration ==== - -Email checkout receipts will be sent out through a Notifications/Action Trigger called Email Checkout Receipt. The email template and action trigger can be customized by going to *Administration->Local Administration->Notifications/Action Trigger->Email Checkout Receipt*. - - -== Check In == - -=== Regular check in === - -1) To check in an item click *Check In Items* from the Circulation and Patrons toolbar, or select *Circulation* -> *Check In*. - -image::media/check_in_menu_web_client.png[] - -2) Scan item barcode or enter manually and click *Submit*. - -image::media/checkin_barcode_web_client.png[] - -3) If there is an overdue fine associated with the checkin, an alert will appear at the top of the screen with a fine tally for the current checkin session. To immediately handle fine payment, click the alert to jump to the patron's bill record. - -image::media/overdue_checkin_web_client.png[] - -4) If the checkin is an item that can fill a hold, a pop-up box will appear with patron contact information or routing information for the hold. - -5) Print out the hold or transit slip and place the item on the hold shelf or route it to the proper library. 
- -6) If the item is not in a state acceptable for hold/transit (for instance, it is damaged), select the line of the item, and choose *Actions* -> *Cancel Transit*. The item will then have a status of _Canceled Transit_ rather than _In Transit_. - -image::media/Check_In-Cancel_Transit.png[Actions Menu - Cancel Transit] - -=== Backdated check in === - -This is useful for clearing a book drop. - -1) To change effective check-in date, select *Circulation* -> *Check In Items*. In *Effective Date* field enter the date in yyyy-mm-dd format. - -image::media/backdate_checkin_web_client.png[] - -2) The new effective date is now displayed in the red bar above the Barcode field. - -image::media/backdate_red_web_client.png[] - -3) Move the cursor to the *Barcode* field. Scan the items. When finishing backdated check-in, change the *Effective Date* back to today's date. - -=== Backdate Post-Checkin === - -After an item has been checked in, you may use the Backdate Post-Checkin function to backdate the check-in date. - -1) Select the item on the Check In screen, click *Actions* -> *Backdate Post-Checkin*. - -image::media/backdate_post_checkin_web_client.png[] - -2) In *Effective Date* field enter the date in yyyy-mm-dd format. The check-in date will be adjusted according to the new effective check-in date. - -image::media/backdate_post_date_web_client.png[] - -.Checkin Modifiers -[TIP] -=================================================== -At the right bottom corner there is a *Checkin Modifiers* pop-up list. The options are: - -- *Ignore Pre-cat Items*: No prompt when checking in a pre-cat item. Item will be routed to Cataloguing with Cataloguing status. - -- *Suppress Holds and Transit*: Item will not be used to fill holds or sent in transit. Item has Reshelving status. - -- *Amnesty Mode/Forgive Fines*: Overdue fines will be voided if already created or not be inserted if not yet created (e.g. hourly loans). 
-
-- *Auto-Print Hold and Transit Slips*: Slips will be automatically printed without a prompt for confirmation.
-
-- *Clear Holds Shelf*: Checking in hold-shelf-expired items will clear the items from the hold shelf (holds to be cancelled).
-
-- *Retarget Local Holds*: When checking in in-process items that are owned by the library, attempt to find a local hold to retarget. This is intended to help with proper targeting of newly-catalogued items.
-
-- *Retarget All Statuses*: Similar to Retarget Local Holds, this modifier will attempt to find a local hold to retarget, regardless of the status of the item being checked in. This modifier must be used in conjunction with the Retarget Local Holds modifier.
-
-- *Capture Local Holds as Transits*: With this checkin modifier, any local holds will be given an In Transit status instead of On Holds Shelf. The intent is to stop the system from sending hold notifications before the item is ready to be placed on the holds shelf; the item will have a status of In Transit until it is checked in again. If you simply wish to delay notification and allow time for staff to process the item to the holds shelf, you may wish to use the Hold Shelf Status Delay setting in the Library Settings Editor instead. See the Local Administration section for more information.
-
-- *Manual Floating Active*: Floating Groups must be configured for this modifier to function. The manual flag in Floating Groups dictates whether or not the "Manual Floating Active" checkin modifier needs to be active for a copy to float. This allows for greater control over when items float.
-
-- *Update Inventory*: When this checkin modifier is selected, scanned barcodes will have the current date/time added as the inventory date while the item is checked in.
-
-These options may be selected simultaneously. The selected options are displayed in the header area.
-
-image::media/checkinmodifiers-with-inventory2.png[Web client check-in modifiers]
-===================================================
-
-== Renewal and Editing the Item's Due Date ==
-
-Checked-out items can be renewed if your library's policy allows it. The new due date is calculated from the renewal date. Existing loans can also be extended to a specific date by editing the due date or renewing with a specific due date.
-
-=== Renewing via a Patron's Account ===
-
-1) Retrieve the patron record and go to the *Items Out* screen.
-
-image::media/items_out_click_web_client.png[]
-
-2) Select the item you want to renew. Click on *Actions* -> *Renew*. If you want to renew all items in the account, click *Renew All* instead.
-
-image::media/renew_action_web_client.png[]
-
-3) If you want to specify the due date, click *Renew with Specific Due Date*. You will be prompted to select a due date. Once done, click *Apply*.
-
-//image::media/renew_specific_date_web_client.png[]
-
-
-=== Renewing by Item Barcode ===
-
-1) To renew items by barcode, select *Circulation* -> *Renew Items*.
-
-2) Scan or manually enter the item barcode.
-
-image::media/renew_item_web_client.png[]
-
-3) If you want to specify the due date, click *Specific Due Date* and enter a new due date in yyyy-mm-dd format.
-
-image::media/renew_item_calendar_web_client.png[]
-
-=== Editing Due Date ===
-
-1) Retrieve the patron record and go to the *Items Out* screen.
-
-2) Select the item whose due date you want to edit. Click on *Actions* -> *Edit Due Date*.
-
-image::media/edit_due_date_action_web_client.png[]
-
-3) Enter a new due date in yyyy-mm-dd format in the pop-up window, then click *OK*.
-
-[NOTE]
-Editing a due date is not included in the renewal count.
-
-== Marking Items Lost and Claimed Returned ==
-
-=== Lost Items ===
-
-1) To mark items Lost, retrieve the patron record and click *Items Out*.
-
-2) Select the item. Click on *Actions* -> *Mark Lost (by Patron)*.
- -image::media/mark_lost_web_client.png[] - -3) The lost item now displays as lost in the *Items Checked Out* section of the patron record. - -image::media/lost_section_web_client.png[] - -4) The lost item also adds to the count of *Lost* items in the patron summary on the left (or top) of the screen. - -image::media/patron_summary_checkouts_web_client.png[] - -[NOTE] -Lost Item Billing -======================== -- Marking an item Lost will automatically bill the patron the replacement cost of the item as recorded in the price field in the item record, and a processing fee as determined by your local policy. If the lost item has overdue charges, the overdue charges may be voided or retained based on local policy. -- A lost-then-returned item will disappear from the Items Out screen only when all bills linked to this particular circulation have been resolved. Bills may include replacement charges, processing fees, and manual charges added to the existing bills. -- The replacement fee and processing fee for lost-then-returned items may be voided if set by local policy. Overdue fines may be reinstated on lost-then-returned items if set by local policy. -======================== - -=== Refunds for Lost Items === - -If an item is returned after a lost bill has been paid and the library's policy is to void the replacement fee for lost-then-returned items, there will be a negative balance in the bill. A refund needs to be made to close the bill and the circulation record. Once the outstanding amount has been refunded, the bill and circulation record will be closed and the item will disappear from the Items Out screen. - -If you need to balance a bill with a negative amount, you need to add two dummy bills to the existing bills. The first one can be of any amount (e.g. $0.01), while the second should be of the absolute value of the negative amount. Then you need to void the first dummy bill. 
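As a worked example with hypothetical amounts, suppose voiding the replacement fee leaves a balance of -$5.00. The arithmetic behind the dummy-bill workaround can be sketched as follows (illustration only; the actual bills are added and voided in the staff client, not in code):

```python
# Work in cents to avoid floating-point rounding issues.
balance = -500                # -$5.00 owed to the patron after the void
first_dummy = 1               # first dummy bill: any amount, e.g. $0.01
second_dummy = abs(balance)   # second dummy bill: absolute value of the balance, $5.00

balance += first_dummy + second_dummy   # add both dummy bills
balance -= first_dummy                  # then void the first dummy bill
print(balance)  # 0 -> the bill is balanced and the record can close
```

Once the first dummy bill is voided, the balance reaches exactly zero, which allows Evergreen to close the bill and the circulation record.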
The reason for using a dummy bill is that Evergreen will check and close the circulation record only when payment is applied or bills are voided.
-
-=== Claimed Returned Items ===
-
-1) To mark an item Claimed Returned, retrieve the patron record and go to the *Items Out* screen.
-
-2) Select the item, then select *Actions* -> *Mark Claimed Returned* from the dropdown menu.
-
-image::media/mark_claims_returned_web_client.png[]
-
-3) Enter the date in yyyy-mm-dd format and click *Submit*.
-
-image::media/claimed_date_web_client.png[]
-
-4) The Claimed Returned item now displays in the *Other/Special Circulations* section of the patron record.
-
-image::media/cr_section_web_client.png[]
-
-5) The Claimed Returned item adds to the count of items that are Claimed Returned in the patron summary on the left (or top) of the screen. It also adds to the total *Other/Special Circulations* that is displayed when editing the patron's record.
-
-image::media/patron_summary_checkouts_web_client.png[]
-
-[NOTE]
-More on Claimed Returned Items
-====================================
-- The date entered for a Claimed Returned item establishes the fine. If the date given has passed, bills will be adjusted accordingly.
-- When a Claimed Returned item is returned, if there is an outstanding bill associated with it, the item will not disappear from the *Items Out* screen. It will disappear when the outstanding bills are resolved.
-- When an item is marked Claimed Returned, the value in the *Claims-returned Count* field in the patron record is automatically increased. Staff can manually adjust this count by editing the patron record.
-====================================
-
-== In-house Use (F6) ==
-
-1) To record in-house use, select *Circulation* -> *Record In-House Use*, click *Check Out* -> *Record In-House Use* on the circulation toolbar, or press *F6*.
-
-image::media/record_in_house_action_web_client.png[]
-
-2) To record in-house use for cataloged items, enter the number of uses, scan or type the barcode, and click *Submit*.
-
-image::media/in_house_use_web_client.png[]
-
-[NOTE]
-====================================
-There are two independent library settings that control whether copy alerts display when an item is scanned in In-house Use:
-*Display copy alert for in-house-use*, when set to true, will cause the copy's alert message, if it has one, to appear when recording in-house use.
-*Display copy location check in alert for in-house-use*, when set to true, will cause an alert message indicating that the item needs to be routed to its location, if the location has its check in alert set to true.
-====================================
-
-3) To record in-house use for non-cataloged items, enter the number of uses, choose the non-cataloged type from the drop-down menu, and click *Submit*.
-
-image::media/in_house_use_non_cat.png[]
-
-[NOTE]
-The statistics of in-house use are separated from circulation statistics. The in-house use count of cataloged items is not included in the items' total use count.
-
-[[itemstatus_web_client]]
-== Item Status ==
-
-Many actions can be taken on the Item Status screen by either circulation staff or catalogers. Here we will cover some circulation-related functions, namely checking item status, viewing past circulations, inserting item alert messages, and marking items missing or damaged.
-
-=== Checking item status ===
-
-1) To check the status of an item, select *Search* -> *Search for copies by Barcode*.
-
-image::media/item_status_menu_web_client.png[]
-
-2) Scan the barcode or type it and click *Submit*. The current status of the item is displayed, along with selected other fields. You can use the column picker to select more fields to view.
- -image::media/item_status_barcode_web_client.png[] - -3) Click the *Detail View* button and the item summary and circulation history will be displayed. - -image::media/item_status_altview_web_client.png[] - -4) Click *List View* to go back. - -image::media/item_status_list_view_web_client.png[] - -[NOTE] -If the item's status is "Available", the displayed due date refers to the previous circulation's due date. - -[TIP] -Upload From File allows you to load multiple items saved in a file on your local computer. The file contains a list of the barcodes in text format. To ensure smooth uploading and further processing on the items, it is recommended that the list contains no more than 100 items. - -=== Viewing past circulations === -1) To view past circulations, retrieve the item on the *Item Status* screen as described above. - -2) Select *Detail view*. - -image::media/last_few_circs_action_web_client.png[] - -3) Choose *Recent Circ History*. The item’s recent circulation history is displayed. - -image::media/last_few_circs_display_web_client.png[] - -4) To retrieve the patron(s) of the last circulations, click on the name of the patron. The patron record will be displayed. - -[TIP] -The number of items that displays in the circulation history can be set in Local *Administration* -> *Library Settings Editor*. - -[NOTE] -You can also retrieve the past circulations on the patron's Items Out screen and from the Check In screen. - -=== Marking items damaged or missing and other functions === -1) To mark items damaged or missing, retrieve the item on the *Item Status* screen. - -2) Select the item. Click on *Actions for Selected Items* -> *Mark Item Damaged* or *Mark Item Missing*. - -// image::media/mark_missing_damaged_web_client.png[] - -[NOTE] -Depending on the library's policy, when marking an item damaged, bills (cost and/or processing fee) may be inserted into the last borrower's account. 
-
-3) Following the above procedure, you can check in and renew items by using the *Check in Items* and *Renew Items* options on the dropdown menu.
-
-=== Item alerts ===
-
-The *Edit Item Attributes* function on the *Actions for Selected Items* dropdown list allows you to edit item records. Here, we will show you how to insert item alert messages with this function. See cataloging instructions for more information on item editing.
-
-1) Retrieve the record on the *Item Status* screen.
-
-2) Once the item is displayed, highlight it and select *Actions for Selected Items* -> *Edit Item Attributes*.
-
-3) The item record is displayed in the *Copy Editor*.
-
-//image::media/copy_edit_alert_web_client.png[]
-
-4) Click *Alert Message* in the *Miscellaneous* column. The background color of the box changes. Type in the message, then click *Apply*.
-
-//image::media/copy_alert_message_web_client.png[]
-
-5) Click *Modify Copies*, then confirm the action.
-
-
-== Long Overdue Items ==
-
-*Items Marked Long Overdue*
-
-Once an item has been overdue for a configurable amount of time, Evergreen will mark the item long overdue in the borrowing patron’s account. This will be done automatically through a Notification/Action Trigger. When the item is marked long overdue, several actions will take place:
-
-. The item will go into the status of “Long Overdue”
-
-. The accrual of overdue fines will be stopped
-
-Optionally the patron can be billed for the item price, a long overdue
-processing fee, and any overdue fines can be voided from the account. Patrons
-can also be sent a notification that the item was marked long overdue. And
-long-overdue items can be included on the "Items Checked Out" or "Other/Special
-Circulations" tabs of the "Items Out" view of a patron's record. These are all
-controlled by <<longoverdue_library_settings,library settings>>.
- -image::media/long_overdue1.png[Patron Account-Long Overdue] - - -*Checking in a Long Overdue item* - -If an item that has been marked long overdue is checked in, an alert will appear on the screen informing the staff member that the item was long overdue. Once checked in, the item will go into the status of “In process”. Optionally, the item price and long overdue processing fee can be voided and overdue fines can be reinstated on the patron’s account. If the item is checked in at a library other than its home library, a library setting controls whether the item can immediately fill a hold or circulate, or if it needs to be sent to its home library for processing. - -image::media/long_overdue2.png[Long Overdue Checkin] - -*Notification/Action Triggers* - -Evergreen has two sample Notification/Action Triggers that are related to marking items long overdue. The sample triggers are configured for 6 months. These triggers can be configured for any amount of time according to library policy and will need to be activated for use. - -* Sample Triggers - -** 6 Month Auto Mark Long-Overdue—will mark an item long overdue after the configured period of time - -** 6 Month Long Overdue Notice—will send patron notification that an item has been marked long overdue on their account - -[[longoverdue_library_settings]] -*Library Settings* - -The following Library Settings enable you to set preferences related to long overdue items: - -* *Circulation: Long-Overdue Check-In Interval Uses Last Activity Date* —Use the - long-overdue last-activity date instead of the due_date to determine whether - the item has been checked out too long to perform long-overdue check-in - processing. If set, the system will first check the last payment time, - followed by the last billing time, followed by the due date. See also the - "Long-Overdue Max Return Interval" setting. 
- -* *Circulation: Long-Overdue Items Usable on Checkin* —Long-overdue items are usable on checkin instead of going "home" first - -* *Circulation: Long-Overdue Max Return Interval* —Long-overdue check-in processing (voiding fees, re-instating overdues, etc.) will not take place for items that have been overdue for (or have last activity older than) this amount of time - -* *Circulation: Restore Overdues on Long-Overdue Item Return* - -* *Circulation: Void Long-Overdue item Billing When Returned* - -* *Circulation: Void Processing Fee on Long-Overdue Item Return* - -* *Finances: Leave transaction open when long overdue balance equals zero* —Leave transaction open when long-overdue balance equals zero. This leaves the lost copy on the patron record when it is paid - -* *Finances: Long-Overdue Materials Processing Fee* - -* *Finances: Void Overdue Fines When Items are Marked Long-Overdue* - -* *GUI: Items Out Long-Overdue display setting* - -[TIP] -Learn more about these settings in the chapter about the -Library Settings Editor. - -*Permissions to use this Feature* - -The following permissions are related to this feature: - -* COPY_STATUS_LONG_OVERDUE.override - -** Allows the user to check-in long-overdue items thus removing the long-overdue status on the item - - - diff --git a/docs-antora/modules/circulation/pages/circulation_patron_records_web_client.adoc b/docs-antora/modules/circulation/pages/circulation_patron_records_web_client.adoc deleted file mode 100644 index 693d180199..0000000000 --- a/docs-antora/modules/circulation/pages/circulation_patron_records_web_client.adoc +++ /dev/null @@ -1,643 +0,0 @@ -= Circulation - Patron Record = -:toc: - -[[searching_patrons]] -== Searching Patrons == - -indexterm:[patrons, searching for] - -To search for a patron, select _Search -> Search for Patrons_ from the menu bar. - -The Patron Search screen will display. 
It will contain options to search on the -following fields: - -* Last Name -* First Name -* Middle Name - -image::media/circulation_patron_records-1a_web_client.png[circulation_patron_records 1a] - - -Next to the _Clear Form_ button there is a button with an arrow pointing down that will display the following additional search fields: - -* Barcode -* Alias -* Username -* Email -* Identification -* database ID -* Phone -* Street 1 -* Street 2 -* City -* State -* Postal Code -* Profile Group -* Home Library -* DOB (date of birth) year -* DOB month -* DOB day - -To include patrons marked ``inactive'', click on the _Include Inactive?_ checkbox. - - -image::media/circulation_patron_records-1b_web_client.png[circulation_patron_records 1b] - -.Tips for searching -[TIP] -=================== -* Search one field or combine fields for more precise results. -* Truncate search terms for more search results. -* Search ignores punctuation such as diacritics, apostrophes, hyphens and commas. -* Searching by Date of Birth: Year searches are "contains" searches. E.g. year - "15" matches 2015, 1915, 1599, etc. For exact matches use the full 4-digit - year. Day and month values are exact matches. E.g. month "1" (or "01") matches - January, "12" matches December. -=================== - -Once you have located the desired patron, click on the entry row for this patron in -the results screen. A summary for this patron will display on the left hand side. - -image::media/circulation_patron_records-2_web_client.png[circulation_patron_records 2] - -The _Patron Search_ button on the upper right may be used to resume searching for patrons. - -== Retrieve Recent Patrons == - -indexterm:[patrons, retrieving recent] - -=== Setting up Retrieve Recent Patrons === - -* This feature must be configured in the _Library Settings Editor_ -(_Administration -> Local Administration -> Library Settings Editor_). 
The -library setting is called "Number of Retrievable Recent Patrons" and is located -in the Circulation settings group. -** A value of zero (0) means no recent patrons can be retrieved. -** A value greater than 1 means staff will be able to retrieve multiple recent -patrons via a new _Circulation -> Retrieve Recent Patrons_ menu entry. -** The default value is 1 for backwards compatibility. (The _Circulation -> -Retrieve Last Patron_ menu entry will be available.) - -=== Retrieving Recent Patrons === -* Once the library setting has been configured to a number greater than 1, the -option Retrieve Recent Patrons will appear below the Retrieve Last patron -option in the Circulation drop-down from the Menu Bar (_Circulation -> -Retrieve Recent Patrons_). - -* When selected, a grid will appear listing patrons accessed by that workstation -in the current session. The length of the list will be limited by the value -configured in the _Library Settings Editor_. If no patrons have been accessed, -the grid will display "No Items To Display." - - -== Registering New Patrons == - -indexterm:[patrons, registering] - -To register a new patron, select _Circulation -> Register Patron_ from the menu bar. The Patron -Registration form will display. - -image::media/circulation_patron_records-4.JPG[Patron registration form] - -Mandatory fields display in yellow. - -image::media/circulation_patron_records-5.JPG[circulation_patron_records 5] - -The _Show: Required Fields_ and _Show: Suggested Fields_ links may be used to limit -the options on this page. - -image::media/circulation_patron_records-6.JPG[circulation_patron_records 6] - -When finished entering the necessary information, select _Save_ to save the new -patron record or _Save & Clone_ to register a patron with the same address. -When _Save & Clone_ is selected, the address information is copied into the -resulting patron registration screen. It is linked to the original patron. 
-Address information may only be edited through the original record. - -image::media/circulation_patron_records-8.JPG[circulation_patron_records 8] - -[TIP] -============================================================================ -* Requested fields may be configured in the _Library Settings Editor_ -(_Administration -> Local Administration -> Library Settings Editor_). -* Statistical categories may be created for information tracked by your library -that is not in the default patron record. -* These may be configured in the _Statistical Categories Editor_ -(_Administration -> Local Administration -> Statistical Categories Editor_). -* Staff accounts may also function as patron accounts. -* You must select a _Main (Profile) Permission Group_ before the _Update Expire -Date_ button will work, since the permission group determines the expiration date. -============================================================================ - -=== Email field === - -indexterm:[patrons,email addresses] -indexterm:[email] - -It's possible for administrators to set up the email field to allow or disallow -multiple email addresses for a single patron (usually separated by a comma). -If you'd like to make changes to whether multiple email addresses -are allowed here or not, ask your system administrator to change the -`ui.patron.edit.au.email.regex` library setting. - - -== Patron Self-Registration == -*Abstract* - -Patron Self-Registration allows patrons to initiate registration for a library account through the OPAC. Patrons can fill out a web-based form with basic information that will be stored as a “pending patron” in Evergreen. Library staff can review pending patrons in the staff-client and use the pre-loaded account information to create a full patron account. Pending patron accounts that are not approved within a configurable amount of time will be automatically deleted. - -*Patron Self-Registration* - -. In the OPAC, click on the link to *Request Library Card* - -. 
Fill out the self-registration form to request a library card, and click *Submit Registration*. - -. Patrons will see a confirmation message: “Registration successful! Please see library staff to complete your registration.” - -image::media/patron_self_registration2.jpg[Patron Self-Registration form] - -*Managing Pending Patrons* - -. In the staff client select *Circulation* -> *Pending Patrons*. - -. Select the patron you would like to review. In this screen you have the option to *Load* the pending patron information to create a permanent library account. - -. To create a permanent library account for the patron, click on the patron’s row, click on the *Load Patron* button at the top of the screen. This will load the patron self-registration information into the main *Patron Registration* form. - -. Fill in the necessary patron information for your library, and click *Save* to create the permanent patron account. - - -[[updating_patron_information]] -== Updating Patron Information == - -indexterm:[patrons, updating] - -Retrieve the patron record as described in the section -<>. - -Click on _Edit_ from the options that display at the top of the patron record. - -image::media/circulation_patron_records-9_web_client.png[Patron edit with summary display] - -Edit information as required. When finished, select _Save_. - -After selecting _Save_, the page will refresh. The edited information will be -reflected in the patron summary pane. - -[TIP] -======= -* To quickly renew an expired patron, click the _Update Expire Date_ button. -You will need a _Main (Profile) Permission Group_ selected for this to work, -since the permission group determines the expiration date. 
-
-=======
-
-
-== Renewing Library Cards ==
-
-indexterm:[library cards, renewing]
-
-When initially retrieved, expired patron accounts display an alert
-stating that the ``Patron account is EXPIRED.''
-
-image::media/circulation_patron_records-11_web_client.png[circulation_patron_records 11]
-
-Open the patron record in edit mode as described in the section
-<>.
-
-Navigate to the information field labeled _Privilege Expiration Date_. Enter a
-new date in this box, or click the calendar icon, and a calendar widget
-will display to help you easily navigate to the desired date.
-
-image::media/circulation_patron_records-12.JPG[circulation_patron_records 12]
-
-Select the date using the calendar widget or key the date in manually. Click
-the _Save_ button. The screen will refresh and the ``expired'' alerts on the
-account will be removed.
-
-
-== Lost Library Cards ==
-
-indexterm:[library cards, replacing]
-
-Retrieve the patron record as described in the section
-<>.
-
-Open the patron record in edit mode as described in the section
-<>.
-
-Next to the _Barcode_ field, select the _Replace Barcode_ button.
-
-image::media/circulation_patron_records_13.JPG[circulation_patron_records 13]
-
-This will clear the barcode field. Enter a new barcode and _Save_ the record.
-The screen will refresh and the new barcode will display in the patron summary
-pane.
-
-If a patron’s barcode is mistakenly replaced, the old barcode may be reinstated.
-Retrieve the patron record as described in the section
-<>. Open the patron record in
-edit mode as described in the section <>.
-
-Select the _See All_ button next to the _Replace Barcode_ button. This will
-display the current and past barcodes associated with this account.
-
-image::media/circulation_patron_records_14.JPG[circulation_patron_records 14]
-
-Check the box(es) for all barcodes that should be ``active'' for the patron. An
-``active'' barcode may be used for circulation transactions. 
A patron may have -more than one ``active'' barcode. Only one barcode may be designated -``primary.'' The ``primary'' barcode displays in the patron’s summary -information in the _Library Card_ field. - -Once you have modified the patron barcode(s), _Save_ the patron record. If you -modified the ``primary'' barcode, the new primary barcode will display in the -patron summary screen. - -== Resetting Patron's Password == - -indexterm:[patrons, passwords] - -A patron’s password may be reset from the OPAC or through the staff client. To -reset the password from the staff client, retrieve the patron record as -described in the section <>. - -Open the patron record in edit mode as described in the section -<>. - -Select the _Generate Password_ button next to the _Password_ field. - -image::media/circulation_patron_records_15.JPG[circulation_patron_records 15] - -NOTE: The existing password is not displayed in patron records for security -reasons. - -A new number will populate the _Password_ text box. -Make note of the new password and _Save_ the patron record. The screen will -refresh and the new password will be suppressed from view. - - -== Barring a Patron == - -indexterm:[patrons, barring] - -A patron may be barred from circulation activities. To bar a patron, retrieve -the patron record as described in the section -<>. - -Open the patron record in edit mode as described in the section -<>. - -Check the box for _Barred_ in the patron account. - -image::media/circulation_patron_records-16.JPG[circulation_patron_records 16] - -_Save_ the user. The screen will refresh. - -NOTE: Barring a patron from one library bars that patron from all consortium -member libraries. - -To unbar a patron, uncheck the Barred checkbox. - - -== Barred vs. Blocked == - -indexterm:[patrons, barring] - -*Barred*: Stops patrons from using their library cards; alerts the staff that -the patron is banned/barred from the library. 
The ``check-out'' functionality is
-disabled for barred patrons (NO option to override – the checkout window is
-unusable and the bar must be removed from the account before the patron is able
-to check out items).  These patrons may still log in to the OPAC to view their
-accounts.
-
-indexterm:[patrons, blocking]
-
-*Blocked*: Often, these are system-generated blocks on patron accounts.  
-
-Some examples:
-
-* Patron exceeds fine threshold
-* Patron exceeds max checked out item threshold
-
-A notice appears when a staff person tries to check out an item to a blocked
-patron, but staff may be given permissions to override blocks.
-
-
-== Staff-Generated Messages ==
-
-[[staff_generated_messages]]
-indexterm:[patrons, messages]
-
-There are several types of messages available for staff to leave notes on patron records.
-
-*Patron Notes*: These notes are added via _Other_ -> _Notes_ in the patron record. These notes can be viewable by staff only or shared with the patron. Staff initials can be required. (See the section <> for more.)
-
-*Patron Alerts*: This type of alert is added via the _Edit_ button in the patron record. There is currently no way to require staff initials for this type of alert. (See the section <> for more.)
-
-*Staff-Generated Penalties/Messages*: These messages are added via the _Messages_ button in the patron record. They can be a note, alert, or block. Staff initials can be required. (See the section <> for more.)
-
-== Patron Alerts ==
-
-[[circulation_patron_alerts]]
-indexterm:[patrons, Alerts]
-
-When an account has an alert on it, a Stop sign is displayed when the record is
-retrieved.
-
-image::media/circulation_patron_records-18_web_client.png[circulation_patron_records 18]
-
-Navigating to an area of the patron record using the navigation buttons at the
-top of the record (for example, Edit or Bills) will clear the message from view.
-
-If you wish to view these alerts after they are cleared from view, they may be
-retrieved. 
Use the Other menu to select _Display Alerts and Messages_.
-
-image::media/circulation_patron_records-19_web_client.png[circulation_patron_records 19]
-
-There are two types of Patron Alerts:
-
-*System-generated alerts*: Once the cause is resolved (e.g. patron's account has
-been renewed), the message will disappear automatically.
-
-*Staff-generated alerts*: Must be added and removed manually.
-
-To add an alert to a patron account, retrieve the patron record as described
-in the section <>.
-
-Open the patron record in edit mode as described in the section
-<>.
-
-Enter the alert text in the Alert Message field.
-
-image::media/circulation_patron_records-20.png[circulation_patron_records 20]
-
-_Save_ the record. The screen will refresh and the alert will display.
-
-To remove the alert, retrieve the patron record as described in the section
-<>.
-
-Open the patron record in edit mode as described in the section
-<>.
-
-Delete the alert text in the _Alert Message_ field.
-
-_Save_ the record.
-
-The screen will refresh and the indicators for the alert will be removed from
-the account.
-
-== Patron Notes ==
-
-[[circulation_patron_notes]]
-indexterm:[patrons, Notes]
-
-Notes are strictly communicative and may be made visible to the patron via their
-account on the OPAC. In the OPAC, these notes display on the account summary
-screen.
-
-image::media/circulation_patron_records-23_web_client.png[circulation_patron_records 23]
-
-To insert or remove a note, retrieve the patron record as described in the
-section <>.
-
-Open the patron record in edit mode as described in the section
-<>.
-
-Use the Other menu to navigate to _Notes_.
-
-image::media/circulation_patron_records-24_web_client.png[circulation_patron_records 24]
-
-Select the _Add New Note_ button. A _Create a new note_ window displays. 
- -[TIP] -================================================ -Your system administrator can add a box in the _Add Note_ window for staff initials and -require those initials to be entered. They can do so using the "Require staff initials..." -settings in the Library Settings Editor. -================================================ - -Enter note information. - -Select the check box for _Patron Visible_ to display the note in the OPAC. - -image::media/circulation_patron_records-25_web_client.png[circulation_patron_records 25] - -Select _OK_ to save the note to the patron account. - -To delete a note, go to _Other -> Notes_ and use the _Delete_ button -on the right of each note. - -image::media/circulation_patron_records-26_web_client.png[circulation_patron_records 26] - -== Staff-Generated Penalties/Messages == - -[[staff_generated_penalties_web_client]] -To access this feature, use the _Messages_ button in the patron record. - -image::media/staff-penalties-1_web_client.png[Messages screen] - -=== Add a Message === - -Click *Apply Penalty/Message* to begin the process of adding a message to the patron. - -image::media/staff-penalties-2_web_client.png[Apply Penalty Dialog Box] - -There are three options: Notes, Alerts, Blocks - -* *Note*: This will create a non-blocking, non-alerting note visible to staff. Staff can view the message by clicking the _Messages_ button on the patron record. (Notes created in this fashion will not display via _Other_ -> _Notes_, and cannot be shared with the patron. See the <> section for notes which can be shared with the patron.) - -* *Alert*: This will create a non-blocking alert which appears when the patron record is first retrieved. The alert will cause the patron name to display in red, rather than black, text. Alerts may be viewed by clicking the _Messages_ button on the patron record or by selecting _Other_ -> _Display Alerts and Messages_. 
-
-* *Block*: This will create a blocking alert which appears when the patron record is first retrieved, and which behaves much as the non-blocking alert described previously. The patron will also be blocked from circulation, holds, and renewals until the block is cleared by staff.
-
-After selecting the type of message to create, enter the message body into the box. If Staff Initials are required, they must be entered into the _Initials_ box before the message can be added. Otherwise, fill in the optional _Initials_ box and click *OK*.
-
-The message should now be visible in the _Staff-Generated Penalties/Messages_ list. If the message is a blocking or non-blocking alert, it will also display immediately when the patron record is retrieved.
-
-image::media/staff-penalties-3_web_client.png[Messages on a record]
-
-=== Modify a Message ===
-
-Messages can be edited by staff after they are created.
-
-image::media/staff-penalties-4_web_client.png[Actions menu]
-
-Click to select the message to be modified, then click _Actions_ -> _Modify Penalty/Message_. This menu can also be accessed by right-clicking in the message area.
-
-image::media/staff-penalties-5_web_client.png[Modify penalty dialog box]
-
-To change the type of message, click on *Note*, *Alert*, or *Block* to select the new type. Edit or add new text in the message body. Enter Staff Initials into the _Initials_ box (may be required) and click *OK* to submit the alterations.
-
-image::media/staff-penalties-6_web_client.png[Modified message in the list]
-
-=== Archive a Message ===
-
-Messages which are no longer current can be archived by staff. This action will remove any alerts or blocks associated with the message, but retains the information contained there for future reference.
-
-image::media/staff-penalties-4_web_client.png[Actions menu]
-
-Click to select the message to be archived, then click _Actions_ -> _Archive Penalty/Message_. This menu can also be accessed by right-clicking in the message area. 
-
-image::media/staff-penalties-7_web_client.png[Archived messages]
-
-Archived messages will be shown in the section labelled _Archived Penalties/Messages_. To view messages, click *Retrieve Archived Penalties*. By default, messages archived within the past year will be retrieved. To retrieve messages from earlier dates, change the start date to the desired date before clicking *Retrieve Archived Penalties*.
-
-=== Remove a Message ===
-
-Messages which are no longer current can be removed by staff. This action removes any alerts or blocks associated with the message and deletes the information from the system.
-
-image::media/staff-penalties-4_web_client.png[Actions menu]
-
-Click to select the message to be removed, then click _Actions_ -> _Remove Penalty/Message_. This menu can also be accessed by right-clicking in the message area.
-
-
-== User Buckets ==
-
-User Buckets allow staff to batch delete and make batch modifications to user accounts in Evergreen. Batch modifications can be made to selected fields in the patron account:
-
-* Home Library
-* Profile Group
-* Network Access Level
-* Barred flag
-* Active flag
-* Juvenile flag
-* Privilege Expiration Date
-* Statistical Categories
-
-Batch modifications and deletions can be rolled back or reversed, with the exception of batch changes to statistical categories. Batch changes made in User Buckets will not activate any Action/Trigger event definitions that would normally be activated when editing an individual account.
-
-User accounts can be added to User Buckets by scanning individual user barcodes or by uploading a file of user barcodes directly in the User Bucket interface. They can also be added to a User Bucket from the Patron Search screen. Batch changes and batch edit sets are tied to the User Bucket itself, not to the login of the bucket owner.
-
-=== Create a User Bucket ===
-
-*To add users to a bucket via the Patron Search screen:*
-
-. Go to *Search->Search for Patrons*.
-. 
Enter your search and select the users you want to add to the user bucket by checking the box next to each user row. You can also hold down the CTRL or SHIFT key on your keyboard and select multiple users.
-. Click *Add to Bucket* and select an existing bucket from the drop down menu or click *New Bucket* to create a new user bucket.
-.. If creating a new user bucket, a dialog box called _Create Bucket_ will appear where you can enter a bucket _Name_ and _Description_ and indicate if the bucket is _Staff Shareable?_. Click *Create Bucket*.
-. After adding users to a bucket, an update will appear at the bottom-right corner of the screen that says _"Successfully added # users to bucket [Name]"_.
-
-image::media/userbucket1.PNG[]
-
-image::media/userbucket2.PNG[]
-
-*To add users to a bucket by scanning user barcodes in the User Bucket interface:*
-
-. Go to *Circulation->User Buckets* and select the *Pending Users* tab at the top of the screen.
-. Click on *Buckets* and select an existing bucket from the drop down menu or click *New Bucket* to create a new user bucket.
-.. If creating a new user bucket, a dialog box called _Create Bucket_ will appear where you can enter a bucket _Name_ and _Description_ and indicate if the bucket is _Staff Shareable?_. Click *Create Bucket*.
-.. After selecting or creating a bucket, the Name, Description, number of items, and creation date of the bucket will appear above the _Scan Card_ field.
-. Scan in the barcodes of the users that you want to add to the selected bucket into the _Scan Card_ field. Each user account will be added to the Pending Users tab. Hit ENTER on your keyboard after manually typing in a barcode to add it to the list of Pending Users.
-. Select the user accounts that you want to add to the bucket by checking the box next to each user row or by using the CTRL or SHIFT key on your keyboard to select multiple users.
-. 
Go to *Actions->Add To Bucket* or right-click on a selected user account to view the _Actions_ menu and select *Add To Bucket*. The user accounts will move to the Bucket View tab and are now in the selected User Bucket. - -image::media/userbucket3.PNG[] - -*To add users to a bucket by uploading a file of user barcodes:* - -. Go to *Circulation->User Buckets* and select the *Pending Users* tab at the top of the screen. -. Click on *Buckets* and select an existing bucket from the drop down menu or click *New Bucket* to create a new user bucket. -.. If creating a new user bucket, a dialog box called _Create Bucket_ will appear where you can enter a bucket _Name_ and _Description_ and indicate if the bucket is _Staff Shareable?_. Click *Create Bucket*. -.. After selecting or creating a bucket, the Name, Description, number of items, and creation date of the bucket will appear above the Scan Card field. -. In the Pending Users tab, click *Choose File* and select the file of barcodes to be uploaded. -.. The file that is uploaded must be a .txt file that contains a single barcode per row. -. The user accounts will automatically appear in the list of Pending Users. -. Select the user accounts that you want to add to the bucket by checking the box next to each user row or by using the CTRL or SHIFT key on your keyboard to select multiple users. -. Go to *Actions->Add To Bucket* or right-click on a selected user account to view the _Actions_ menu and select *Add To Bucket*. The user accounts will move to the Bucket View tab and are now in the selected User Bucket. - -=== Batch Edit All Users === - -To batch edit all users in a user bucket: - -. Go to *Circulation->User Buckets* and select the *Bucket View* tab. -. Click *Buckets* and select the bucket you want to modify from the list of existing buckets. -.. After selecting a bucket, the Name, Description, number of items, and creation date of the bucket will appear at the top of the screen. -. 
Verify the list of users in the bucket and click *Batch edit all users*. A dialog box called _Update all users_ will appear where you can select the batch modifications to be made to the user accounts.
-. Assign a _Name for edit set_. This name will allow staff to identify the batch edit for future verification or rollbacks.
-. Set the values that you want to modify. The following fields can be modified in batch:
-
-* Home Library
-* Profile Group
-* Network Access Level
-* Barred flag
-* Active flag
-* Juvenile flag
-* Privilege Expiration Date
-
-. Click *Apply Changes*. The modification(s) will be applied in batch.
-
-image::media/userbucket4.PNG[]
-
-=== Batch Modify Statistical Categories ===
-
-To batch modify statistical categories for all users in a bucket:
-
-. Go to *Circulation->User Buckets* and select the *Bucket View* tab.
-. Click *Buckets* and select the bucket you want to modify from the list of existing buckets.
-.. After selecting a bucket, the Name, Description, number of items, and creation date of the bucket will appear at the top of the screen.
-. Verify the list of users in the bucket and click *Batch modify statistical categories*. A dialog box called _Update statistical categories_ will appear where you can select the batch modifications to be made to the user accounts. The existing patron statistical categories will be listed and staff can choose:
-.. To leave the stat cat value unchanged in the patron accounts.
-.. To select a new stat cat value for the patron accounts.
-.. To delete the current stat cat value from the patron accounts, check the box next to Remove.
-. Click *Apply Changes*. The stat cat modification(s) will be applied in batch.
-
-image::media/userbucket12.PNG[]
-
-=== Batch Delete Users ===
-
-To batch delete users in a bucket:
-
-. Go to *Circulation->User Buckets* and select the *Bucket View* tab.
-. Click on *Buckets* and select the bucket you want to modify from the list of existing buckets.
-.. 
After selecting a bucket, the Name, Description, number of items, and creation date of the bucket will appear at the top of the screen. -. Verify the list of users in the bucket and click *Delete all users*. A dialog box called _Delete all users_ will appear. -. Assign a _Name for delete set_. This name will allow staff to identify the batch deletion for future verification or rollbacks. -. Click *Apply Changes*. All users in the bucket will be marked as deleted. - -NOTE: Batch deleting patrons from a user bucket does not use the Purge User functionality, but instead marks the users as deleted. - -image::media/userbucket7.PNG[] - -=== View Batch Changes === - -. The batch changes that have been made to User Buckets can be viewed by going to *Circulation->User Buckets* and selecting the *Bucket View* tab. -. Click *Buckets* to select an existing bucket. -. Click *View batch changes*. A dialog box will appear that lists the _Name_, date _Completed_, and date _Rolled back_ of any batch changes made to the bucket. There is also an option to _Delete_ a batch change. This will remove this batch change from the list of actions that can be rolled back. It will not delete or reverse the batch change. -. Click *OK* to close the dialog box. - -image::media/userbucket8.PNG[] - -=== Roll Back Batch Changes === - -. Batch Changes and Batch Deletions can be rolled back or reversed by going to *Circulation->User Buckets* and selecting the *Bucket View* tab. -. Click *Buckets* to select an existing bucket. -. Click *Roll back batch edit*. A dialog box will appear that contains a drop down menu that lists all batch edits that can be rolled back. Select the batch edit to roll back and click *Roll Back Changes*. The batch change will be reversed and the roll back is recorded under _View batch changes_. - -NOTE: Batch statistical category changes cannot be rolled back. 
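The edit-set and rollback behavior described above can be sketched in miniature. This is an illustrative model only, not Evergreen's implementation, and the function and field names are hypothetical: the key idea is that each named edit set records the previous value of every field it changes, so the batch can later be reversed by restoring those saved values.

```python
# Illustrative sketch only -- not Evergreen code. A named "edit set" stores
# each user's previous values so the batch edit can be rolled back later.
# Field names here are hypothetical.

def batch_edit(users, edit_set_name, changes, edit_log):
    """Apply `changes` to every user dict and record the prior values."""
    previous = {}
    for uid, user in users.items():
        previous[uid] = {field: user[field] for field in changes}
        user.update(changes)
    edit_log[edit_set_name] = previous  # kept so the edit can be reversed

def roll_back(users, edit_set_name, edit_log):
    """Restore the values saved when the named edit set was applied."""
    for uid, saved in edit_log.pop(edit_set_name).items():
        users[uid].update(saved)

users = {
    1: {"home_library": "BR1", "juvenile": False},
    2: {"home_library": "BR1", "juvenile": True},
}
log = {}
batch_edit(users, "move-to-BR2", {"home_library": "BR2"}, log)
roll_back(users, "move-to-BR2", log)  # both users are back at BR1
```

In this model, using _Delete_ on a batch change under _View batch changes_ would correspond to dropping the saved entry from the log: the edit itself stays applied but can no longer be reversed.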
-
-image::media/userbucket10.png[]
-
-image::media/userbucket9.PNG[]
-
-=== Sharing Buckets ===
-
-If a User Bucket has been made Staff Shareable, it can be retrieved via bucket ID by another staff account. The ID for each bucket can be found at the end of the URL for the bucket. For example, in the screenshot below, the bucket ID is 32.
-
-image::media/userbucket11.PNG[]
-
-A shared bucket can be retrieved by going to *Circulation->User Buckets* and selecting the *Bucket View* tab. Next, click *Buckets* and select *Shared Bucket*. A dialog box called _Load Shared Bucket by Bucket ID_ will appear. Enter the ID of the bucket you wish to retrieve and click *Load Bucket*. The shared bucket will load in the Bucket View tab.
-
-=== Permissions ===
-
-All permissions must be granted at the organizational unit that the workstation is registered to or higher, and are checked against the users' Home Library when a batch modification or deletion is executed.
-
-Permissions for Batch Edits:
-
-* To batch edit a user bucket, staff accounts must have the VIEW_USER, UPDATE_USER, and CONTAINER_BATCH_UPDATE permissions for all users in the bucket.
-* To make batch changes to Profile Group, staff accounts must have the appropriate group application permissions for the profile groups.
-* To make batch changes to the Home Library, staff accounts must have the UPDATE_USER permission at both the old and new Home Library.
-* To make batch changes to the Barred Flag, staff accounts must have the appropriate BAR_PATRON or UNBAR_PATRON permission.
-
-Permissions for Batch Deletion:
-
-* To batch delete users in a user bucket, staff accounts must have the UPDATE_USER and DELETE_USER permissions for all users in the bucket. 
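The barcode upload described earlier expects a plain .txt file with a single barcode per row. A short script can sanity-check such a file before upload. This is a local convenience sketch, not part of Evergreen, and the all-digits length check is an assumed local barcode format that you would adjust to match your own barcodes.

```python
# Sketch: validate a barcode file for the User Bucket upload step (one
# barcode per row). The digit-length pattern is an assumed local format,
# not an Evergreen rule -- adjust it to your library's barcodes.
import re

def read_barcode_file(lines):
    """Return (valid, rejected) barcode lists from an iterable of lines."""
    valid, rejected = [], []
    for line in lines:
        barcode = line.strip()
        if not barcode:
            continue  # skip blank rows
        if re.fullmatch(r"\d{6,14}", barcode):  # assumed local format
            valid.append(barcode)
        else:
            rejected.append(barcode)
    return valid, rejected

# Rows as they might come from open("barcodes.txt"):
valid, rejected = read_barcode_file(["23000001234567\n", "\n", "oops\n"])
```

Running the check first means a stray header row or truncated barcode is caught before it lands in the Pending Users list.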
-
diff --git a/docs-antora/modules/circulation/pages/introduction.adoc b/docs-antora/modules/circulation/pages/introduction.adoc
deleted file mode 100644
index 1a39f02bed..0000000000
--- a/docs-antora/modules/circulation/pages/introduction.adoc
+++ /dev/null
@@ -1,5 +0,0 @@
-= Introduction =
-:toc:
-Use this section for understanding the circulation procedures in the Evergreen
-system.
-
diff --git a/docs-antora/modules/circulation/pages/offline_circ_webclient.adoc b/docs-antora/modules/circulation/pages/offline_circ_webclient.adoc
deleted file mode 100644
index 8acc35b874..0000000000
--- a/docs-antora/modules/circulation/pages/offline_circ_webclient.adoc
+++ /dev/null
@@ -1,210 +0,0 @@
-= Offline Circulation =
-:toc:
-
-== Introduction ==
-
-Evergreen's Offline Circulation interface is designed to log transactions during a network or server outage. Transactions can be uploaded and processed once connectivity is restored.
-
-Offline Circulation in the Web Staff Client relies on service workers to store information for offline use. Prior to using Offline Circulation you must have access to your production server and register your workstation on the computer and in the browser you intend to use. You must also log in from that browser at least once and visit *Search -> Search for Patrons*. Perform a search, select a user from the results, and open the *Patron Edit* interface. This will allow the Offline interface to collect the information it needs, such as workstation information and the patron registration form.

-The service workers will refresh the cache every 24 hours under normal use. Offline Circulation information is stored via IndexedDB.
-
-== Using Offline Circulation ==
-
-The Offline Circulation interface can be found by navigating to *Circulation -> Offline Circulation*.
-
-The permanent link for Offline Circulation is *https:///eg/staff/offline-interface* and it is recommended that this link be bookmarked on staff workstations. 
This is the location both for entering transactions while offline and for processing them later. You will see a slightly different version of this interface depending on whether or not you are logged in.
-
-If you are logged out, you will see the tab default to *Checkout* and the button on the top-right will read *Export Transactions*.
-
-image::media/offline_homepage_loggedout.png[Offline homepage logged out]
-
-If you are logged in, you will see an additional tab on the left for *Session Management* and this will be the default tab. The top-right button will read *Download Block List*.
-
-image::media/offline_homepage_loggedin.png[Offline homepage logged in]
-
-If you are logged in and attempt to click on any tab other than *Session Management*, you will see a warning alerting you that you are about to enter offline mode.
-
-image::media/offline_logout_warning.png[Logout warning]
-
-This warning is not network-aware and it will appear regardless of network connection state. You must be logged out to record offline transactions. If you see this warning and wish to record offline transactions, click *Proceed* in order to log out.
-
-== Checkout ==
-
-To check out items in Offline Circulation:
-
-. Click the *Checkout* tab.
-. If you wish to use Strict Barcode for patron and item barcodes, check the box labelled *Strict Barcode*.
-. Enter a value in the *Due Date* field or select a date from the Calendar widget. You may also select an option from the *Offset Dropdown*. The date field entry will honor the format set in the Library Settings Editor.
-. Scan the Patron Barcode in the box labelled *Patron Barcode*.
-. Check out items:
-.. For cataloged items, scan the item barcode in the box labelled *Item Barcode*. Each item barcode will appear on the right side of the screen, along with its due date and the patron barcode. 
If you are manually typing barcodes, you need to click the *Checkout* button or hit the *Enter* key on your keyboard after each Item Barcode entry in order to record the transaction. -.. For non-cataloged items, select a *Non-cataloged Type* from the dropdown and enter the number of items you wish to check out. Click *Checkout*. In the list to the right, the item barcode will appear blank since this item is unbarcoded. The due date and patron barcode will appear, however. -.. If you make an error in entry, click *Clear* to reset the Patron Barcode and Item Barcode fields. -. To print a receipt, check the box labelled *Print Receipt*. -. Click *Save Transactions* in the upper-right of the screen to complete the checkout. - -Note that *Save Transactions* will save any unsaved transactions across the Offline tabs Checkout, Renew, In-House Use, and Checkin. - -In the screenshot, the first two items in the right-hand list are regular checkout items. The third item is a non-cataloged item. - -image::media/offline_checkout.png[Offline checkout] - -A value entered in the Due Date field will take precedence over an existing value in the Offset Dropdown; however, if you change the Offset after setting the Due Date field, the Due Date field will update to reflect the Offset value. - -Due Date and Offset values are sticky between the Checkout and Renew tabs, and also sticky between transactions. Strict Barcode and Print Receipt are sticky among the Checkout, Renew, In-House Use, and Checkin tabs and are also sticky between transactions. - -Pre-cataloged item checkout is not available in Offline Circulation. Any pre-cataloged item checked out through Offline Circulation will result in an entry in the Exception List and will not successfully check out. Pre-cataloged items which are checked in through offline will also result in an entry in the Exception List, but will successfully check in. - -== Renew == - -To renew an item, you must know the item's barcode number. 
The patron's barcode is optional. - -To renew items in Offline Circulation: - -. Click the *Renew* tab. -. Ensure that the *Due Date* value is correct. -. _(Optional)_: Enter the patron's library card barcode in the *Patron Barcode* field by scanning or typing the barcode. -. For each item to be renewed, scan the item's barcode in the *Item Barcode* field. If you are typing the item barcode, click the *Renew* button or hit the *Enter* key on your keyboard after each item barcode. -. The item barcode, due date, and patron barcode (if entered) appear on the right side of the screen. -. To print a receipt, check the box labelled *Print Receipt*. -. Click *Save Transactions* in the upper-right of the screen to complete the renewal. - -image::media/offline_renew.png[Offline renewal] - -== In-House Use == - -To record in-house use transactions in *Offline Circulation*: - -. Click the *In-House Use* tab. -. Enter the number of uses to record for the item in the *Use Count* field. -. For each item to be recorded as in-house use, scan the item's barcode in the *Item Barcode* field. If you are typing the item barcode, click the *Record Use* button or hit the *Enter* key on your keyboard after each item barcode. -. The item barcode and use count will appear on the right side of the screen. -. To print a receipt, check the box labelled *Print Receipt*. -. Click *Save Transactions* in the upper-right of the screen to record the in-house use. The date of the in-house use is automatically recorded. - -image::media/offline_inhouse.png[Offline in house use] - -== Checkin == - -To checkin items in Offline Circulation: - -. Click the *Checkin* tab. -. Ensure that the *Due Date* value is correct. It will default to today's date. -. For each item to be checked in, scan the item's barcode in the *Item Barcode* field. If you are typing the item barcode, click the *Checkin* button or hit the *Enter* key on your keyboard after each item barcode. -. 
To print a receipt, check the box labelled *Print Receipt*.
-. Click *Save Transactions* in the upper-right of the screen when you are finished entering check-ins.
-
-image::media/offline_checkin.png[Offline checkin]
-
-Note that existing pre-cataloged items can be checked in through the Offline interface, but they will generate an entry in the Exceptions list when offline transactions are uploaded and processed.
-
-Items targeted for holds will be captured for their holds when the offline transactions are uploaded and processed; however, there will be no indication in the Exceptions list about this unless the item is also transiting.
-
-== Patron Registration ==
-
-Patron registration in Evergreen Offline Circulation records patron information for later upload. In the web staff client, the Patron Registration form in Offline is the same as the regular Patron Registration interface.
-
-image::media/offline_patron_registration.png[Patron registration]
-
-All fields in the normal Patron Registration interface are available for entry. Required fields are marked in yellow and adhere to the Required Fields settings in the *Library Settings Editor*. Patron Registration defaults also adhere to settings in the *Library Settings Editor*. Stat cats are not recognized by the Offline Interface, even if they are required.
-
-Enter patron information and click the *Save* button in the top-right of the Patron Registration interface. You may check out items to this patron right away, even if you are still in offline mode.
-
-== Managing Offline Transactions ==
-
-[#offline_block_list]
-=== Offline Block List ===
-
-While logged in and still online, you may download an *Offline Block List*. This will locally store a list of all patrons with blocks at the time of the download. If this list is present, the Offline Circulation interface will check transactions against this list. 
-
-To download the block list, navigate to *Circulation -> Offline Circulation* and click the *Download Block List* button in the top-right of the screen.
-
-If you attempt a checkout or a renewal for a patron on the block list, you will get a modal informing you that the patron has penalties. Click the *Allow* button to override this and proceed with the transaction. Click the *Reject* button to cancel the checkout or renewal.
-
-image::media/offline_patron_blocked.png[Patron blocked modal]
-
-=== Exporting Offline Transactions ===
-
-If you anticipate a multi-day closing or if you plan to process your offline transactions at a different workstation, you will want to export your offline transactions.
-
-To export transactions while you are offline, navigate to *Circulation -> Offline Circulation* and click *Export Transactions* in the top-right of the screen. This will save a file named pending.xacts to your browser's default download location. If you will be processing these transactions on another workstation, move this file to an external device like a thumb drive.
-
-To export transactions while you are logged in, navigate to *Circulation -> Offline Circulation* and click on the *Session Management* tab. Click on the *Export Transactions* button to generate the pending.xacts file as above. If you wish, you can at this point click *Clear Transactions* to clear the list of pending transactions.
-
-[#processing_offline_transactions]
-=== Processing Offline Transactions ===
-
-Once connectivity is restored, navigate back to your *Evergreen Login Page*. You will see a message telling you that there are unprocessed Offline Transactions waiting for upload.
-
-image::media/offline_unprocessed.png[Login alert about unprocessed transactions]
-
-Sign in and navigate to *Circulation -> Offline Circulation*. Since you are logged in, you will now see a *Session Management* tab to the left of the Register Patron tab. 
The Session Management tab includes *Pending Transactions* and *Offline Sessions*.
-
-In the *Pending Transactions* tab you will see a list of all transactions recorded in that browser.
-
-image::media/offline_pending_xacts.png[Offline pending transactions]
-
-If you click *Clear Transactions*, you will be prompted with a warning.
-
-image::media/offline_clear_pending.png[Warning to clear offline transactions]
-
-If you are processing transactions right away and from the same browser you recorded them in, follow the steps below:
-
-. Click on the *Offline Sessions* tab and then on the *Create Session* button.
-. Enter a descriptive name for your session in the modal and click *OK/Continue* to proceed. You will see your new session at the top of the *Session List*. The Session List may be sorted ascending or descending by clicking on one of the following column headers: *Organization*, *Created By*, *Description*, *Date Created*, or *Date Completed*. The default sort is descending by Date Created.
-+
-image::media/offline_session_list.png[Offline session list]
-+
-. Click *Upload* to upload everything listed in the *Pending Transactions* tab.
-. Once all transactions are uploaded, the *Upload Count* column will update to show the number of uploaded transactions.
-. Click *Process* to process the offline transactions. Click *Refresh* to see the processing progress. Once all transactions are processed, the *Date Completed* column will be updated.
-+
-image::media/offline_processing_complete.png[Offline processing complete]
-+
-. Scroll to the bottom of the screen to see if there are any entries in the xref:#exceptions[*Exception List*]. Some of these may require staff follow-up.
-
-=== Uploading Previously Exported Transactions ===
-
-If you previously exported your offline transactions, you can upload them for processing.
-
-To import transactions:
-
-. Log in to the staff client via your *Login Page*
-. Navigate to *Circulation -> Offline Circulation*
-. 
Click on the *Session Management* tab.
-. Click on the *Import Transactions* button.
-. Navigate to the location on your computer where the pending.xacts file is saved.
-. Select the file for importing.
-. The *Pending Transactions* list will populate with your imported transactions.
-. You may now proceed according to the instructions under xref:#processing_offline_transactions[Processing Offline Transactions].
-
-[#exceptions]
-==== Exceptions ====
-
-Exceptions are problems that were encountered during processing. For example, a mis-scanned patron barcode, an open circulation, or an item that was not checked in before it was checked out to another patron would all be listed as exceptions. Those transactions causing exceptions might not be loaded into the Evergreen database. Staff should examine the exceptions and take any necessary action.
-
-The following are a few notes about possible exceptions; this is not an all-inclusive list.
-
-* Checking out an item with the wrong date (i.e., the Offline Checkout date is +2 weeks and the item's regular circulation period is +1 week) does not cause an exception.
-* Overdue books are not flagged as exceptions.
-* Checking out a reference book or another item set to not circulate does not cause an exception.
-* Checking out an item belonging to another library does not cause an exception.
-* An item that is targeted for a patron hold and captured via offline checkin will not cause an exception unless that item also goes to an In Transit status.
-* An item that is on hold for Patron A but checked out to Patron B will not cause an exception. Patron A's hold will be reset and will retarget the next time the hold targeter is run. To avoid this, it is recommended that you not check out items on hold to other patrons.
-* If you check out a book to a patron using a previous barcode for that patron, it will cause an exception and you will have to retrieve that patron while online and re-enter the item barcode in order to check out the item. 
-
-* The Offline Interface can recognize blocked, barred, and expired patrons if you have downloaded the Offline Block List in the browser you are using. You will get an error message indicating the patron status from within the Offline interface at checkout time. See the section on the xref:#offline_block_list[Offline Block List] for more information.
-
-image::media/offline_exceptions.png[Offline exception list]
-
-At the right side of each exception are buttons for *Item*, *Patron*, and *Debug*. Clicking the *Item* button will retrieve the associated item in a new browser window. Clicking the *Patron* button will retrieve the associated patron in a new browser window. Clicking the *Debug* button will result in a modal with detailed debugging information.
-
-Common event names in the Exceptions List include:
-
-* +ROUTE-ITEM+ - Indicates the book should be routed to another branch or library system. You will need to find the book and re-check it in while online to get the Transit Slip to print.
-* +COPY_STATUS_LOST+ - Indicates a book previously marked as lost was found and checked in. You will need to find the book and re-check it in while online to correctly clear it from the patron's account.
-* +CIRC_CLAIMS_RETURNED+ - Indicates a book previously marked as claimed-returned was found and checked in. You will need to find the book and re-check it in while online to correctly clear it from the patron's account.
-* +ASSET_COPY_NOT_FOUND+ - Indicates the item barcode was mis-scanned/mis-typed.
-* +ACTOR_CARD_NOT_FOUND+ - Indicates the patron's library barcode was mis-scanned, mis-typed, or nonexistent.
-* +OPEN_CIRCULATION_EXISTS+ - Indicates a book was checked out that had never been checked in.
-* +MAX_RENEWALS_REACHED+ - Indicates the item has already been renewed the maximum times allowed. 
Note that if the staff member processing the offline transaction set has the +MAX_RENEWALS_REACHED.override+ permission at the appropriate level, the system will automatically override the error and will allow the renewal.
diff --git a/docs-antora/modules/circulation/pages/self_check.adoc b/docs-antora/modules/circulation/pages/self_check.adoc
deleted file mode 100644
index 853f22251e..0000000000
--- a/docs-antora/modules/circulation/pages/self_check.adoc
+++ /dev/null
@@ -1,93 +0,0 @@
-= Self checkout =
-:toc:
-
-== Introduction ==
-
-Evergreen includes a self check interface designed for libraries that simply
-want to record item circulation without worrying about security mechanisms like
-magnetic strips or RFID tags.
-
-== Initializing the self check ==
-The self check interface runs in a web browser. Before patrons can use the self
-check station, a staff member must initialize the interface by logging in.
-
-. Open your self check interface page in a web browser. By default, the URL is
-  `https://[hostname]/eg/circ/selfcheck/main`, where _[hostname]_
-  represents the host name of your Evergreen web server.
-. Log in with a staff account with circulation permissions.
-
-image::media/self-check-admin-login.png[Self Check Admin Login]
-
-== Basic Check Out ==
-
-. Patron scans their barcode.
-+
-image::media/self_check_check_out_1.png[self check]
-+
-. _Optional_ Patron enters their account password.
-+
-image::media/self_check_check_out_2.png[self check]
-+
-. Patron scans the barcodes for their items
-_OR_
-Patron places items, one at a time, on the RFID pad.
-+
-image::media/self_check_check_out_3.png[self check]
-+
-. Items will be listed below with a check out confirmation message.
-+
-image::media/self_check_check_out_4.png[self check]
-+
-. If a check out fails, a message will advise the patron.
-+
-image::media/self_check_error_1.png[self check]
-+
-. Patron clicks *Logout* to print a checkout receipt and log out. 
-_OR_
-Patron clicks *Logout (No Receipt)* to log out without a receipt.
-+
-image::media/self_check_check_out_5.png[self check]
-+
-[NOTE]
-==========
-If the patron forgets to log out, the system will automatically log out after the time
-period specified in the library setting *Patron Login Timeout (in seconds)*. An inactivity pop-up
-will appear to warn patrons 20 seconds before logging out.

-image::media/self_check_check_out_6.png[self check]
-==========
-
-== View Items Out ==
-
-. Patrons are able to view the items they currently have checked out by clicking *View Items Out*.
-+
-image::media/self_check_view_items_out_1.png[self check]
-+
-. The items currently checked out will display with their due dates.
-Using the *Print List* button, patrons can
-print out a receipt listing all of the items they currently have checked out.
-
-image::media/self_check_view_items_out_2.png[self check]
-
-
-== View Holds ==
-
-. Patrons are able to view their current holds by clicking *View Holds*.
-+
-image::media/self_check_view_holds_1.png[self check]
-+
-. Items currently on hold display. Patrons can also see which, if any, items are ready for pickup.
-+
-Using the *Print List* button, patrons can print out a receipt listing all of the items they currently have on hold.
-+
-image::media/self_check_view_holds_2.png[self check]
-
-== View Fines ==
-
-. Patrons are able to view the fines they currently owe by clicking *View Details*.
-+
-image::media/self_check_view_fines_1.png[self check]
-+
-. Current fines owed by the patron display. 
-
-image::media/self_check_view_fines_2.png[self check]
diff --git a/docs-antora/modules/circulation/pages/self_check_configuration.adoc b/docs-antora/modules/circulation/pages/self_check_configuration.adoc
deleted file mode 100644
index d7bf15c1c4..0000000000
--- a/docs-antora/modules/circulation/pages/self_check_configuration.adoc
+++ /dev/null
@@ -1,51 +0,0 @@
-= Self checkout =
-:toc:
-
-== Introduction ==
-
-Evergreen includes a self check interface designed for libraries that simply
-want to record item circulation without worrying about security mechanisms like
-magnetic strips or RFID tags.
-
-== Initializing the self check ==
-The self check interface runs in a web browser. Before patrons can use the self
-check station, a staff member must initialize the interface by logging in.
-
-. Open your self check interface page in a web browser. By default, the URL is
-  `https://[hostname]/eg/circ/selfcheck/main`, where _[hostname]_
-  represents the host name of your Evergreen web server.
-. Log in with a staff account with circulation permissions.
-
-image::media/self-check-admin-login.png[Self Check Admin Login]
-
-=== Setting library hours of operation ===
-When the self check prints a receipt, the default receipt template includes the
-library's hours of operation. If the library has no configured
-hours of operation, the attempt to print a receipt fails and the browser hangs.
-
-=== Configuring self check behavior ===
-Several library settings control the behavior of the self check:
-
-* *Block copy checkout status*: Prevent the staff user's permission override
-  from enabling patrons to check out items that they would not normally be able
-  to check out, such as the "On reservation shelf" status. The status IDs are
-  found in the `config.copy_status` database table.
-* *Patron Login Timeout*: Automatically logs the patron out of the self check
-  after a certain period of inactivity. 
*NOT CURRENTLY SUPPORTED*
-* *Pop-up alert for errors*: In addition to displaying an alert message on the
-  screen, this setting increases patron awareness of possible problems by raising
-  an alert box that the patron must dismiss before they can check out another
-  item.
-* *Require Patron Password*: By default, users can enter either their user name
-  or barcode, without having to enter their password, to access their account.
-  This setting requires patrons to enter their password for additional
-  security.
-* *Workstation Required*: If set, the URL must either include a
-  `?ws=[workstation]` parameter, where _[workstation]_ is the name of a
-  registered Evergreen workstation, or the staff member must register a new
-  workstation when they log in. The workstation parameter ensures that checkouts
-  are recorded as occurring at the correct library.
-
-== Using the self check ==
-
-See the circulation manual for documentation about using the self check interface.
diff --git a/docs-antora/modules/circulation/pages/triggered_events.adoc b/docs-antora/modules/circulation/pages/triggered_events.adoc
deleted file mode 100644
index dfdea9e2a3..0000000000
--- a/docs-antora/modules/circulation/pages/triggered_events.adoc
+++ /dev/null
@@ -1,68 +0,0 @@
-= Triggered Events and Notices =
-:toc:
-
-== Introduction ==
-
-Improvements to the Triggered Events interface enable you to easily filter,
-sort, and print triggered events from the patron's account or an item's details.
-This feature is especially useful when tracking notice completion from a
-patron's account.
-
-== Access and View ==
-
-You can access *Triggered Events* from two Evergreen interfaces: a patron's
-account or an item's details.
-
-To access this interface in the patron's account, open the patron's record and
-click *Other* -> *Triggered Events / Notifications*. 
-
-To access this interface from the item's details, enter the item barcode into
-the *Item Status* screen, and click *Actions* -> *Show* -> *Triggered Events*.
-
-Information about the patron, the item, and the triggered event appears in the
-center of the screen. Add or delete columns to the display by right-clicking on
-any column. The *Column Picker* appears in a pop-up box and enables you to
-select the columns that you want to display.
-
-image::media/Triggered_Events_and_Notices1.jpg[Triggered_Events_and_Notices1]
-
-== Filter ==
-
-The triggered events that display are controlled by the filters on the right
-side of the screen. By default, Evergreen displays completed circulation
-events. Notice that the default filters display *Event State is Complete* and
-*Core Type is Circ*.
-
-To view completed hold-related events, such as hold capture or hold notice
-completion, choose *Event State is Complete* and *Core Type is Hold* from the
-drop-down menu.
-
-You can also use the *Event State* filter to view circs and holds that are
-*pending* or have an *error*.
-
-Add and delete filters to customize the list of triggered events that displays.
-To add another filter, click *Add Row*. To delete a filter, click the red _X_
-adjacent to a row.
-
-image::media/Triggered_Events_and_Notices2.jpg[Triggered_Events_and_Notices2]
-
-== Sort ==
-
-You can sort your results by clicking the column name.
-
-image::media/Triggered_Events_and_Notices3.jpg[Triggered_Events_and_Notices3]
-
-
-== Print ==
-
-You can select the events that you want to print, or you can print all events.
-To print selected events, check the boxes adjacent to the events that you want
-to print, and click *Print Selected Events*. To print all events, simply click
-*Print All Events*.
-
-== Reset ==
-
-If the triggered event does not complete or the notice is not sent and the
-trigger needs to be run again, then select the event, and click *Reset Selected
-Events*. 
-
-
diff --git a/docs-antora/modules/circulation/pages/user_buckets.adoc b/docs-antora/modules/circulation/pages/user_buckets.adoc
deleted file mode 100644
index 2bf401389d..0000000000
--- a/docs-antora/modules/circulation/pages/user_buckets.adoc
+++ /dev/null
@@ -1,86 +0,0 @@
-= User buckets =
-:toc:
-
-== Introduction ==
-indexterm:[patron buckets]
-indexterm:[patrons, batch operations]
-
-You can select and group a set of users into a User Bucket.
-You can add users to a User Bucket from the Patron Search
-interface or directly from the User Bucket interface by user barcode.
-It is also possible to add users to a User
-Bucket by uploading a text file that contains a list of user barcodes.
-
-From this interface, it is possible to perform a set of specific batch update
-operations on the group of users you have identified.
-
-== Editing users ==
-indexterm:[batch edit, patrons]
-
-You can change the following fields in batch:
-
- * Active flag
 * Primary Permission Group (group application permissions consulted)
 * Juvenile flag
 * Home Library (if you have the UPDATE_USER permission for both the original and destination libraries)
 * Privilege Expiration Date
 * Barred flag (if you have the BAR_PATRON permission)
 * Internet Access Level
-
-NOTE: You will need the UPDATE_USER permission.
-
-Each change set requires a name. Buckets may have multiple change sets. All
-users in the Bucket at the time of processing are updated when the change
-set is processed, and change sets are processed immediately upon successful
-creation. The interface delivers progress information regarding the
-processing stage and percent of completion.
-
-While processing the users, the original value for each field edited is
-recorded for potential future rollback. Users can examine the success and
-failure of applied change sets.
-
-The user will be able to roll back the entire change set, but not parts thereof. 
-The rollback will affect only those users that were successfully updated by the
-original change set and may be different from the current set of users in the
-Bucket. Users can manually discard change sets, removing them from the
-interface and preventing future rollback.
-
-As a batch process, rather than a direct edit, this mechanism explicitly skips
-processing of Action/Trigger event definitions for user update, so users will
-not receive any notifications that they might otherwise receive when their accounts
-are edited.
-
-== Deleting users ==
-indexterm:[batch delete, patrons]
-
-You may also delete users as a batch.
-
-NOTE: You will need the UPDATE_USER and DELETE_USER permissions.
-
-Each delete set requires a name. Buckets may have multiple delete sets. All
-users in the Bucket at the time of processing are marked as deleted when
-the delete set is processed. The interface delivers progress information
-regarding the processing stage and percent of completion.
-
-While processing the users, the original value for the "deleted" field will be
-recorded for potential future rollback. Users are able to examine the
-success and failure of applied delete sets in the same interface used for the
-above-described change sets.
-
-As a batch process, rather than a direct edit, this mechanism explicitly skips
-processing of Action/Trigger event definitions for user deletion.
-
-This mechanism does not completely purge the user from the database. User data
-will still be available to system administrators with database access.
-
-== Editing Statistical Category Entries ==
-
-All users in the bucket can have their Statistical Category Entries
-modified. Unlike user data field updates, modification of Statistical
-Category Entries is permanent and cannot be rolled back. No named change
-sets are required. The interface will deliver progress information regarding
-the processing stage and percent of completion. 
- -As a batch process, rather than a direct edit, this mechanism explicitly skips -processing of Action/Trigger event definitions for user update. - diff --git a/docs-antora/modules/development/_attributes.adoc b/docs-antora/modules/development/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/development/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/development/assets/images/media/CONNECT.png b/docs-antora/modules/development/assets/images/media/CONNECT.png deleted file mode 100644 index b30d4028ac..0000000000 Binary files a/docs-antora/modules/development/assets/images/media/CONNECT.png and /dev/null differ diff --git a/docs-antora/modules/development/assets/images/media/REQUEST.png b/docs-antora/modules/development/assets/images/media/REQUEST.png deleted file mode 100644 index 0f6ac2d40e..0000000000 Binary files a/docs-antora/modules/development/assets/images/media/REQUEST.png and /dev/null differ diff --git a/docs-antora/modules/development/examples/python_client.py b/docs-antora/modules/development/examples/python_client.py deleted file mode 100644 index d0c6dfcdbd..0000000000 --- a/docs-antora/modules/development/examples/python_client.py +++ /dev/null @@ -1,60 +0,0 @@ -#!/usr/bin/env python -"""OpenSRF client example in Python""" -import osrf.system -import osrf.ses - -def osrf_substring(session, text, sub): - """substring: Accepts a string and a number as input, returns a string""" - request = session.request('opensrf.simple-text.substring', text, sub) - - # Retrieve the response from the method - # The timeout parameter is optional - response = request.recv(timeout=2) - - request.cleanup() - # The results are accessible via content() - return response.content() - -def osrf_split(session, text, delim): - """split: 
Accepts two strings as input, returns an array of strings"""
-    request = session.request('opensrf.simple-text.split', text, delim)
-    response = request.recv()
-    request.cleanup()
-    return response.content()
-
-def osrf_statistics(session, strings):
-    """statistics: Accepts an array of strings as input, returns a hash"""
-    request = session.request('opensrf.simple-text.statistics', strings)
-    response = request.recv()
-    request.cleanup()
-    return response.content()
-
-
-if __name__ == "__main__":
-    file = '/openils/conf/opensrf_core.xml'
-
-    # Pull connection settings from the <config> section of opensrf_core.xml
-    osrf.system.System.connect(config_file=file, config_context='config.opensrf')
-
-    # Set up a client session for the opensrf.simple-text service
-    session = osrf.ses.ClientSession('opensrf.simple-text')
-
-    result = osrf_substring(session, "foobar", 3)
-    print(result)
-    print()
-
-    result = osrf_split(session, "This is a test", " ")
-    print("Received %d elements: [%s]" % (len(result), ', '.join(result)))
-
-    many_strings = (
-        "First I think I'll have breakfast",
-        "Then I think that lunch would be nice",
-        "And then seventy desserts to finish off the day"
-    )
-    result = osrf_statistics(session, many_strings)
-    print("Length: %d" % result["length"])
-    print("Word count: %d" % result["word_count"])
-
-    # Clean up connection resources
-    session.cleanup()
diff --git a/docs-antora/modules/development/nav.adoc b/docs-antora/modules/development/nav.adoc
deleted file mode 100644
index c8b4558c86..0000000000
--- a/docs-antora/modules/development/nav.adoc
+++ /dev/null
@@ -1,6 +0,0 @@
-* xref:development:introduction.adoc[Developer Resources]
-** xref:development:support_scripts.adoc[Support Scripts]
-** xref:development:pgtap.adoc[Developing with pgTAP tests]
-** xref:development:intro_opensrf.adoc[Easing gently into OpenSRF]
-** xref:development:updating_translations_launchpad.adoc[Updating translations using Launchpad]
-
diff --git 
a/docs-antora/modules/development/pages/README b/docs-antora/modules/development/pages/README
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/docs-antora/modules/development/pages/_attributes.adoc b/docs-antora/modules/development/pages/_attributes.adoc
deleted file mode 100644
index fb982443d7..0000000000
--- a/docs-antora/modules/development/pages/_attributes.adoc
+++ /dev/null
@@ -1,2 +0,0 @@
-:moduledir: ..
-include::{moduledir}/_attributes.adoc[]
diff --git a/docs-antora/modules/development/pages/data_opensearch.adoc b/docs-antora/modules/development/pages/data_opensearch.adoc
deleted file mode 100644
index 9e2a1514d7..0000000000
--- a/docs-antora/modules/development/pages/data_opensearch.adoc
+++ /dev/null
@@ -1,25 +0,0 @@
-= Using OpenSearch as a developer =
-:toc:
-
-== Introduction ==
-
-Evergreen responds to OpenSearch requests. This can be a good way to get
-search results delivered in a format that you prefer.
-
-Throughout this section, replace `<hostname>` with the domain or subdomain
-of your Evergreen installation to try these examples on your own system.
-
-OpenSearch queries take the format
-`http://<hostname>/opac/extras/opensearch/1.1/-/html-full?searchTerms=item_type(r)&searchClass=keyword&count=25`
-
-In this example,
-
-* html-full is the format you would like. html-full is a good view for troubleshooting your query.
-* searchTerms is a URL-encoded search query. You can use limiters in the `limiter(value)` format.
-For example, you can use a query like `item_lang(spa)`.
-* count is the number of results per page. The default is 10, and the maximum is 25.
-
-Other options include:
-
-* searchSort and searchSortDir, which can be used to display the results in a different order (e.g. for an RSS feed). 
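The query format documented above can be assembled programmatically. Below is a minimal Python sketch that builds such an OpenSearch URL from the parameters described in this section; the hostname `example.org` is a placeholder for illustration, not a real Evergreen server:

```python
from urllib.parse import urlencode

def opensearch_url(hostname, terms, search_class="keyword",
                   response_format="html-full", count=25):
    """Build an Evergreen OpenSearch 1.1 query URL.

    `terms` may include limiters in the limiter(value) format,
    e.g. "item_type(r)" or "item_lang(spa)".  The server's maximum
    for `count` is 25.
    """
    base = "http://%s/opac/extras/opensearch/1.1/-/%s" % (hostname, response_format)
    # urlencode percent-encodes the parentheses in limiter syntax for us.
    query = urlencode({"searchTerms": terms,
                       "searchClass": search_class,
                       "count": count})
    return base + "?" + query

# "example.org" stands in for your Evergreen hostname.
print(opensearch_url("example.org", "item_type(r)"))
```

The resulting URL can then be fetched with any HTTP client and the html-full response inspected while troubleshooting a query.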
-
-
diff --git a/docs-antora/modules/development/pages/data_supercat.adoc b/docs-antora/modules/development/pages/data_supercat.adoc
deleted file mode 100644
index ff4489c9b4..0000000000
--- a/docs-antora/modules/development/pages/data_supercat.adoc
+++ /dev/null
@@ -1,252 +0,0 @@
-= Using SuperCat =
-:toc:
-
-== Introduction ==
-
-You can use SuperCat to get data about ISBNs, metarecords, bibliographic
-records, and authority records.
-
-Throughout this section, replace `<hostname>` with the domain or subdomain
-of your Evergreen installation to try these examples on your own system.
-
-== ISBNs ==
-
-Given one ISBN, Evergreen can return a list of related records and ISBNs,
-including alternate editions and translations. To use the SuperCat
-oISBN tool, use http or https to access the following URL.
-
-----
-http://<hostname>/opac/extras/oisbn/<ISBN>
-----
-
-For example, the URL http://gapines.org/opac/extras/oisbn/0439136350 returns
-the following list of related ISBNs:
-
-[source,xml]
-----------------------------------------------------------------------------
-<?xml version="1.0" encoding="UTF-8"?>
-<idlist>
-  <isbn>9780606323475</isbn>
-  <isbn>9780780673809</isbn>
-  <isbn>9780807286029</isbn>
-  <isbn>9780780669642</isbn>
-  <isbn>043965548X</isbn>
-  <isbn>8498386969</isbn>
-  <isbn>9780786222742</isbn>
-  <isbn>9788478885190</isbn>
-  <isbn>0736650962</isbn>
-  <isbn>8478885196</isbn>
-  <isbn>9780439554923</isbn>
-  <isbn>8478885196</isbn>
-  <isbn>0807282324</isbn>
-  <isbn>8478885196</isbn>
-  <isbn>1480614998</isbn>
-  <isbn>8478886559</isbn>
-  <isbn>9780613371063</isbn>
-  <isbn>9782070528189</isbn>
-  <isbn>0786222743</isbn>
-  <isbn>9780329232696</isbn>
-  <isbn>9780807282311</isbn>
-  <isbn>0807286028</isbn>
-  <isbn>9789500421157</isbn>
-  <isbn>9780613359580</isbn>
-  <isbn>9781594130021</isbn>
-  <isbn>0807283150</isbn>
-  <isbn>0747542155</isbn>
-  <isbn>8478886559</isbn>
-</idlist>
-----------------------------------------------------------------------------
-
-== Records ==
-
-=== Record formats ===
-
-First, determine which format you'd like to receive data in. 
To see the -available formats for bibliographic records, visit ----- -http:///opac/extras/supercat/formats/record ----- - -Similarly, authority record formats can be found at -http://libcat.linnbenton.edu/opac/extras/supercat/formats/authority -and metarecord formats can be found at -http://libcat.linnbenton.edu/opac/extras/supercat/formats/metarecord - -For example, http://gapines.org/opac/extras/supercat/formats/authority -shows that the Georgia Pines catalog can return authority records in the -formats _opac_, _marc21_, _marc21-full_, and _marc21-uris_. Supercat -also includes the MIME type of each format, and sometimes also refers -to the documentation for a particular format. - -[source,xml] ----------------------------------------------------------------------------- - - - - opac - text/html - - - marc21 - application/xml - http://www.loc.gov/marc/ - - - marc21-full - application/xml - http://www.loc.gov/marc/ - - - marc21-uris - application/xml - http://www.loc.gov/marc/ - - ----------------------------------------------------------------------------- - -[NOTE] -============================================================================ -atom-full is currently the only format that includes holdings and availability -data for a given bibliographic record. -============================================================================ - - -=== Retrieve records === - -You can retrieve records using URLs in the following format: ----- -http:///opac/extras/supercat/retrieve/// ----- - -For example, http://gapines.org/opac/extras/supercat/retrieve/mods/record/33333 -returns the following record. - -[source,xml] ----------------------------------------------------------------------------- - - - - - Words and pictures / - - - Dodd, Siobhan - - creator - - - text - - - mau - - - Cambridge, Mass - - Candlewick Press - 1992 - 1st U.S. ed. - monographic - - eng - -
print
- 1 v. (unpaged) : col. ill. ; 26 cm. -
- Simple text with picture cues accompany illustrations depicting scenes of everyday life familiar to children, such as getting dressed, attending a party, playing in the park, and taking a bath. - juvenile - Siobhan Dodds. - - Family life - Fiction - - - Vocabulary - Juvenile fiction - - - Rebuses - - - Picture puzzles - Juvenile literature - - - Picture books for children - - - Picture dictionaries, English - Juvenile literature - - - Vocabulary - Juvenile literature - - PZ7.D66275 Wo 1992 - PN6371.5 .D63 1992x - 793.73 - 1564020428 : - 9781564020420 - 91071817 - - DLC - 920206 - 20110608231047.0 - 33333 - -
-
----------------------------------------------------------------------------

=== Recent records ===

SuperCat can return feeds of recently edited or created authority and bibliographic records:

----
http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/<record-class>/<import-or-edit>/<limit>/<date>
----

Note the following features:

* Up to <limit> records imported or edited after the supplied date will be
returned. If you do not supply a date, then the most recent <limit> records
will be returned.
* If you do not supply a limit, then up to 10 records will be returned.
* <feed-type> can be one of atom, html, htmlholdings, marcxml, mods, mods3,
or rss2.

Example: http://gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01

==== Filtering by Org Unit ====

You can generate a similar list, with the added ability to limit by Org Unit, using the item-age browse axis.

To produce an RSS feed by item date rather than bib date, and to restrict it to a particular system within a consortium:

Example: http://gapines.org/opac/extras/browse/atom/item-age/ARL-BOG/1/10

Note the following:

* ARL-BOG should be the short name of the org unit you're interested in
* 1 is the page number (since you are browsing through pages of results)
* 10 is the number of results to return per page

Modifying the 'atom' portion of the URL to 'atom-full' will include catalog links in the results:

Example: http://gapines.org/opac/extras/browse/atom-full/item-age/ARL-BOG/1/10

Modifying the 'atom' portion of the URL to 'html-full' will produce an HTML page that is minimally formatted:

Example: http://gapines.org/opac/extras/browse/html-full/item-age/ARL-BOG/1/10

==== Additional Filters ====

If you'd like to limit to a particular status, you can append `?status=0`,
where `0` is the ID number of the status you'd like to limit to. If you want
to limit to several statuses at once, you can append multiple status parameters
(for example, `?status=0&status=1` will limit to items with a status of either
0 or 1).
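The browse URLs and filters described above are plain strings, so they are easy to assemble in a script. A minimal sketch in Python (the hostname, org unit shortname, and status IDs are the placeholder values from the examples in this section; substitute your own):

```python
from urllib.parse import urlencode

def browse_feed_url(hostname, feed_type, org_unit, page, per_page,
                    statuses=(), copy_location=None):
    """Build a SuperCat item-age browse URL with optional status and
    copyLocation filters, as described in this section."""
    url = (f"http://{hostname}/opac/extras/browse/"
           f"{feed_type}/item-age/{org_unit}/{page}/{per_page}")
    # Repeating the status parameter limits to any of the listed statuses.
    params = [("status", s) for s in statuses]
    if copy_location is not None:
        params.append(("copyLocation", copy_location))
    if params:
        url = url + "?" + urlencode(params)
    return url

# Ten atom-full results for the ARL-BOG org unit, limited to statuses 0 and 1:
print(browse_feed_url("gapines.org", "atom-full", "ARL-BOG", 1, 10,
                      statuses=(0, 1)))
# → http://gapines.org/opac/extras/browse/atom-full/item-age/ARL-BOG/1/10?status=0&status=1
```

Fetching the resulting URL with any HTTP client returns the feed itself.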
- -[TIP] -Limiting to status is a good way to weed out on-order items from your -feeds. - -You can also limit by item location (`?copyLocation=227` where 227 is the -ID of your item location). - diff --git a/docs-antora/modules/development/pages/data_unapi.adoc b/docs-antora/modules/development/pages/data_unapi.adoc deleted file mode 100644 index 5d6fcb18b4..0000000000 --- a/docs-antora/modules/development/pages/data_unapi.adoc +++ /dev/null @@ -1,68 +0,0 @@ -= Using UnAPI = -:toc: - -== URL format == - -Evergreen's unAPI support includes access to many -record types. For example, the following URL would fetch -bib 267 in MODS32 along with holdings and record attribute information: - -https://example.org/opac/extras/unapi?id=tag::U2@bre/267{holdings_xml,acn,acp,mra}&format=mods32 - -To access the new unAPI features, the unAPI ID should have the -following form: - - * +tag::U2@+ - * followed by class name, which may be - ** +bre+ (bibs) - ** +biblio_record_entry_feed+ (multiple bibs) - ** +acl+ (shelving locations) - ** +acn+ (call numbers) - ** +acnp+ (call number prefixes) - ** +acns+ (call number suffixes) - ** +acp+ (items) - ** +acpn+ (item notes) - ** +aou+ (org units) - ** +ascecm+ (item stat cat entries) - ** +auri+ (located URIs) - ** +bmp+ (monographic parts) - ** +cbs+ (bib sources) - ** +ccs+ (item statuses) - ** +circ+ (loan checkout and due dates) - ** +holdings_xml+ (holdings) - ** +mmr+ (metarecords) - ** +mmr_holdings_xml+ (metarecords with holdings) - ** +mmr_mra+ (metarecords with record attributes) - ** +mra+ (record attributes) - ** +sbsum+ (serial basic summaries) - ** +sdist+ (serial distributions) - ** +siss+ (serial issues) - ** +sisum+ (serial index summaries) - ** +sitem+ (serial items) - ** +sssum+ (serial supplement summaries) - ** +sstr+ (serial streams) - ** +ssub+ (serial subscriptions) - ** +sunit+ (serial units) - * followed by +/+ - * followed by a record identifier (or in the case of - the +biblio_record_entry_feed+ class, 
multiple IDs separated - by commas) - * followed, optionally, by limit and offset in square brackets - * followed, optionally, by a comma-separated list of "includes" - enclosed in curly brackets. The list of includes is - the same as the list of classes with the following addition: - ** +bre.extern+ (information from the non-MARC parts of a bib - record) - * followed, optionally, by +/+ and org unit; "-" signifies - the top of the org unit tree - * followed, optionally, by +/+ and org unit depth - * followed, optionally, by +/+ and a path. If the path - is +barcode+ and the class is +acp+, the record ID is taken - to be an item barcode rather than an item ID; for example, in - +tag::U2@acp/ACQ140{acn,bre,mra}/-/0/barcode+, +ACQ140+ is - meant to be an item barcode. - * followed, optionally, by +&format=+ and the format in which the record - should be retrieved. If this part is omitted, the list of available - formats will be retrieved. - - diff --git a/docs-antora/modules/development/pages/intro_opensrf.adoc b/docs-antora/modules/development/pages/intro_opensrf.adoc deleted file mode 100644 index d512978569..0000000000 --- a/docs-antora/modules/development/pages/intro_opensrf.adoc +++ /dev/null @@ -1,1360 +0,0 @@ -= Easing gently into OpenSRF = -:toc: - -== Abstract == -The Evergreen open-source library system serves library consortia composed of -hundreds of branches with millions of patrons - for example, -http://www.georgialibraries.org/statelibrarian/bythenumbers.pdf[the Georgia -Public Library Service PINES system]. One of the claimed advantages of -Evergreen over alternative integrated library systems is the underlying Open -Service Request Framework (OpenSRF, pronounced "open surf") architecture. This -article introduces OpenSRF, demonstrates how to build OpenSRF services through -simple code examples, and explains the technical foundations on which OpenSRF -is built. 
-
== Introducing OpenSRF ==

OpenSRF is a message routing network that offers scalability and failover
support for individual services and entire servers with minimal development and
deployment overhead. You can use OpenSRF to build loosely-coupled applications
that can be deployed on a single server or on clusters of geographically
distributed servers using the same code and minimal configuration changes.
Although copyright statements on some of the OpenSRF code date back to Mike
Rylander's original explorations in 2000, Evergreen was the first major
application to be developed with, and to take full advantage of, the OpenSRF
architecture, starting in 2004. The first official release of OpenSRF was 0.1 in
February 2005 (http://evergreen-ils.org/blog/?p=21), but OpenSRF's development
continues at a steady pace of enhancement and refinement, with the release of
1.0.0 in October 2008 and the most recent release of 1.2.2 in February 2010.

OpenSRF is a distinct break from the architectural approach used by previous
library systems and has more in common with modern Web applications. The
traditional "scale-up" approach to serving more transactions is to purchase a
server with more CPUs and more RAM, possibly splitting the load between a Web
server, a database server, and a business logic server. Evergreen, however, is
built on the Open Service Request Framework (OpenSRF) architecture, which
firmly embraces the "scale-out" approach of spreading transaction load over
cheap commodity servers. The http://evergreen-ils.org/blog/?p=56[initial GPLS
PINES hardware cluster], while certainly impressive, may have offered the
misleading impression that Evergreen is complex and requires a lot of hardware
to run.
- -This article hopes to correct any such lingering impression by demonstrating -that OpenSRF itself is an extremely simple architecture on which one can easily -build applications of many kinds – not just library applications – and that you -can use a number of different languages to call and implement OpenSRF methods -with a minimal learning curve. With an application built on OpenSRF, when you -identify a bottleneck in your application's business logic layer, you can -adjust the number of the processes serving that particular bottleneck on each -of your servers; or if the problem is that your service is resource-hungry, you -could add an inexpensive server to your cluster and dedicate it to running that -resource-hungry service. - -=== Programming language support === - -If you need to develop an entirely new OpenSRF service, you can choose from a -number of different languages in which to implement that service. OpenSRF -client language bindings have been written for C, Java, JavaScript, Perl, and -Python, and server language bindings have been written for C, Perl, and Python. -This article uses Perl examples as a lowest common denominator programming -language. 
Writing an OpenSRF binding for another language is a relatively small -task if that language offers libraries that support the core technologies on -which OpenSRF depends: - - * http://tools.ietf.org/html/rfc3920[Extensible Messaging and Presence -Protocol] (XMPP, sometimes referred to as Jabber) - provides the base messaging -infrastructure between OpenSRF clients and servers - * http://json.org[JavaScript Object Notation] (JSON) - serializes the content -of each XMPP message in a standardized and concise format - * http://memcached.org[memcached] - provides the caching service - * http://tools.ietf.org/html/rfc5424[syslog] - the standard UNIX logging -service - -Unfortunately, the -http://evergreen-ils.org/dokuwiki/doku.php?id=osrf-devel:primer[OpenSRF -reference documentation], although augmented by the -http://evergreen-ils.org/dokuwiki/doku.php?id=osrf-devel:primer[OpenSRF -glossary], blog posts like http://evergreen-ils.org/blog/?p=36[the description -of OpenSRF and Jabber], and even this article, is not a sufficient substitute -for a complete specification on which one could implement a language binding. -The recommended option for would-be developers of another language binding is -to use the Python implementation as the cleanest basis for a port to another -language. - -=== OpenSRF communication flows over XMPP === - -The XMPP messaging service underpins OpenSRF, requiring an XMPP server such -as http://www.ejabberd.im/[ejabberd]. When you start OpenSRF, the first XMPP -clients that connect to the XMPP server are the OpenSRF public and private -_routers_. OpenSRF routers maintain a list of available services and connect -clients to available services. When an OpenSRF service starts, it establishes a -connection to the XMPP server and registers itself with the private router. The -OpenSRF configuration contains a list of public OpenSRF services, each of which -must also register with the public router. 
Services and clients connect to the -XMPP server using a single set of XMPP client credentials (for example, -`opensrf@private.localhost`), but use XMPP resource identifiers to -differentiate themselves in the Jabber ID (JID) for each connection. For -example, the JID for a copy of the `opensrf.simple-text` service with process -ID `6285` that has connected to the `private.localhost` domain using the -`opensrf` XMPP client credentials could be -`opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285`. - -[#OpenSRFOverHTTP] -=== OpenSRF communication flows over HTTP === -Any OpenSRF service registered with the public router is accessible via the -OpenSRF HTTP Translator. The OpenSRF HTTP Translator implements the -http://www.open-ils.org/dokuwiki/doku.php?id=opensrf_over_http[OpenSRF-over-HTTP -proposed specification] as an Apache module that translates HTTP requests into -OpenSRF requests and returns OpenSRF results as HTTP results to the initiating -HTTP client. - -.Issuing an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator -[source,bash] --------------------------------------------------------------------------------- -# curl request broken up over multiple lines for legibility -curl -H "X-OpenSRF-service: opensrf.simple-text" \ # <1> - --data 'osrf-msg=[ \ # <2> - {"__c":"osrfMessage","__p":{"threadTrace":0,"locale":"en-CA", \ # <3> - "type":"REQUEST","payload": {"__c":"osrfMethod","__p": \ - {"method":"opensrf.simple-text.reverse","params":["foobar"]} \ - }} \ - }]' \ -http://localhost/osrf-http-translator \ # <4> --------------------------------------------------------------------------------- - -<1> The `X-OpenSRF-service` header identifies the OpenSRF service of interest. - -<2> The POST request consists of a single parameter, the `osrf-msg` value, -which contains a JSON array. 
-
<3> The first object is an OpenSRF message (`"__c":"osrfMessage"`) with a set of
parameters (`"__p":{}`) containing:

 * the identifier for the request (`"threadTrace":0`); this value is echoed
back in the result

 * the message type (`"type":"REQUEST"`)

 * the locale for the message; if the OpenSRF method is locale-sensitive, it
can check the locale for each OpenSRF request and return different information
depending on the locale

 * the payload of the message (`"payload":{}`) containing the OpenSRF method
request (`"__c":"osrfMethod"`) and its parameters (`"__p":{}`), which in turn
contains:

 ** the method name for the request (`"method":"opensrf.simple-text.reverse"`)

 ** a set of JSON parameters to pass to the method (`"params":["foobar"]`); in
this case, a single string `"foobar"`

<4> The URL on which the OpenSRF HTTP translator is listening,
`/osrf-http-translator` is the default location in the Apache example
configuration files shipped with the OpenSRF source, but this is configurable.

[#httpResults]
.Results from an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator
[source,bash]
--------------------------------------------------------------------------------
# HTTP response broken up over multiple lines for legibility
[{"__c":"osrfMessage","__p": \ # <1>
  {"threadTrace":0, "payload": \ # <2>
    {"__c":"osrfResult","__p": \ # <3>
      {"status":"OK","content":"raboof","statusCode":200} \ # <4>
    },"type":"RESULT","locale":"en-CA" \ # <5>
  }
},
{"__c":"osrfMessage","__p": \ # <6>
  {"threadTrace":0,"payload": \ # <7>
    {"__c":"osrfConnectStatus","__p": \ # <8>
      {"status":"Request Complete","statusCode":205} \ # <9>
    },"type":"STATUS","locale":"en-CA" \ # <10>
  }
}]
--------------------------------------------------------------------------------

<1> The OpenSRF HTTP Translator returns an array of JSON objects in its
response.
Each object in the response is an OpenSRF message
(`"__c":"osrfMessage"`) with a collection of response parameters (`"__p":`).

<2> The OpenSRF message identifier (`"threadTrace":0`) confirms that this
message is in response to the request matching the same identifier.

<3> The message includes a payload JSON object (`"payload":`) with an OpenSRF
result for the request (`"__c":"osrfResult"`).

<4> The result includes a status indicator string (`"status":"OK"`), the content
of the result response - in this case, a single string "raboof"
(`"content":"raboof"`) - and an integer status code for the request
(`"statusCode":200`).

<5> The message also includes the message type (`"type":"RESULT"`) and the
message locale (`"locale":"en-CA"`).

<6> The second message in the set of results from the response.

<7> Again, the message identifier confirms that this message is in response to
a particular request.

<8> The payload of the message denotes that this message is an
OpenSRF connection status message (`"__c":"osrfConnectStatus"`), with some
information about the particular OpenSRF connection that was used for this
request.

<9> The response parameters for an OpenSRF connection status message include a
verbose status (`"status":"Request Complete"`) and an integer status code for
the connection status (`"statusCode":205`).

<10> The message also includes the message type (`"type":"STATUS"`) and the
message locale (`"locale":"en-CA"`).


[TIP]
Before adding a new public OpenSRF service, ensure that it does
not introduce privilege escalation or unchecked access to data. For example,
the Evergreen `open-ils.cstore` private service is an object-relational mapper
that provides read and write access to the entire Evergreen database, so it
would be catastrophic to expose that service publicly.
In comparison, the
Evergreen `open-ils.pcrud` public service offers the same functionality as
`open-ils.cstore` to any connected HTTP client or OpenSRF client, but the
additional authentication and authorization layer in `open-ils.pcrud` prevents
unchecked access to Evergreen's data.

=== Stateless and stateful connections ===

OpenSRF supports both _stateless_ and _stateful_ connections. When an OpenSRF
client issues a `REQUEST` message in a _stateless_ connection, the router
forwards the request to the next available service and the service returns the
result directly to the client.

.REQUEST flow in a stateless connection
image:media/REQUEST.png[REQUEST flow in a stateless connection]

When an OpenSRF client issues a `CONNECT` message to create a _stateful_ connection, the
router returns the Jabber ID of the next available service to the client so
that the client can issue one or more `REQUEST` messages directly to that
particular service and the service will return corresponding `RESULT` messages
directly to the client. Until the client issues a `DISCONNECT` message, that
particular service is only available to the requesting client. Stateful connections
are useful for clients that need to make many requests of a particular service,
as they avoid the intermediary step of contacting the router for each request, as
well as for operations that require a controlled sequence of commands, such as a
set of database INSERT, UPDATE, and DELETE statements within a transaction.

.CONNECT, REQUEST, and DISCONNECT flow in a stateful connection
image:media/CONNECT.png[CONNECT, REQUEST, and DISCONNECT flow in a stateful connection]

== Enough jibber-jabber: writing an OpenSRF service ==

Imagine an application architecture in which 10 lines of Perl or Python, using
the data types native to each language, are enough to implement a method that
can then be deployed and invoked seamlessly across hundreds of servers.
You -have just imagined developing with OpenSRF – it is truly that simple. Under the -covers, of course, the OpenSRF language bindings do an incredible amount of -work on behalf of the developer. An OpenSRF application consists of one or more -OpenSRF services that expose methods: for example, the `opensrf.simple-text` -http://git.evergreen-ils.org/?p=OpenSRF.git;a=blob_plain;f=src/perl/lib/OpenSRF/Application/Demo/SimpleText.pm[demonstration -service] exposes the `opensrf.simple-text.split()` and -`opensrf.simple-text.reverse()` methods. Each method accepts zero or more -arguments and returns zero or one results. The data types supported by OpenSRF -arguments and results are typical core language data types: strings, numbers, -booleans, arrays, and hashes. - -To implement a new OpenSRF service, perform the following steps: - - 1. Include the base OpenSRF support libraries - 2. Write the code for each of your OpenSRF methods as separate procedures - 3. Register each method - 4. Add the service definition to the OpenSRF configuration files - -For example, the following code implements an OpenSRF service. 
The service -includes one method named `opensrf.simple-text.reverse()` that accepts one -string as input and returns the reversed version of that string: - -[source,perl] --------------------------------------------------------------------------------- -#!/usr/bin/perl - -package OpenSRF::Application::Demo::SimpleText; - -use strict; - -use OpenSRF::Application; -use parent qw/OpenSRF::Application/; - -sub text_reverse { - my ($self , $conn, $text) = @_; - my $reversed_text = scalar reverse($text); - return $reversed_text; -} - -__PACKAGE__->register_method( - method => 'text_reverse', - api_name => 'opensrf.simple-text.reverse' -); --------------------------------------------------------------------------------- - -Ten lines of code, and we have a complete OpenSRF service that exposes a single -method and could be deployed quickly on a cluster of servers to meet your -application's ravenous demand for reversed strings! If you're unfamiliar with -Perl, the `use OpenSRF::Application; use parent qw/OpenSRF::Application/;` -lines tell this package to inherit methods and properties from the -`OpenSRF::Application` module. For example, the call to -`__PACKAGE__->register_method()` is defined in `OpenSRF::Application` but due to -inheritance is available in this package (named by the special Perl symbol -`__PACKAGE__` that contains the current package name). The `register_method()` -procedure is how we introduce a method to the rest of the OpenSRF world. 
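`register_method()` can also record documentation about a method at registration time. The `srfsh` introspection example later in this article shows a `signature` hash with parameter and return-value descriptions; a sketch of registering the same method with that metadata might look like the following (the exact keys are an assumption inferred from that introspection output, not a definitive reference):

[source,perl]
--------------------------------------------------------------------------------
# Sketch: registering text_reverse with a signature hash whose keys
# (desc, params, return) mirror the introspection output shown later.
__PACKAGE__->register_method(
    method    => 'text_reverse',
    api_name  => 'opensrf.simple-text.reverse',
    signature => {
        desc   => 'Returns the input string in reverse order',
        params => [
            { name => 'text', type => 'string', desc => 'The string to reverse' }
        ],
        'return' => {
            desc => 'Returns the input string in reverse order',
            type => 'string'
        }
    }
);
--------------------------------------------------------------------------------

Registering complete signature information like this is what makes the `introspect` command's output useful to other developers.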
-
[#serviceRegistration]
=== Registering a service with the OpenSRF configuration files ===

Two files control most of the configuration for OpenSRF:

 * `opensrf.xml` contains the configuration for the service itself as well as
a list of which application servers in your OpenSRF cluster should start
the service
 * `opensrf_core.xml` (often referred to as the "bootstrap configuration"
file) contains the OpenSRF networking information, including the XMPP server
connection credentials for the public and private routers; you only need to touch
this for a new service if the new service needs to be accessible via the
public router

Begin by defining the service itself in `opensrf.xml`. To register the
`opensrf.simple-text` service, add the following section to the `<apps>`
element (corresponding to the XPath `/opensrf/default/apps/`):

[source,xml]
--------------------------------------------------------------------------------
<opensrf.simple-text> <!--1-->
    <keepalive>3</keepalive> <!--2-->
    <stateless>1</stateless> <!--3-->
    <language>perl</language> <!--4-->
    <implementation>OpenSRF::Application::Demo::SimpleText</implementation> <!--5-->
    <max_requests>100</max_requests> <!--6-->
    <unix_config>
        <max_requests>1000</max_requests> <!--7-->
        <unix_log>opensrf.simple-text_unix.log</unix_log> <!--8-->
        <unix_sock>opensrf.simple-text_unix.sock</unix_sock> <!--9-->
        <unix_pid>opensrf.simple-text_unix.pid</unix_pid> <!--10-->
        <min_children>5</min_children> <!--11-->
        <max_children>15</max_children> <!--12-->
        <min_spare_children>2</min_spare_children> <!--13-->
        <max_spare_children>5</max_spare_children> <!--14-->
    </unix_config>
</opensrf.simple-text>
--------------------------------------------------------------------------------

<1> The element name is the name that the OpenSRF control scripts use to refer
to the service.

<2> Specifies the interval (in seconds) between checks to determine if the
service is still running.

<3> Specifies whether OpenSRF clients can call methods from this service
without first having to create a connection to a specific service backend
process for that service. If the value is `1`, then the client can simply
issue a request and the router will forward the request to an available
service and the result will be returned directly to the client.
-
<4> Specifies the programming language in which the service is implemented

<5> Specifies the name of the library or module in which the service is implemented

<6> (C implementations): Specifies the maximum number of requests a process
serves before it is killed and replaced by a new process.

<7> (Perl implementations): Specifies the maximum number of requests a process
serves before it is killed and replaced by a new process.

<8> The name of the log file for language-specific log messages such as syntax
warnings.

<9> The name of the UNIX socket used for inter-process communications.

<10> The name of the PID file for the master process for the service.

<11> The minimum number of child processes that should be running at any given
time.

<12> The maximum number of child processes that should be running at any given
time.

<13> The minimum number of child processes that should be available to handle
incoming requests. If there are fewer than this number of spare child
processes, new processes will be spawned.

<14> The maximum number of child processes that should be available to handle
incoming requests. If there are more than this number of spare child processes,
the extra processes will be killed.

To make the service accessible via the public router, you must also
edit the `opensrf_core.xml` configuration file to add the service to the list
of publicly accessible services:

.Making a service publicly accessible in `opensrf_core.xml`
[source,xml]
--------------------------------------------------------------------------------
<router> <!--1-->
    <name>router</name>
    <domain>public.localhost</domain> <!--2-->
    <services>
        <service>opensrf.math</service>
        <service>opensrf.simple-text</service> <!--3-->
    </services>
</router>
--------------------------------------------------------------------------------

<1> This section of the `opensrf_core.xml` file is located at XPath
`/config/opensrf/routers/`.

<2> `public.localhost` is the canonical public router domain in the OpenSRF
installation instructions.
-
<3> Each `<service>` element contained in the `<services>` element
offers its service via the public router as well as the private router.

Once you have defined the new service, you must restart the OpenSRF Router
to retrieve the new configuration and start or restart the service itself.

=== Calling an OpenSRF method ===

OpenSRF clients in any supported language can invoke OpenSRF services in any
supported language. So let's see a few examples of how we can call our fancy
new `opensrf.simple-text.reverse()` method:

==== Calling OpenSRF methods from the srfsh client ====

`srfsh` is a command-line tool installed with OpenSRF that you can use to call
OpenSRF methods. To call an OpenSRF method, issue the `request` command and pass
the OpenSRF service and method name as the first two arguments; then pass a list
of JSON objects as the arguments to the method being invoked.

The following example calls the `opensrf.simple-text.reverse` method of the
`opensrf.simple-text` OpenSRF service, passing the string `"foobar"` as the
only method argument:

[source,sh]
--------------------------------------------------------------------------------
$ srfsh
srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar"

Received Data: "raboof"

=------------------------------------
Request Completed Successfully
Request Time in seconds: 0.016718
=------------------------------------
--------------------------------------------------------------------------------

[#opensrfIntrospection]
==== Getting documentation for OpenSRF methods from the srfsh client ====

The `srfsh` client also gives you command-line access to retrieving metadata
about OpenSRF services and methods. For a given OpenSRF method, for example,
you can retrieve information such as the minimum number of required arguments,
the data type and a description of each argument, the package or library in
which the method is implemented, and a description of the method.
To retrieve
the documentation for an OpenSRF method from `srfsh`, issue the `introspect`
command, followed by the name of the OpenSRF service and (optionally) the
name of the OpenSRF method. If you do not pass a method name to the `introspect`
command, `srfsh` lists all of the methods offered by the service. If you pass
a partial method name, `srfsh` lists all of the methods that match that portion
of the method name.

[NOTE]
The quality and availability of the descriptive information for each
method depends on the developer registering the method with complete and
accurate information. The quality varies across the set of OpenSRF and
Evergreen APIs, although some effort is being put towards improving the
state of the internal documentation.

[source,sh]
--------------------------------------------------------------------------------
srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse"
--> opensrf.simple-text

Received Data: {
  "__c":"opensrf.simple-text",
  "__p":{
    "api_level":1,
    "stream":0, \ # <1>
    "object_hint":"OpenSRF_Application_Demo_SimpleText",
    "remote":0,
    "package":"OpenSRF::Application::Demo::SimpleText", \ # <2>
    "api_name":"opensrf.simple-text.reverse", \ # <3>
    "server_class":"opensrf.simple-text",
    "signature":{ \ # <4>
      "params":[ \ # <5>
        {
          "desc":"The string to reverse",
          "name":"text",
          "type":"string"
        }
      ],
      "desc":"Returns the input string in reverse order\n", \ # <6>
      "return":{ \ # <7>
        "desc":"Returns the input string in reverse order",
        "type":"string"
      }
    },
    "method":"text_reverse", \ # <8>
    "argc":1 \ # <9>
  }
}
--------------------------------------------------------------------------------

<1> `stream` denotes whether the method supports streaming responses or not.

<2> `package` identifies which package or library implements the method.

<3> `api_name` identifies the name of the OpenSRF method.
-
<4> `signature` is a hash that describes the parameters for the method.

<5> `params` is an array of hashes describing each parameter in the method;
each parameter has a description (`desc`), name (`name`), and type (`type`).

<6> `desc` is a string that describes the method itself.

<7> `return` is a hash that describes the return value for the method; it
contains a description of the return value (`desc`) and the type of the
returned value (`type`).

<8> `method` identifies the name of the function or method in the source
implementation.

<9> `argc` is an integer describing the minimum number of arguments that
must be passed to this method.

==== Calling OpenSRF methods from Perl applications ====

To call an OpenSRF method from Perl, you must connect to the OpenSRF service,
issue the request to the method, and then retrieve the results.

[source,perl]
--------------------------------------------------------------------------------
#!/usr/bin/perl
use strict;
use OpenSRF::AppSession;
use OpenSRF::System;

OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml'); # <1>

my $session = OpenSRF::AppSession->create("opensrf.simple-text"); # <2>

print "substring: Accepts a string and a number as input, returns a string\n";
my $result = $session->request("opensrf.simple-text.substring", "foobar", 3); # <3>
my $request = $result->gather(); # <4>
print "Substring: $request\n\n";

print "split: Accepts two strings as input, returns an array of strings\n";
$request = $session->request("opensrf.simple-text.split", "This is a test", " "); # <5>
my $output = "Split: [";
my $element;
while ($element = $request->recv()) { # <6>
    $output .= $element->content . ", "; # <7>
}
$output =~ s/, $/]/;
print $output . "\n\n";

print "statistics: Accepts an array of strings as input, returns a hash\n";
my @many_strings = (
    "First I think I'll have breakfast",
    "Then I think that lunch would be nice",
    "And then seventy desserts to finish off the day"
);

$result = $session->request("opensrf.simple-text.statistics", \@many_strings); # <8>
$request = $result->gather(); # <9>
print "Length: " . $request->{'length'} . "\n";
print "Word count: " . $request->{'word_count'} . "\n";

$session->disconnect(); # <10>
--------------------------------------------------------------------------------

<1> The `OpenSRF::System->bootstrap_client()` method reads the OpenSRF
configuration information from the indicated file and creates an XMPP client
connection based on that information.

<2> The `OpenSRF::AppSession->create()` method accepts one argument - the name
of the OpenSRF service to which you want to make one or more requests -
and returns an object prepared to use the client connection to make those
requests.

<3> The `OpenSRF::AppSession->request()` method accepts a minimum of one
argument - the name of the OpenSRF method to which you want to make a request -
followed by zero or more arguments to pass to the OpenSRF method as input
values. This example passes a string and an integer to the
`opensrf.simple-text.substring` method defined by the `opensrf.simple-text`
OpenSRF service.

<4> The `gather()` method, called on the result object returned by the
`request()` method, iterates over all of the possible results from the result
object and returns a single variable.

<5> This `request()` call passes two strings to the `opensrf.simple-text.split`
method defined by the `opensrf.simple-text` OpenSRF service and returns (via
`gather()`) a reference to an array of results.

<6> The `opensrf.simple-text.split()` method is a streaming method that
returns an array of results with one element per `recv()` call on the
result object.
We could use the `gather()` method to retrieve all of the
results in a single array reference, but instead we simply iterate over
the result variable until there are no more results to retrieve.

<7> While the `gather()` convenience method returns only the content of the
complete set of results for a given request, the `recv()` method returns an
OpenSRF result object with `status`, `statusCode`, and `content` fields as
we saw earlier.

<8> This `request()` call passes a reference to an array of strings to the
`opensrf.simple-text.statistics` method defined by the `opensrf.simple-text`
OpenSRF service.

<9> The result object returns a hash reference via `gather()`. The hash
contains the `length` and `word_count` keys we defined in the method.

<10> The `OpenSRF::AppSession->disconnect()` method closes the XMPP client
connection and cleans up resources associated with the session.

=== Accepting and returning more interesting data types ===

Of course, the example of accepting a single string and returning a single
string is not very interesting. In real life, our applications tend to pass
around multiple arguments, including arrays and hashes. Fortunately, OpenSRF
makes that easy to deal with; in Perl, for example, returning a reference to
the data type does the right thing. In the following example of a method that
returns a list, we accept two arguments of type string: the string to be split,
and the delimiter that should be used to split the string.


.Text splitting method - atomic mode
[source,perl]
--------------------------------------------------------------------------------
sub text_split {
    my $self = shift;
    my $conn = shift;
    my $text = shift;
    my $delimiter = shift || ' ';

    my @split_text = split $delimiter, $text;
    return \@split_text;
}

__PACKAGE__->register_method(
    method => 'text_split',
    api_name => 'opensrf.simple-text.split'
);
--------------------------------------------------------------------------------

We simply return a reference to the list, and OpenSRF does the rest of the work
for us to convert the data into the language-independent format that is then
returned to the caller. As a caller of a given method, you must rely on the
documentation supplied when the method was registered to determine the expected
data structures - if the developer has added the appropriate documentation.

=== Accepting and returning Evergreen objects ===

OpenSRF is agnostic about objects; its role is to pass JSON back and forth
between OpenSRF clients and services, and it allows the specific clients and
services to define their own semantics for the JSON structures. On top of that
infrastructure, Evergreen offers the fieldmapper: an object-relational mapper
that provides a complete definition of all objects, their properties, their
relationships to other objects, the permissions required to create, read,
update, or delete objects of that type, and the database table or view on which
they are based.

The Evergreen fieldmapper offers a great deal of convenience for working with
complex system objects beyond the basic mapping of classes to database
schemas. Although the result is passed over the wire as a JSON object
containing the indicated fields, fieldmapper-aware clients then turn those
JSON objects into native objects with setter / getter methods for each field.
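The wire format for such an object is deliberately terse: a `__c` property
holds the class hint that identifies the class, and a `__p` property holds an
array of the field values in the order in which the fields are declared for
that class. A sketch, using the `mous` class hint from the "Open User Summary"
example below with invented values:

[source,json]
--------------------------------------------------------------------------------
{"__c":"mous","__p":["15.00","20.00","5.00",5]}
--------------------------------------------------------------------------------

A fieldmapper-aware client inflates this structure into a native object whose
named setter / getter methods hide the positional encoding from application
code.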


All of this metadata about Evergreen objects is defined in the
fieldmapper configuration file (`/openils/conf/fm_IDL.xml`), and access to
these classes is provided by the `open-ils.cstore`, `open-ils.pcrud`, and
`open-ils.reporter-store` OpenSRF services which parse the fieldmapper
configuration file and dynamically register OpenSRF methods for creating,
reading, updating, and deleting all of the defined classes.

.Example fieldmapper class definition for "Open User Summary"
[source,xml]
--------------------------------------------------------------------------------
<class id="mous" controller="open-ils.cstore open-ils.pcrud"
    oils_obj:fieldmapper="money::open_user_summary"
    oils_persist:tablename="money.open_usr_summary"
    reporter:label="Open User Summary">                                <!--1-->
    <fields oils_persist:primary="usr" oils_persist:sequence="">       <!--2-->
        <field reporter:label="Balance Owed" name="balance_owed"
            reporter:datatype="money" />                               <!--3-->
        <field reporter:label="Total Owed" name="total_owed"
            reporter:datatype="money" />
        <field reporter:label="Total Paid" name="total_paid"
            reporter:datatype="money" />
        <field reporter:label="User ID" name="usr"
            reporter:datatype="link" />
    </fields>
    <links>                                                            <!--4-->
        <link field="usr" reltype="has_a" key="id" map="" class="au"/>
    </links>
    <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> <!--5-->
        <actions>
            <retrieve permission="VIEW_USER">                          <!--6-->
                <context link="usr" field="home_ou"/>                  <!--7-->
            </retrieve>
        </actions>
    </permacrud>
</class>
--------------------------------------------------------------------------------

<1> The `<class>` element defines the class:

 * The `id` attribute defines the _class hint_ that identifies the class both
elsewhere in the fieldmapper configuration file, such as in the value of the
`field` attribute of the `<link>` element, and in the JSON object itself when
it is instantiated. For example, an "Open User Summary" JSON object would have
the top level property of `"__c":"mous"`.

 * The `controller` attribute identifies the services that have direct access
to this class. If `open-ils.pcrud` is not listed, for example, then there is
no means to directly access members of this class through a public service.

 * The `oils_obj:fieldmapper` attribute defines the name of the Perl
fieldmapper class that will be dynamically generated to provide setter and
getter methods for instances of the class.

 * The `oils_persist:tablename` attribute identifies the schema name and table
name of the database table that stores the data that represents the instances
of this class. In this case, the schema is `money` and the table is
`open_usr_summary`.

 * The `reporter:label` attribute defines a human-readable name for the class
used in the reporting interface to identify the class. These names are defined
in English in the fieldmapper configuration file; however, they are extracted
so that they can be translated and served in the user's language of choice.

<2> The `<fields>` element lists all of the fields that belong to the object.

 * The `oils_persist:primary` attribute identifies the field that acts as the
primary key for the object; in this case, the field with the name `usr`.

 * The `oils_persist:sequence` attribute identifies the sequence object
(if any) in this database that provides values for new instances of this
class. In this case, the primary key is defined by a field that is linked to a
different table, so no sequence is used to populate these instances.

<3> Each `<field>` element defines a single field with the following attributes:

 * The `name` attribute identifies the column name of the field in the
underlying database table as well as providing a name for the setter / getter
method that can be invoked in the JSON or native version of the object.

 * The `reporter:datatype` attribute defines how the reporter should treat
the contents of the field for the purposes of querying and display.

 * The `reporter:label` attribute can be used to provide a human-readable name
for each field; without it, the reporter falls back to the value of the `name`
attribute.

<4> The `<links>` element contains a set of zero or more `<link>` elements,
each of which defines a relationship between the class being described and
another class.

 * The `field` attribute identifies the field named in this class that links
to the external class.

 * The `reltype` attribute identifies the kind of relationship between the
classes; in the case of `has_a`, each value in the `usr` field is guaranteed
to have a corresponding value in the external class.

 * The `key` attribute identifies the name of the field in the external
class to which this field links.


 * The rarely-used `map` attribute identifies a second class to which
the external class links; it enables this field to define a direct
relationship to an external class with one degree of separation, to
avoid having to retrieve all of the linked members of an intermediate
class just to retrieve the instances from the actual desired target class.

 * The `class` attribute identifies the external class to which this field
links.

<5> The `<permacrud>` element defines the permissions that must have been
granted to a user to operate on instances of this class.

<6> The `<retrieve>` element is one of four possible children of the
`<actions>` element that define the permissions required for each action:
create, retrieve, update, and delete.

 * The `permission` attribute identifies the name of the permission that must
have been granted to the user to perform the action.

 * The `contextfield` attribute, if it exists, defines the field in this class
that identifies the library within the system for which the user must have
privileges to work. If a user has been granted a given permission, but has not
been granted privileges to work at a given library, they cannot perform the
action at that library.

<7> The rarely-used `<context>` element identifies a linked field (`link`
attribute) in this class which links to an external class that holds the field
(`field` attribute) that identifies the library within the system for which the
user must have privileges to work.

When you retrieve an instance of a class, you can ask for the result to
_flesh_ some or all of the linked fields of that class, so that the linked
instances are returned embedded directly in your requested instance. In that
same request you can ask for the fleshed instances to in turn have their linked
fields fleshed. By bundling all of this into a single request and result
sequence, you can avoid the network overhead of requiring the client to request
the base object, then request each linked object in turn.
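From a Perl client, a fleshed retrieval is expressed as an options hash on the
retrieve call. A sketch against the `open-ils.cstore` service (the method name
follows the cstore naming convention; treat the specific class, record ID, and
field names here as illustrative):

[source,perl]
--------------------------------------------------------------------------------
# Illustrative sketch: retrieve a user and flesh the linked home library
# in the same round trip, rather than issuing a second retrieval for it.
my $cstore = OpenSRF::AppSession->create('open-ils.cstore');
my $user = $cstore->request(
    'open-ils.cstore.direct.actor.user.retrieve', 5,
    { flesh => 1, flesh_fields => { au => ['home_ou'] } }
)->gather();
# $user->home_ou() now returns a fleshed org unit object, not a bare ID
--------------------------------------------------------------------------------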


You can also iterate over a collection of instances and set the automatically
generated `isdeleted`, `isupdated`, or `isnew` properties to indicate that
the given instance has been deleted, updated, or created respectively.
Evergreen can then act in batch mode over the collection to perform the
requested actions on any of the instances that have been flagged for action.

=== Returning streaming results ===

In the previous implementation of the `opensrf.simple-text.split` method, we
returned a reference to the complete array of results. For small values being
delivered over the network, this is perfectly acceptable, but for large sets of
values this can pose a number of problems for the requesting client. Consider a
service that returns a set of bibliographic records in response to a query like
"all records edited in the past month"; if the underlying database is
relatively active, that could result in thousands of records being returned as
a single network request. The client would be forced to block until all of the
results are returned, likely resulting in a significant delay, and depending on
the implementation, correspondingly large amounts of memory might be consumed
as all of the results are read from the network in a single block.

OpenSRF offers a solution to this problem. If the method returns results that
can be divided into separate meaningful units, you can register the OpenSRF
method as a streaming method and enable the client to loop over the results one
unit at a time until the method returns no further results. In addition to
registering the method with the provided name, OpenSRF also registers an additional
method with `.atomic` appended to the method name. The `.atomic` variant gathers
all of the results into a single block to return to the client, giving the caller
the ability to choose either streaming or atomic results from a single method
definition.
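From the client's perspective, choosing between the two styles is just a matter
of method name. A sketch, reusing the client session from the earlier Perl
example:

[source,perl]
--------------------------------------------------------------------------------
# Streaming: handle each value as soon as it arrives
my $req = $session->request('opensrf.simple-text.split', 'This is a test', ' ');
while (my $resp = $req->recv()) {
    print $resp->content, "\n";
}

# Atomic: block until the complete result set arrives as one array reference
my $words = $session->request('opensrf.simple-text.split.atomic',
    'This is a test', ' ')->gather();
--------------------------------------------------------------------------------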


In the following example, the text splitting method has been reimplemented to
support streaming; very few changes are required:

.Text splitting method - streaming mode
[source,perl]
--------------------------------------------------------------------------------
sub text_split {
    my $self = shift;
    my $conn = shift;
    my $text = shift;
    my $delimiter = shift || ' ';

    my @split_text = split $delimiter, $text;
    foreach my $string (@split_text) { # <1>
        $conn->respond($string);
    }
    return undef;
}

__PACKAGE__->register_method(
    method => 'text_split',
    api_name => 'opensrf.simple-text.split',
    stream => 1 # <2>
);
--------------------------------------------------------------------------------

<1> Rather than returning a reference to the array, a streaming method loops
over the contents of the array and invokes the `respond()` method of the
connection object on each element of the array.

<2> Registering the method as a streaming method instructs OpenSRF to also
register an atomic variant (`opensrf.simple-text.split.atomic`).

=== Error! Warning! Info! Debug! ===

As hard as it may be to believe, it is true: applications sometimes do not
behave in the expected manner, particularly when they are still under
development. The server language bindings for OpenSRF include integrated
support for logging messages at the levels of ERROR, WARNING, INFO, DEBUG, and
the extremely verbose INTERNAL to either a local file or to a syslogger
service. The destination of the log files, and the level of verbosity to be
logged, are set in the `opensrf_core.xml` configuration file. To add logging to
our Perl example, we just have to add the `OpenSRF::Utils::Logger` package to our
list of used Perl modules, then invoke the logger at the desired logging level.

You can include many calls to the OpenSRF logger; only those at or above your
configured logging level will actually hit the log. The following
example exercises all of the available logging levels in OpenSRF:

[source,perl]
--------------------------------------------------------------------------------
use OpenSRF::Utils::Logger;
my $logger = OpenSRF::Utils::Logger;
# some code in some function
{
    $logger->error("Hmm, something bad DEFINITELY happened!");
    $logger->warn("Hmm, something bad might have happened.");
    $logger->info("Something happened.");
    $logger->debug("Something happened; here are some more details.");
    $logger->internal("Something happened; here are all the gory details.");
}
--------------------------------------------------------------------------------

If you call the mythical OpenSRF method containing the preceding OpenSRF logger
statements on a system running at the default logging level of INFO, you will
only see the INFO, WARN, and ERR messages, as follows:

.Results of logging calls at the default level of INFO
--------------------------------------------------------------------------------
[2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened!
[2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:] Hmm, something bad might have happened.
[2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:] Something happened.
--------------------------------------------------------------------------------

If you then increase the logging level to INTERNAL (5), the logs will
contain much more information, as follows:

.Results of logging calls at the log level of INTERNAL
--------------------------------------------------------------------------------
[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened!
[2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:] Hmm, something bad might have happened.
[2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:] Something happened.
[2010-03-17 22:48:11] opensrf.simple-text [DEBG:5934:SimpleText.pm:280:] Something happened; here are some more details.
[2010-03-17 22:48:11] opensrf.simple-text [INTL:5934:SimpleText.pm:281:] Something happened; here are all the gory details.
[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:283:] Resolver did not find a cache hit
[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:Cache.pm:125:] Stored opensrf.simple-text.test_cache.masaa => "here" in memcached server
[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:579:] Coderef for [OpenSRF::Application::Demo::SimpleText::test_cache] has been run
[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:586:] A top level Request object is responding de nada
[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:190:] Method duration for [opensrf.simple-text.test_cache]: 10.005
[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:780:] Calling queue_wait(0)
[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:769:] Resending...0
[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:450:] In send
[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:] AppSession sending RESULT to opensrf@private.localhost/_dan-karmic-liblap_1268880489.752154_5943 with threadTrace [1]
[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:] AppSession sending STATUS to opensrf@private.localhost/_dan-karmic-liblap_1268880489.752154_5943 with threadTrace [1]
...
--------------------------------------------------------------------------------

To see everything that is happening in OpenSRF, try leaving your logging level
set to INTERNAL for a few minutes - just ensure that you have a lot of free disk
space available if you have a moderately busy system!
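When a busy system is logging at INTERNAL, a little shell filtering goes a long
way. On a production system the log destination depends on your logging
configuration, so the sketch below writes a small sample file first; the `grep`
pattern matches the level column of the log format shown above:

[source,bash]
--------------------------------------------------------------------------------
# Create a small sample in the same format as the log excerpts above.
cat > /tmp/osrf-sample.log <<'EOF'
[2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened!
[2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:] Hmm, something bad might have happened.
[2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:] Something happened.
EOF

# Keep only the ERR and WARN entries.
grep -E '\[(ERR |WARN)' /tmp/osrf-sample.log
--------------------------------------------------------------------------------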


=== Caching results: one secret of scalability ===

If you have ever used an application that depends on a remote Web service
outside of your control (say, if you need to retrieve results from a
microblogging service), you know the pain of latency and dependability (or the
lack thereof). To improve response time in OpenSRF applications, you can take
advantage of the support offered by the `OpenSRF::Utils::Cache` module for
communicating with a local instance or cluster of memcached daemons to store
and retrieve persistent values.

[source,perl]
--------------------------------------------------------------------------------
use OpenSRF::Utils::Cache; # <1>
sub test_cache {
    my $self = shift;
    my $conn = shift;
    my $test_key = shift;
    my $cache = OpenSRF::Utils::Cache->new('global'); # <2>
    my $cache_key = "opensrf.simple-text.test_cache.$test_key"; # <3>
    my $result = $cache->get_cache($cache_key) || undef; # <4>
    if ($result) {
        $logger->info("Resolver found a cache hit");
        return $result;
    }
    sleep 10; # <5>
    my $cache_timeout = 300; # <6>
    $cache->put_cache($cache_key, "here", $cache_timeout); # <7>
    return "There was no cache hit.";
}
--------------------------------------------------------------------------------

This example:

<1> Imports the OpenSRF::Utils::Cache module

<2> Creates a cache object

<3> Creates a unique cache key based on the OpenSRF method name and
request input value

<4> Checks to see if the cache key already exists; if so, it immediately
returns that value

<5> If the cache key does not exist, the code sleeps for 10 seconds to
simulate a call to a slow remote Web service, or an intensive process

<6> Sets a value for the lifetime of the cache key in seconds

<7> When the code has retrieved its value, then it can create the cache
entry, with the cache key, value to be stored ("here"), and the timeout
value in seconds to ensure that we do not return stale data on subsequent
calls

=== Initializing the service and its children: child labour ===

When an OpenSRF service is started, it looks for a procedure called
`initialize()` to set up any global variables shared by all of the children of
the service. The `initialize()` procedure is typically used to retrieve
configuration settings from the `opensrf.xml` file.

An OpenSRF service spawns one or more children to actually do the work
requested by callers of the service. For every child process an OpenSRF service
spawns, the child process clones the parent environment and then each child
process runs the `child_init()` procedure (if any) defined in the OpenSRF
service to initialize any child-specific settings.

When the OpenSRF service kills a child process, it invokes the `child_exit()`
procedure (if any) to clean up any resources associated with the child process.
Similarly, when the OpenSRF service is stopped, it calls the `DESTROY()`
procedure to clean up any remaining resources.

=== Retrieving configuration settings ===

The settings for OpenSRF services are maintained in the `opensrf.xml` XML
configuration file. The structure of the XML document consists of a root
element `<opensrf>` containing two child elements:

 * `<default>` contains an `<apps>` element describing all
OpenSRF services running on this system, as well as any other arbitrary XML
descriptions required for global configuration purposes. For example, Evergreen
uses this section for email notification and inter-library patron privacy
settings.
 * `<hosts>` contains one element per host that participates in
this OpenSRF system. Each host element must include an `<activeapps>` element
that lists all of the services to start on this host when the system starts
up. Each host element can optionally override any of the default settings.
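The overall shape of the file can be sketched as follows (the element named
`localhost` stands in for your host name, and the service names are
placeholders; treat the exact contents as illustrative):

[source,xml]
--------------------------------------------------------------------------------
<opensrf>
    <default>
        <apps>
            <!-- one child element per OpenSRF service, holding its
                 default settings -->
        </apps>
        <!-- other global configuration sections -->
    </default>
    <hosts>
        <localhost>
            <activeapps>
                <appname>opensrf.settings</appname>
                <appname>opensrf.simple-text</appname>
            </activeapps>
            <!-- optional overrides of the default settings -->
        </localhost>
    </hosts>
</opensrf>
--------------------------------------------------------------------------------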


OpenSRF includes a service named `opensrf.settings` to provide distributed
cached access to the configuration settings with a simple API:

 * `opensrf.settings.default_config.get`: accepts zero arguments and returns
the complete set of default settings as a JSON document
 * `opensrf.settings.host_config.get`: accepts one argument (hostname) and
returns the complete set of settings, as customized for that hostname, as a
JSON document
 * `opensrf.settings.xpath.get`: accepts one argument (an
http://www.w3.org/TR/xpath/[XPath] expression) and returns the portion of
the configuration file that matches the expression as a JSON document

For example, to determine whether an Evergreen system uses the opt-in
support for sharing patron information between libraries, you could either
invoke the `opensrf.settings.default_config.get` method and parse the
JSON document to determine the value, or invoke the `opensrf.settings.xpath.get`
method with the XPath `/opensrf/default/share/user/opt_in` argument to
retrieve the value directly.

In practice, OpenSRF includes convenience libraries in all of its client
language bindings to simplify access to configuration values. C offers
`osrfConfig.c`, Perl offers `OpenSRF::Utils::SettingsClient`, Java offers
`org.opensrf.util.SettingsClient`, and Python offers `osrf.set`. These
libraries locally cache the configuration file to avoid network roundtrips for
every request and enable the developer to request specific values without
having to manually construct XPath expressions.

== Getting under the covers with OpenSRF ==

Now that you have seen that it truly is easy to create an OpenSRF service, we
can take a look at what is going on under the covers to make all of this work
for you.


=== Get on the messaging bus - safely ===

One of the core innovations of OpenSRF was to use the Extensible Messaging and
Presence Protocol (XMPP, more colloquially known as Jabber) as the messaging
bus that ties OpenSRF services together across servers. XMPP is an "XML
protocol for near-real-time messaging, presence, and request-response services"
(http://www.ietf.org/rfc/rfc3920.txt) that OpenSRF relies on to handle most of
the complexity of networked communications. OpenSRF achieves a measure of
security for its services through the use of public and private XMPP domains;
all OpenSRF services automatically register themselves with the private XMPP
domain, but only those services that register themselves with the public XMPP
domain can be invoked from public OpenSRF clients.

In a minimal OpenSRF deployment, two XMPP users named "router" connect to the
XMPP server, with one connected to the private XMPP domain and one connected to
the public XMPP domain. Similarly, two XMPP users named "opensrf" connect to
the XMPP server via the private and public XMPP domains. When an OpenSRF
service is started, it uses the "opensrf" XMPP user to advertise its
availability with the corresponding router on that XMPP domain; the XMPP server
automatically assigns a Jabber ID (JID) based on the client hostname to each
service's listener process and each connected drone process waiting to carry
out requests. When an OpenSRF router receives a request to invoke a method on a
given service, it connects the requester to the next available listener in the
list of registered listeners for that service.

The opensrf and router user names, passwords, and domain names, along with the
list of services that should be public, are contained in the `opensrf_core.xml`
configuration file.

=== Message body format ===

OpenSRF was an early adopter of JavaScript Object Notation (JSON).
While XMPP
is an XML protocol, the Evergreen developers recognized that the compactness of
the JSON format offered a significant reduction in bandwidth for the volume of
messages that would be generated in an application of that size. In addition,
the ability of languages such as JavaScript, Perl, and Python to generate
native objects with minimal parsing offered an attractive advantage over
invoking an XML parser for every message. Instead, the body of the XMPP message
is a simple JSON structure. For a simple request, like the following example
that simply reverses a string, it looks like significant overhead, but we get
the advantages of locale support and tracing the request from the requester
through the listener and responder (drone).

.A request for opensrf.simple-text.reverse("foobar"):
[source,xml]
--------------------------------------------------------------------------------
<message from='...' to='...'>
  <thread>1266781414.366573.12667814146288</thread>
  <body>
[
  {"__c":"osrfMessage","__p":
    {"threadTrace":"1","locale":"en-US","type":"REQUEST","payload":
      {"__c":"osrfMethod","__p":
        {"method":"opensrf.simple-text.reverse","params":["foobar"]}
      }
    }
  }
]
  </body>
</message>
--------------------------------------------------------------------------------

.A response from opensrf.simple-text.reverse("foobar")
[source,xml]
--------------------------------------------------------------------------------
<message from='...' to='...'>
  <thread>1266781414.366573.12667814146288</thread>
  <body>
[
  {"__c":"osrfMessage","__p":
    {"threadTrace":"1","payload":
      {"__c":"osrfResult","__p":
        {"status":"OK","content":"raboof","statusCode":200}
      },"type":"RESULT","locale":"en-US"}
  },
  {"__c":"osrfMessage","__p":
    {"threadTrace":"1","payload":
      {"__c":"osrfConnectStatus","__p":
        {"status":"Request Complete","statusCode":205}
      },"type":"STATUS","locale":"en-US"}
  }
]
  </body>
</message>
--------------------------------------------------------------------------------

The content of the `<body>` element of the OpenSRF request and result should
look familiar; they match the structure of the OpenSRF messages that we
previously dissected.

=== Registering OpenSRF methods in depth ===

Let's explore the call to `__PACKAGE__->register_method()`; most of the elements
of the hash are optional, and for the sake of brevity we omitted them in the
previous example. As we have seen in the results of the introspection call, a
verbose registration method call is recommended to better enable the internal
documentation. So, for the sake of completeness, here is the set of elements
that you should pass to `__PACKAGE__->register_method()`:

 * `method`: the name of the procedure in this module that is being registered as an OpenSRF method
 * `api_name`: the invocable name of the OpenSRF method; by convention, the OpenSRF service name is used as the prefix
 * `api_level`: (optional) can be used for versioning the methods to allow the use of a deprecated API, but in practical use is always 1
 * `argc`: (optional) the minimal number of arguments that the method expects
 * `stream`: (optional) if this argument is set to any value, then the method supports returning multiple values from a single call to subsequent requests, and OpenSRF automatically creates a corresponding method with ".atomic" appended to its name that returns the complete set of results in a single request; streaming methods are useful if you are returning hundreds of records and want to act on the results as they return
 * `signature`: (optional) a hash describing the method's purpose, arguments, and return value
 ** `desc`: a description of the method's purpose
 ** `params`: an array of hashes, each of which describes one of the method arguments
 *** `name`: the name of the argument
 *** `desc`: a description of the argument's purpose
 *** `type`: the data type of
the return value: for example, string, integer, boolean, number, array, or hash

== Evergreen-specific OpenSRF services ==

Evergreen is currently the primary showcase for the use of OpenSRF as an
application architecture. Evergreen 2.6.0 includes the following
set of OpenSRF services:

 * `open-ils.acq`: Supports tasks for managing the acquisitions process.
 * `open-ils.actor`: Supports common tasks for working with user accounts
   and libraries.
 * `open-ils.auth`: Supports authentication of Evergreen users.
 * `open-ils.auth_proxy`: Supports using external services such as LDAP
   directories to authenticate Evergreen users.
 * `open-ils.cat`: Supports common cataloging tasks, such as creating,
   modifying, and merging bibliographic and authority records.
 * `open-ils.circ`: Supports circulation tasks such as checking out items and
   calculating due dates.
 * `open-ils.collections`: Supports tasks to assist collections services for
   contacting users with outstanding fines above a certain threshold.
 * `open-ils.cstore`: Supports unrestricted access to Evergreen fieldmapper
   objects. This is a private service.
 * `open-ils.fielder`
 * `open-ils.justintime`: Supports tasks for determining if an action/trigger
   event is still valid.
 * `open-ils.pcrud`: Supports access to Evergreen fieldmapper objects,
   restricted by staff user permissions. This is a private service.
 * `open-ils.permacrud`: Supports access to Evergreen fieldmapper objects,
   restricted by staff user permissions. This is a private service.
 * `open-ils.reporter`: Supports the creation and scheduling of reports.
 * `open-ils.reporter-store`: Supports access to Evergreen fieldmapper objects
   for the reporting service. This is a private service.
 * `open-ils.resolver`: Supports tasks for integrating with an OpenURL resolver.
 * `open-ils.search`: Supports searching across bibliographic records,
   authority records, serial records, Z39.50 sources, and ZIP codes.
 * `open-ils.serial`: Supports tasks for serials management.
 * `open-ils.storage`: A deprecated method of providing access to Evergreen
   fieldmapper objects. Implemented in Perl, this service has largely been
   replaced by the much faster C-based `open-ils.cstore` service.
 * `open-ils.supercat`: Supports transforms of MARC records into other formats,
   such as MODS, as well as providing Atom and RSS feeds and SRU access.
 * `open-ils.trigger`: Supports event-based triggers for actions such as
   overdue and holds available notification emails.
 * `open-ils.url_verify`: Supports tasks for validating URLs.
 * `open-ils.vandelay`: Supports the import and export of batches of
   bibliographic and authority records.
 * `opensrf.settings`: Supports communicating `opensrf.xml` settings to other
   services.

Of some interest is that the `open-ils.reporter-store` and `open-ils.cstore`
services have identical implementations. Surfacing them as separate services
enables a deployer of Evergreen to ensure that the reporting service does not
interfere with the performance-critical `open-ils.cstore` service. One can also
direct the reporting service to a read-only database replica to, again, avoid
interference with `open-ils.cstore` which must write to the master database.

There are only a few significant services that are not built on OpenSRF, such
as the SIP and Z39.50 servers. These services implement
different protocols and build on existing daemon architectures (Simple2ZOOM
for Z39.50), but still rely on the other OpenSRF services to provide access
to the Evergreen data. The non-OpenSRF services are reasonably self-contained
and can be deployed on different servers to deliver the same sort of deployment
flexibility as OpenSRF services, but have the disadvantage of not being
integrated into the same configuration and control infrastructure as the
OpenSRF services.
- -== Evergreen after one year: reflections on OpenSRF == - -http://projectconifer.ca[Project Conifer] has been live on Evergreen for just -over a year now, and as one of the primary technologists I have had to work -closely with the OpenSRF infrastructure during that time. As such, I am in -a position to identify some of the strengths and weaknesses of OpenSRF based -on our experiences. - -=== Strengths of OpenSRF === - -As a service infrastructure, OpenSRF has been remarkably reliable. We initially -deployed Evergreen on an unreleased version of both OpenSRF and Evergreen due -to our requirements for some functionality that had not been delivered in a -stable release at that point in time, and despite this risky move we suffered -very little unplanned downtime in the opening months. On July 27, 2009 we -moved to a newer (but still unreleased) version of the OpenSRF and Evergreen -code, and began formally tracking our downtime. Since then, we have achieved -more than 99.9% availability - including scheduled downtime for maintenance. -This compares quite favourably to the maximum of 75% availability that we were -capable of achieving on our previous library system due to the nightly downtime -that was required for our backup process. The OpenSRF "maximum request" -configuration parameter for each service that kills off drone processes after -they have served a given number of requests provides a nice failsafe for -processes that might otherwise suffer from a memory leak or hung process. It -also helps that when we need to apply an update to a Perl service that is -running on multiple servers, we can apply the updated code, then restart the -service on one server at a time to avoid any downtime. - -As promised by the OpenSRF infrastructure, we have also been able to tune our -cluster of servers to provide better performance. 
For example, we were able to
-change the number of maximum concurrent processes for our database services
-when we noticed that we were seeing a performance bottleneck with database
-access. Making a configuration change go live simply requires you to restart
-the `opensrf.settings` service to pick up the configuration change, then
-restart the affected service on each of your servers. We were also able to
-turn off some of the less-used OpenSRF services, such as
-`open-ils.collections`, on one of our servers to devote more resources on that
-server to the more frequently used services and other performance-critical
-processes such as Apache.
-
-The support for logging and caching that is built into OpenSRF has been
-particularly helpful with the development of a custom service for SFX holdings
-integration into our catalogue. Once I understood how OpenSRF works, most of
-the effort required to build that SFX integration service was spent on figuring
-out how to properly invoke the SFX API to display human-readable holdings.
-Adding a new OpenSRF service and registering several new methods for the
-service was relatively easy. The support for directing log messages to syslog
-in OpenSRF has also been a boon for both development and debugging when
-problems arise in a cluster of five servers; we direct all of our log messages
-to a single server where we can inspect the complete set of messages for the
-entire cluster in context, rather than trying to piece them together across
-servers.
-
-=== Weaknesses ===
-
-The primary weakness of OpenSRF is its lack of formal or informal
-documentation.
There are many frequently asked questions on the
-Evergreen mailing lists and IRC channel that indicate that some of the people
-running Evergreen or trying to run Evergreen have not been able to find
-documentation to help them understand, even at a high level, how the OpenSRF
-Router and services work with XMPP and the Apache Web server to provide a
-working Evergreen system. Also, over the past few years several developers
-have indicated an interest in developing Ruby and PHP bindings for OpenSRF, but
-the efforts so far have resulted in no working code. Without a formal
-specification, clearly annotated examples, and unit tests for the major OpenSRF
-communication use cases that could be ported to the new language as a base set
-of expectations for a working binding, the hurdles for a developer new to
-OpenSRF are significant. As a result, for Evergreen integration efforts with
-popular frameworks like Drupal, Blacklight, and VuFind, the best practical
-option for a developer with limited time is database-level integration --
-which has the unfortunate side effect of being much more likely to break
-after an upgrade.
-
-In conjunction with the lack of documentation that makes it hard to get started
-with the framework, a disincentive for new developers to contribute to OpenSRF
-itself is the lack of integrated unit tests. For a developer to contribute a
-significant, non-obvious patch to OpenSRF, they need to manually run through
-various (again, undocumented) use cases to try to ensure that the patch
-introduced no unanticipated side effects. The same problems hold for Evergreen
-itself, although the
-http://git.evergreen-ils.org/?p=working/random.git;a=shortlog;h=refs/heads/collab/berick/constrictor[Constrictor] stress-testing
-framework offers a way of performing some automated system testing and
-performance testing.
-
-These weaknesses could be relatively easily overcome through contributions
-from people with the right skill sets.
This article arguably
-offers a small set of clear examples at both the networking and application
-layer of OpenSRF. A technical writer who understands OpenSRF could contribute a
-formal specification to the project. With a formal specification at their
-disposal, a quality assurance expert could create an automated test harness and
-a basic set of unit tests that could be incrementally extended to provide more
-coverage over time. If one or more continuous integration environments are set
-up to track the various OpenSRF branches of interest, then the OpenSRF
-community would have immediate feedback on build quality. Once a unit testing
-framework is in place, more developers might be willing to develop and
-contribute patches as they could sanity check their own code without an intense
-effort before exposing it to their peers.
-
-== Summary ==
-
-In this article, I attempted to provide both a high-level and detailed overview
-of how OpenSRF works, how to build and deploy new OpenSRF services, how to make
-requests to OpenSRF methods from OpenSRF clients or over HTTP, and why you
-should consider it a possible infrastructure for building your next
-high-performance system that requires the capability to scale out. In addition,
-I surveyed the Evergreen services built on OpenSRF and reflected on the
-strengths and weaknesses of the platform based on the experiences of Project
-Conifer after a year in production, with some thoughts about areas where the
-right application of skills could make a significant difference to the
-Evergreen and OpenSRF projects.
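Both the Perl client shown earlier and the Python client in the appendix call `opensrf.simple-text.statistics` and read `length` and `word_count` keys out of the returned hash. As a purely local illustration of the result shape those clients expect, the sketch below recomputes the same figures in Python, under the assumption that the service sums character lengths and whitespace-delimited word counts; consult the service source for its authoritative behavior.

```python
def statistics(strings):
    # Mimic the hash returned by opensrf.simple-text.statistics:
    # total character length and total whitespace-delimited word count.
    # (Assumption about the service's definition; this is not its code.)
    return {
        "length": sum(len(s) for s in strings),
        "word_count": sum(len(s.split()) for s in strings),
    }

many_strings = [
    "First I think I'll have breakfast",
    "Then I think that lunch would be nice",
    "And then seventy desserts to finish off the day",
]
result = statistics(many_strings)
print("Length:", result["length"])        # prints Length: 117
print("Word count:", result["word_count"])  # prints Word count: 23
```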
-
-== Appendix: Python client ==
-
-Following is a Python client that makes the same OpenSRF calls as the Perl
-client:
-
-[source, python]
--------------------------------------------------------------------------------
-include::example$python_client.py[]
--------------------------------------------------------------------------------
-
-NOTE: Python's `dnspython` module refuses to read `/etc/resolv.conf`, so to
-access hostnames that are not served up via DNS, such as the extremely common
-case of `localhost`, you may need to install a package like `dnsmasq` to act
-as a local DNS server for those hostnames.
-
-// vim: set syntax=asciidoc:
diff --git a/docs-antora/modules/development/pages/introduction.adoc b/docs-antora/modules/development/pages/introduction.adoc
deleted file mode 100644
index 8fd3a0a5de..0000000000
--- a/docs-antora/modules/development/pages/introduction.adoc
+++ /dev/null
@@ -1,5 +0,0 @@
-= Introduction =
-:toc:
-Developers can use this part to learn more about the programming languages,
-communication protocols and standards used in Evergreen.
-
diff --git a/docs-antora/modules/development/pages/perl_client.pl b/docs-antora/modules/development/pages/perl_client.pl
deleted file mode 100644
index 7a47232242..0000000000
--- a/docs-antora/modules/development/pages/perl_client.pl
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/perl
-use strict;
-use OpenSRF::AppSession;
-use OpenSRF::System;
-use Data::Dumper;
-
-OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml');
-
-my $session = OpenSRF::AppSession->create("opensrf.simple-text");
-
-print "substring: Accepts a string and a number as input, returns a string\n";
-my $request = $session->request("opensrf.simple-text.substring", "foobar", 3);
-
-my $response;
-while ($response = $request->recv()) {
-    print "Substring: " . $response->content .
"\n\n";
-}
-
-print "split: Accepts two strings as input, returns an array of strings\n";
-$request = $session->request("opensrf.simple-text.split", "This is a test", " ")->gather();
-my $output = "Split: [";
-foreach my $element (@$request) {
-    $output .= "$element, ";
-}
-$output =~ s/, $/]/;
-print $output . "\n\n";
-
-print "statistics: Accepts an array of strings as input, returns a hash\n";
-my $many_strings = [
-    "First I think I'll have breakfast",
-    "Then I think that lunch would be nice",
-    "And then seventy desserts to finish off the day"
-];
-
-$request = $session->request("opensrf.simple-text.statistics", $many_strings)->gather();
-print "Length: " . $request->{'length'} . "\n";
-print "Word count: " . $request->{'word_count'} . "\n";
-
-$session->disconnect();
-
diff --git a/docs-antora/modules/development/pages/pgtap.adoc b/docs-antora/modules/development/pages/pgtap.adoc
deleted file mode 100644
index 0b8a15677c..0000000000
--- a/docs-antora/modules/development/pages/pgtap.adoc
+++ /dev/null
@@ -1,37 +0,0 @@
-= Developing with pgTAP tests =
-:toc:
-
-== Setting up pgTAP on your development server ==
-
-Currently, Evergreen pgTAP tests expect a version of pgTAP (0.93)
-that is not yet available in the packages for most Linux distributions.
-Therefore, you will have to install pgTAP from source as follows:
-
-. Download, make, and install pgTAP on your database server. pgTAP can
-  be downloaded from http://pgxn.org/dist/pgtap/ and the instructions
-  for building and installing the extension are available from
-  http://pgtap.org/documentation.html
-
-. Create the pgTAP extension in your Evergreen database. 
Using `psql`, - connect to your Evergreen database and then issue the command: -+ -[source,sql] ------------------------------------------------------------------------------- -CREATE EXTENSION pgtap; ------------------------------------------------------------------------------- - -== Running pgTAP tests == -The pgTAP tests can be found in subdirectories of `Open-ILS/src/sql/Pg/` -as follows: - -* `t`: contains pgTAP unit tests that can be run on a freshly installed - Evergreen database -* `live_t`: contains pgTAP unit tests meant to be run on an Evergreen - database that also has had the "concerto" sample data loaded on it - -To run the pgTAP unit and regression tests, use the `pg_prove` command. -For example, from the Evergreen source directory, you can issue the -command: -`pg_prove -U evergreen Open-ILS/src/sql/Pg/t Open-ILS/src/sql/Pg/t/regress` - - diff --git a/docs-antora/modules/development/pages/support_scripts.adoc b/docs-antora/modules/development/pages/support_scripts.adoc deleted file mode 100644 index 04e993cb36..0000000000 --- a/docs-antora/modules/development/pages/support_scripts.adoc +++ /dev/null @@ -1,401 +0,0 @@ -= Support Scripts = -:toc: - -Various scripts are included with Evergreen in the `/openils/bin/` directory -(and in the source code in `Open-ILS/src/support-scripts` and -`Open-ILS/src/extras`). Some of them are used during -the installation process, such as `eg_db_config`, while others are usually -run as cron jobs for routine maintenance, such as `fine_generator.pl` and -`hold_targeter.pl`. Others are useful for less frequent needs, such as the -scripts for importing/exporting MARC records. You may explore these scripts -and adapt them for your local needs. You are also welcome to share your -improvements or ask any questions on the -http://evergreen-ils.org/communicate/[Evergreen IRC channel or email lists]. - -Here is a summary of the most commonly used scripts. The script name links -to more thorough documentation, if available. 
-
- * action_trigger_aggregator.pl
-   -- Groups together event output for already processed events. Useful for
-   creating files that contain data from a group of events, such as a CSV
-   file with all the overdue data for one day.
- * xref:admin:actiontriggers_process.adoc#processing_action_triggers[action_trigger_runner.pl]
-   -- Useful for creating events for specified hooks and running pending events
- * authority_authority_linker.pl
-   -- Links reference headings in authority records to main entry headings
-   in other authority records. Should be run at least once a day (only for
-   changed records).
- * xref:#authority_control_fields[authority_control_fields.pl]
-   -- Links bibliographic records to the best matching authority record.
-   Should be run at least once a day (only for changed records).
-   You can accomplish this by running _authority_control_fields.pl --days-back=1_
- * autogen.sh
-   -- Generates web files used by the OPAC, especially files related to
-   organization unit hierarchy, fieldmapper IDL, locales selection,
-   facet definitions, compressed JS files and related cache key
- * clark-kent.pl
-   -- Used to start and stop the reporter (which runs scheduled reports)
- * xref:installation:server_installation.adoc#creating_the_evergreen_database[eg_db_config]
-   -- Creates database and schema, updates config files, sets Evergreen
-   administrator username and password
- * fine_generator.pl
- * hold_targeter.pl
- * xref:#importing_authority_records_from_command_line[marc2are.pl]
-   -- Converts authority records from MARC format to Evergreen objects
-   suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
- * marc2bre.pl
-   -- Converts bibliographic records from MARC format to Evergreen objects
-   suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
- * marc2sre.pl
-   -- Converts serial records from MARC format to Evergreen objects
-   suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
- * xref:#marc_export[marc_export]
-   -- 
Exports authority, bibliographic, and serial holdings records into - any of these formats: USMARC, UNIMARC, XML, BRE, ARE - * osrf_control - -- Used to start, stop and send signals to OpenSRF services - * parallel_pg_loader.pl - -- Uses the output of marc2bre.pl (or similar tools) to generate the SQL - for importing records into Evergreen in a parallel fashion - -[#authority_control_fields] - -== authority_control_fields: Connecting Bibliographic and Authority records == - -indexterm:[authority control] - -This script matches headings in bibliographic records to the appropriate -authority records. When it finds a match, it will add a subfield 0 to the -matching bibliographic field. - -Here is how the matching works: - -[options="header",cols="1,1,3"] -|========================================================= -|Bibliographic field|Authority field it matches|Subfields that it examines - -|100|100|a,b,c,d,f,g,j,k,l,n,p,q,t,u -|110|110|a,b,c,d,f,g,k,l,n,p,t,u -|111|111|a,c,d,e,f,g,j,k,l,n,p,q,t,u -|130|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t -|600|100|a,b,c,d,f,g,h,j,k,l,m,n,o,p,q,r,s,t,v,x,y,z -|610|110|a,b,c,d,f,g,h,k,l,m,n,o,p,r,s,t,v,w,x,y,z -|611|111|a,c,d,e,f,g,h,j,k,l,n,p,q,s,t,v,x,y,z -|630|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t,v,x,y,z -|648|148|a,v,x,y,z -|650|150|a,b,v,x,y,z -|651|151|a,v,x,y,z -|655|155|a,v,x,y,z -|700|100|a,b,c,d,f,g,j,k,l,n,p,q,t,u -|710|110|a,b,c,d,f,g,k,l,n,p,t,u -|711|111|a,c,d,e,f,g,j,k,l,n,p,q,t,u -|730|130|a,d,f,g,h,j,k,m,n,o,p,r,s,t -|751|151|a,v,x,y,z -|800|100|a,b,c,d,e,f,g,j,k,l,n,p,q,t,u,4 -|830|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t -|========================================================= - - -[#marc_export] - -== marc_export: Exporting Bibliographic Records into MARC files == - -indexterm:[marc_export] -indexterm:[MARC records,exporting,using the command line] - -The following procedure explains how to export Evergreen bibliographic -records into MARC files using the *marc_export* support script. 
All steps
-should be performed by the `opensrf` user from your Evergreen server.
-
-[NOTE]
-Processing time for exporting records depends on several factors, such as
-the number of records you are exporting. If you are exporting a large
-number of records, it is recommended that you divide the export ID file
-(records.txt) into smaller files of a manageable size.
-
- . Create a text file list of the Bibliographic record IDs you would like
-to export from Evergreen. One way to do this is using SQL:
-+
-[source,sql]
-----
-SELECT DISTINCT bre.id FROM biblio.record_entry AS bre
-    JOIN asset.call_number AS acn ON acn.record = bre.id
-    WHERE bre.deleted='false' AND owning_lib=101 \g /home/opensrf/records.txt
-----
-+
-This query creates a file called `records.txt` containing a column of the
-distinct IDs of bibliographic records with call numbers owned by the
-organizational unit with ID 101.
-
- . Navigate to the support-scripts folder
-+
-----
-cd /home/opensrf/Evergreen-ILS*/Open-ILS/src/support-scripts/
-----
-
- . Run *marc_export*, using the ID file you created in step 1 to define which
-   files to export. The following example exports the records into MARCXML format.
-+
-----
-cat /home/opensrf/records.txt | ./marc_export --store -i -c /openils/conf/opensrf_core.xml \
-    -x /openils/conf/fm_IDL.xml -f XML --timeout 5 > exported_files.xml
-----
-
-[NOTE]
-====================
-`marc_export` does not output progress as it executes.
-====================
-
-=== Options ===
-
-The *marc_export* support script includes several options. You can find a
-complete list by running `./marc_export -h`. A few key options are also
-listed below:
-
-==== --descendants and --library ====
-
-The `marc_export` script has two related options, `--descendants` and
-`--library`. Both options take the shortname of an organizational unit as
-their argument.
-
-The `--library` option will export records with holdings at the specified
-organizational unit only.
By default, this only includes physical holdings,
-not electronic ones (also known as located URIs).
-
-The `--descendants` option works much like the `--library` option
-except that it is aware of the org. tree and will export records with
-holdings at the specified organizational unit and all of its descendants.
-This is handy if you want to export the records for all of the branches
-of a system. You can do that by specifying this option and the system's
-shortname, instead of specifying multiple `--library` options for each branch.
-
-Both the `--library` and `--descendants` options can be repeated.
-All of the specified org. units and their descendants will be included
-in the output. You can also combine `--library` and `--descendants`
-options when necessary.
-
-==== --items ====
-
-The `--items` option will add an 852 field for every relevant item to the MARC
-record. This 852 field includes the following information:
-
-[options="header",cols="2,3"]
-|===================================
-|Subfield |Contents
-|$b (occurrence 1) |Call number owning library shortname
-|$b (occurrence 2) |Item circulating library shortname
-|$c |Shelving location
-|$g |Circulation modifier
-|$j |Call number
-|$k |Call number prefix
-|$m |Call number suffix
-|$p |Barcode
-|$s |Status
-|$t |Copy number
-|$x |Miscellaneous item information
-|$y |Price
-|===================================
-
-
-==== --since ====
-
-You can use the `--since` option to export records modified after a certain date and time.
-
-==== --store ====
-
-By default, marc_export will use the reporter storage service, which should
-work in most cases. But if you have a separate reporter database and you
-know you want to talk directly to your main production database, then you
-can set the `--store` option to `cstore` or `storage`.
-
-==== --uris ====
-The `--uris` option (short form: `-u`) allows you to export records with
-located URIs (i.e. electronic resources).
When used by itself, it will export
-only records that have located URIs. When used in conjunction with `--items`,
-it will add records with located URIs but no items/copies to the output.
-If combined with a `--library` or `--descendants` option, this option will
-limit its output to those records with URIs at the designated libraries. The
-best way to use this option is in combination with the `--items` and one of the
-`--library` or `--descendants` options to export *all* of a library's
-holdings, both physical and electronic.
-
-[#pingest_pl]
-
-== Parallel Ingest with pingest.pl ==
-
-indexterm:[pingest.pl]
-indexterm:[MARC records,importing,using the command line]
-
-A program named pingest.pl allows fast bibliographic record
-ingest. It performs ingest in parallel so that multiple batches can
-be done simultaneously. It operates by splitting the records to be
-ingested into batches and running all of the ingest methods on each
-batch. You may pass in options to control how many batches are run at
-the same time, how many records there are per batch, and which ingest
-operations to skip.
-
-NOTE: The browse ingest is presently done in a single process over all
-of the input records as it cannot run in parallel with itself. It
-does, however, run in parallel with the other ingests.
-
-=== Command Line Options ===
-
-pingest.pl accepts the following command line options:
-
---host::
-    The server where PostgreSQL runs (either host name or IP address).
-    The default is read from the PGHOST environment variable or
-    "localhost."
-
---port::
-    The port that PostgreSQL listens to on the host. The default is read
-    from the PGPORT environment variable or 5432.
-
---db::
-    The database to connect to on the host. The default is read from
-    the PGDATABASE environment variable or "evergreen."
-
---user::
-    The username for database connections. The default is read from
-    the PGUSER environment variable or "evergreen."
- ---password:: - The password for database connections. The default is read from - the PGPASSWORD environment variable or "evergreen." - ---batch-size:: - Number of records to process per batch. The default is 10,000. - ---max-child:: - Max number of worker processes (i.e. the number of batches to - process simultaneously). The default is 8. - ---skip-browse:: ---skip-attrs:: ---skip-search:: ---skip-facets:: ---skip-display:: - Skip the selected reingest component. - ---attr:: - This option allows the user to specify which record attributes to reingest. -It can be used one or more times to specify one or more attributes to -ingest. It can be omitted to reingest all record attributes. This -option is ignored if the `--skip-attrs` option is used. -+ -The `--attr` option is most useful after doing something specific that -requires only a partial ingest of records. For instance, if you add a -new language to the `config.coded_value_map` table, you will want to -reingest the `item_lang` attribute on all of your records. The -following command line will do that, and only that, ingest: -+ ----- -$ /openils/bin/pingest.pl --skip-browse --skip-search --skip-facets \ - --skip-display --attr=item_lang ----- - ---rebuild-rmsr:: - This option will rebuild the `reporter.materialized_simple_record` -(rmsr) table after the ingests are complete. -+ -This option might prove useful if you want to rebuild the table as -part of a larger reingest. 
If all you wish to do is to rebuild the
-rmsr table, then it would be just as simple to connect to the database
-server and run the following SQL:
-+
-[source,sql]
-----
-SELECT reporter.refresh_materialized_simple_record();
-----
-
-
-
-
-[#importing_authority_records_from_command_line]
-== Importing Authority Records from Command Line ==
-
-indexterm:[marc2are.pl]
-indexterm:[pg_loader.pl]
-indexterm:[MARC records,importing,using the command line]
-
-The major advantages of the command line approach are its speed and its
-convenience for system administrators who can perform bulk loads of
-authority records in a controlled environment. For alternate instructions,
-see the cataloging manual.
-
- . Run *marc2are.pl* against the authority records, specifying the user
-name, password, and MARC type (USMARC or XML). Use `STDOUT` redirection to
-either pipe the output directly into the next command or into an output
-file for inspection. For example, to process a file with authority records
-in MARCXML format named `auth_small.xml` using the default user name and
-password, and directing the output into a file named `auth.are`:
-+
-----
-cd Open-ILS/src/extras/import/
-perl marc2are.pl --user admin --pass open-ils --marctype XML auth_small.xml > auth.are
-----
-+
-[NOTE]
-The MARC type will default to USMARC if the `--marctype` option is not specified.
-
- . Run *parallel_pg_loader.pl* to generate the SQL necessary for importing the
-authority records into your system. This script will create files in your
-current directory with filenames like `pg_loader-output.are.sql` and
-`pg_loader-output.sql` (which runs the previous SQL file). To continue with the
-previous example by processing our new `auth.are` file:
-+
-----
-cd Open-ILS/src/extras/import/
-perl parallel_pg_loader.pl --auto are --order are auth.are
-----
-+
-[TIP]
-To save time for very large batches of records, you could simply pipe the
-output of *marc2are.pl* directly into *parallel_pg_loader.pl*.
-
- . 
Load the authority records from the SQL file that you generated in the
-last step into your Evergreen database using the psql tool. Assuming the
-default user name, host name, and database name for an Evergreen instance,
-that command looks like:
-+
-----
-psql -U evergreen -h localhost -d evergreen -f pg_loader-output.sql
-----
-
-== Juvenile-to-adult batch script ==
-
-The `juv_to_adult.srfsh` batch script is responsible for toggling a patron
-from juvenile to adult. It should be set up as a cron job.
-
-This script changes patrons to adult when they reach the age value set in the
-library setting named "Juvenile Age Threshold" (`global.juvenile_age_threshold`).
-When no library setting value is present at a given patron's home library, the
-value passed in to the script will be used as a default.
-
-== MARC Stream Importer ==
-
-indexterm:[MARC records,importing,using the command line]
-
-The MARC Stream Importer can import authority records or bibliographic records.
-A single running instance of the script can import either type of record, based
-on the record leader.
-
-This support script has its own configuration file, _marc_stream_importer.conf_,
-which includes settings related to logs, ports, users, and access control.
-
-_marc_stream_importer.pl_ will typically be located in the
-_/openils/bin_ directory. _marc_stream_importer.conf_ will typically be located
-in _/openils/conf_.
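The leader-based dispatch mentioned above hinges on MARC leader position 06, where the MARC21 value `z` marks an authority record. The sketch below illustrates that general rule in Python; it is not the actual logic from _marc_stream_importer.pl_, and the sample leader strings are made up for illustration.

```python
def record_kind(leader):
    # MARC21 leader position 06 ("type of record"): 'z' denotes an
    # authority record; values such as 'a' (language material) denote
    # bibliographic records. Illustrative only, not the importer's code.
    if len(leader) < 7:
        raise ValueError("truncated MARC leader")
    return "authority" if leader[6] == "z" else "bibliographic"

print(record_kind("00350cz  a2200157n  4500"))   # prints authority
print(record_kind("01142cam a2200301 a 4500"))   # prints bibliographic
```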
- -The importer is even more flexible than the staff client import, including the -following options: - - * _--bib-auto-overlay-exact_ and _--auth-auto-overlay-exact_: overlay/merge on -exact 901c matches - * _--bib-auto-overlay-1match_ and _--auth-auto-overlay-1match_: overlay/merge -when exactly one match is found - * _--bib-auto-overlay-best-match_ and _--auth-auto-overlay-best-match_: -overlay/merge on best match - * _--bib-import-no-match_ and _--auth-import-no-match_: import when no match -is found - -One advantage to using this tool instead of the staff client Import interface -is that the MARC Stream Importer can load a group of files at once. - diff --git a/docs-antora/modules/development/pages/updating_translations_launchpad.adoc b/docs-antora/modules/development/pages/updating_translations_launchpad.adoc deleted file mode 100644 index 9b177395f9..0000000000 --- a/docs-antora/modules/development/pages/updating_translations_launchpad.adoc +++ /dev/null @@ -1,52 +0,0 @@ -= Updating translations using Launchpad = -:toc: - -This document describes how to update the translations in an Evergreen branch -by pulling them from Launchpad, as well as update the files to be translated -in Launchpad by updating the POT files in the Evergreen master branch. - -== Prerequisites == -You must install all of the Python prerequisites required for building -translations, per -http://evergreen-ils.org/dokuwiki/doku.php?id=evergreen-admin:customizations:i18n - -* https://bitbucket.org/izi/polib/wiki/Home[polib] -* http://translate.sourceforge.net[translate-toolkit] -* http://pypi.python.org/pypi/python-Levenshtein/[levenshtein] -* http://pypi.python.org/pypi/setuptools[setuptools] -* http://pypi.python.org/pypi/simplejson/[simplejson] -* http://lxml.de/[lxml] - -== Updating the translations == - -. 
Check out the latest translations from Launchpad by branching the Bazaar -repository: -+ -[source,bash] ------------------------------------------------------------------------------- -bzr branch lp:~denials/evergreen/translation-export ------------------------------------------------------------------------------- -+ -This creates a directory called "translation-export". -+ -. Ensure you have an updated Evergreen release branch. -. Run the `build/i18n/scripts/update_pofiles` script to copy the translations - into the right place and avoid any updates that are purely metadata (dates - generated, etc). -. Commit the lot! And backport to whatever release branches need the updates. -. Build updated POT files: -+ -[source,bash] ------------------------------------------------------------------------------- -cd build/i18n -make newpot ------------------------------------------------------------------------------- -+ -This will extract all of the strings from the latest version of the files in -Evergreen. -+ -. (This part needs automation): Then, via the magic of `git diff` and `git add`, -go through all of the changed files and determine which ones actually have -string changes. Recommended approach is to re-run `git diff` after each -`git add`. -. Commit the updated POT files and backport to the pertinent release branches. 
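The manual `git diff`/`git add` pass in the final steps, which the text above flags as needing automation, amounts to asking whether two versions of a POT file differ in anything besides generated metadata. A rough sketch of that check follows; the header names ignored here (`POT-Creation-Date`, `PO-Revision-Date`) are the usual gettext timestamp fields, and you may need to extend the list for your files.

```python
# Metadata lines whose changes are not worth committing (assumed set;
# extend as needed for your POT files).
METADATA_PREFIXES = ('"POT-Creation-Date:', '"PO-Revision-Date:')

def has_string_changes(old_text, new_text):
    # Compare two POT/PO files while ignoring lines that only carry
    # generated timestamps; a difference in any other line is a real
    # string change worth committing.
    def strip_meta(text):
        return [line for line in text.splitlines()
                if not line.lstrip().startswith(METADATA_PREFIXES)]
    return strip_meta(old_text) != strip_meta(new_text)

old = 'msgid "Hello"\n"POT-Creation-Date: 2011-01-01 10:00+0000\\n"\n'
new = 'msgid "Hello"\n"POT-Creation-Date: 2011-02-01 10:00+0000\\n"\n'
print(has_string_changes(old, new))   # prints False: only the date moved
```

Running this over each changed POT file would tell you which files to `git add` and which to discard.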
diff --git a/docs-antora/modules/installation/_attributes.adoc b/docs-antora/modules/installation/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/installation/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/installation/nav.adoc b/docs-antora/modules/installation/nav.adoc deleted file mode 100644 index ec0db099ec..0000000000 --- a/docs-antora/modules/installation/nav.adoc +++ /dev/null @@ -1,6 +0,0 @@ -* xref:installation:introduction.adoc[Software Installation] -** xref:installation:system_requirements.adoc[System Requirements] -** xref:installation:server_installation.adoc[Installing the Evergreen server] -** xref:installation:server_upgrade.adoc[Upgrading the Evergreen Server] -** xref:installation:edi_setup.adoc[Setting Up EDI Acquisitions] - diff --git a/docs-antora/modules/installation/pages/_attributes.adoc b/docs-antora/modules/installation/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/installation/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/installation/pages/edi_setup.adoc b/docs-antora/modules/installation/pages/edi_setup.adoc deleted file mode 100644 index 9b5bed17f4..0000000000 --- a/docs-antora/modules/installation/pages/edi_setup.adoc +++ /dev/null @@ -1,202 +0,0 @@ -= Setting Up EDI Acquisitions = -:toc: - -== Introduction == - -Electronic Data Interchange (EDI) is used to exchange information between -participating vendors and Evergreen. This chapter contains technical -information for installation and configuration of the components necessary -to run EDI Acquisitions for Evergreen. 
-
-== Installation ==
-
-=== Install EDI Translator ===
-
-The EDI Translator is used to convert data into EDI format. It runs
-on localhost and listens on port 9191 by default. This is controlled via
-the edi_webrick.cnf file located in the edi_translator directory. It should
-not be necessary to edit this configuration if you install EDI Translator
-on the same server used for running Action/Trigger events.
-
-[NOTE]
-If you are running Evergreen with a multi-server configuration, make sure
-to install EDI Translator on the same server used for Action/Trigger event
-generation.
-
-.Steps for Installing
-
-1. As the *opensrf* user, copy the EDI Translator code found in
-   Open-ILS/src/edi_translator to somewhere accessible
-   (for example, /openils/var/edi):
-+
-[source, bash]
---------------------------------------------------
-cp -r Open-ILS/src/edi_translator /openils/var/edi
---------------------------------------------------
-2. Navigate to where you saved the code to begin the next step:
-+
-[source, bash]
--------------------
-cd /openils/var/edi
--------------------
-3. Next, as the *root* user (or a user with sudo rights), install the
-   dependencies via "install.sh". This will perform some apt-get routines
-   to install the code needed for the EDI Translator to function.
-   (Note: subversion must be installed first)
-+
-[source, bash]
------------
-./install.sh
------------
-4. Now, we're ready to start "edi_webrick.bash", the script that calls
-   the Ruby code to translate EDI. This script needs to be started in
-   order for EDI to function, so please take appropriate measures to ensure
-   it starts following reboots/upgrades/etc. As the *opensrf* user:
-+
-[source, bash]
------------------
-./edi_webrick.bash
------------------
-5. You can check to see if the EDI Translator is running.
-  * Using the command "ps aux | grep edi" should show you something similar
-    to the following if the script is running properly:
-+
-[source, bash]
-------------------------------------------------------------------------------------------
-root     30349  0.8  0.1  52620 10824 pts/0    S    13:04   0:00 ruby ./edi_webrick.rb
-------------------------------------------------------------------------------------------
-  * To shut down the EDI Translator you can use something like pkill (assuming
-    no other ruby processes are running on that server):
-+
-[source, bash]
------------------------
-kill -INT $(pgrep ruby)
------------------------
-
-=== Install EDI Scripts ===
-
-The EDI scripts are "edi_pusher.pl" and "edi_fetcher.pl" and are used to
-"push" and "fetch" EDI messages for configured EDI accounts.
-
-1. As the *opensrf* user, copy edi_pusher.pl and edi_fetcher.pl from
-   Open-ILS/src/support-scripts into /openils/bin:
-+
-[source, bash]
---------------------------------------------------
-cp Open-ILS/src/support-scripts/edi_pusher.pl /openils/bin
-cp Open-ILS/src/support-scripts/edi_fetcher.pl /openils/bin
---------------------------------------------------
-2. Set up the edi_pusher.pl and edi_fetcher.pl scripts to run as cron jobs
-   in order to regularly push and receive EDI messages.
-  * Add the following entries to the opensrf user's crontab:
-+
-[source, bash]
------------------------------------------------------------------------
-10 * * * * cd /openils/bin && /usr/bin/perl ./edi_pusher.pl > /dev/null
-0 1 * * * cd /openils/bin && /usr/bin/perl ./edi_fetcher.pl > /dev/null
------------------------------------------------------------------------
-  * The example for edi_pusher.pl sets the script to run at
-    10 minutes past the hour, every hour.
-  * The example for edi_fetcher.pl sets the script to run at
-    1 AM every night.
-
-[NOTE]
-You may choose to run the EDI scripts more or less frequently based on the
-necessary response times from your vendors.
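Step 4 above notes that `edi_webrick.bash` must keep running across reboots but leaves the mechanism open. One low-tech option, sketched here under the assumption that you used the `/openils/var/edi` location from step 1, is an `@reboot` crontab entry for the *opensrf* user (a proper init script or systemd unit would be the more robust choice):

```shell
# opensrf user's crontab (edit with: crontab -e)
@reboot cd /openils/var/edi && ./edi_webrick.bash
```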
-
-== Configuration ==
-
-=== Configuring Providers ===
-
-Look in Administration -> Acquisitions Administration -> Providers
-
-[options="header"]
-|======================================================================================
-|Column               |Description/Notes
-|Provider Name        |A unique name to identify the provider
-|Code                 |A unique code to identify the provider
-|Owner                |The org unit that will "own" the provider
-|Currency             |The currency format the provider accepts
-|Active               |Whether or not the provider is "active" for use
-|Default Claim Policy |??
-|EDI Default          |The default "EDI Account" to use (see EDI Accounts Configuration)
-|Email                |The email address for the provider
-|Fax Phone            |A fax number for the provider
-|Holdings Tag         |The holdings tag to be utilized (usually 852, for Evergreen)
-|Phone                |A phone number for the provider
-|Prepayment Required  |Whether or not prepayment is required
-|SAN                  |The vendor-provided, org-unit-specific SAN code
-|URL                  |The vendor website
-|======================================================================================
-
-=== Configuring EDI Accounts ===
-
-Look in Administration -> Acquisitions Administration -> EDI Accounts
-
-[options="header"]
-|===============================================================================================================
-|Column               |Description/Notes
-|Label                |A unique name to identify the EDI account
-|Host                 |FTP/SFTP/SSH hostname - vendor assigned
-|Username             |FTP/SFTP/SSH username - vendor assigned
-|Password             |FTP/SFTP/SSH password - vendor assigned
-|Account              |Vendor-assigned account number associated with your organization
-|Owner                |The organizational unit that owns the EDI account
-|Last Activity        |The date of last activity for the account
-|Provider             |This is a link to one of the "codes" in the "Providers" interface
-|Path                 |The path on the vendor's server where Evergreen will send its outgoing .epo files
-|Incoming Directory   |The path on the vendor's server where "incoming" .epo files are
stored
-|Vendor Account Number|Vendor-assigned account number.
-|Vendor Assigned Code |Usually a sub-account designation. Can be used with or without the Vendor Account Number.
-|===============================================================================================================
-
-=== Configuring Organizational Unit SAN code ===
-
-Look in Administration -> Server Administration -> Organizational Units
-
-This interface allows a library to configure its SAN, alongside
-its address, phone, etc.
-
-== Troubleshooting ==
-
-=== PO JEDI Template Issues ===
-
-Some libraries may run into issues with the action/trigger (PO JEDI).
-The template has to be modified to handle different vendor codes that
-may be used. For instance, using "ingra" instead of INGRAM
-may cause a problem because the vendor codes are hardcoded in the
-template. The following is an example of one modification that seems to work.
-
-.Original template has:
-
-[source, bash]
-----------------------------------------------------------------------------------------------------------------------------------------------
-"buyer":[
-    [% IF target.provider.edi_default.vendcode && (target.provider.code == 'BT' || target.provider.name.match('(?i)^BAKER & TAYLOR')) -%]
-        {"id-qualifier": 91, "id":"[% target.ordering_agency.mailing_address.san _ ' ' _ target.provider.edi_default.vendcode %]"}
-    [%- ELSIF target.provider.edi_default.vendcode && target.provider.code == 'INGRAM' -%]
-        {"id":"[% target.ordering_agency.mailing_address.san %]"},
-        {"id-qualifier": 91, "id":"[% target.provider.edi_default.vendcode %]"}
-    [%- ELSE -%]
-        {"id":"[% target.ordering_agency.mailing_address.san %]"}
-    [%- END -%]
-],
-----------------------------------------------------------------------------------------------------------------------------------------------
-
-.Modified template has the following where it matches on provider SAN instead of code:
-
-[source, bash]
-------------------------------------------------------------------------------------------------------------------------------------------
-"buyer":[
-    [% IF target.provider.edi_default.vendcode && (target.provider.san == '1556150') -%]
-        {"id-qualifier": 91, "id":"[% target.ordering_agency.mailing_address.san _ ' ' _ target.provider.edi_default.vendcode %]"}
-    [%- ELSIF target.provider.edi_default.vendcode && (target.provider.san == '1697978') -%]
-        {"id":"[% target.ordering_agency.mailing_address.san %]"},
-        {"id-qualifier": 91, "id":"[% target.provider.edi_default.vendcode %]"}
-    [%- ELSE -%]
-        {"id":"[% target.ordering_agency.mailing_address.san %]"}
-    [%- END -%]
-],
-------------------------------------------------------------------------------------------------------------------------------------------
-
diff --git a/docs-antora/modules/installation/pages/introduction.adoc b/docs-antora/modules/installation/pages/introduction.adoc
deleted file mode 100644
index c2e81fa90d..0000000000
--- a/docs-antora/modules/installation/pages/introduction.adoc
+++ /dev/null
@@ -1,4 +0,0 @@
-= Introduction =
-
-This part will guide you through the steps of installing or
-upgrading your Evergreen system. It is intended for system administrators.
diff --git a/docs-antora/modules/installation/pages/server_installation.adoc b/docs-antora/modules/installation/pages/server_installation.adoc deleted file mode 100644 index 44607b80b4..0000000000 --- a/docs-antora/modules/installation/pages/server_installation.adoc +++ /dev/null @@ -1,642 +0,0 @@ -= Installing the Evergreen server = -:toc: - -== Preamble: referenced user accounts == - -In subsequent sections, we will refer to a number of different accounts, as -follows: - - * Linux user accounts: - ** The *user* Linux account is the account that you use to log onto the - Linux system as a regular user. - ** The *root* Linux account is an account that has system administrator - privileges. On Debian you can switch to this account from - your *user* account by issuing the `su -` command and entering the - password for the *root* account when prompted. On Ubuntu you can switch - to this account from your *user* account using the `sudo su -` command - and entering the password for your *user* account when prompted. - ** The *opensrf* Linux account is an account that you create when installing - OpenSRF. You can switch to this account from the *root* account by - issuing the `su - opensrf` command. - ** The *postgres* Linux account is created automatically when you install - the PostgreSQL database server. You can switch to this account from the - *root* account by issuing the `su - postgres` command. - * PostgreSQL user accounts: - ** The *evergreen* PostgreSQL account is a superuser account that you will - create to connect to the PostgreSQL database server. - * Evergreen administrator account: - ** The *egadmin* Evergreen account is an administrator account for - Evergreen that you will use to test connectivity and configure your - Evergreen instance. 
- -== Preamble: developer instructions == - -[NOTE] -Skip this section if you are using an official release tarball downloaded -from http://evergreen-ils.org/egdownloads - -Developers working directly with the source code from the Git repository, -rather than an official release tarball, must perform one step before they -can proceed with the `./configure` step. - -As the *user* Linux account, issue the following command in the Evergreen -source directory to generate the configure script and Makefiles: - -[source, bash] ------------------------------------------------------------------------------- -autoreconf -i ------------------------------------------------------------------------------- - -== Installing prerequisites == - - * **PostgreSQL**: The minimum supported version is 9.6. - * **Linux**: Evergreen has been tested on - Debian Buster (10), - Debian Stretch (9), - Debian Jessie (8), - Ubuntu Bionic Beaver (18.04), - and Ubuntu Xenial Xerus (16.04). - If you are running an older version of these distributions, you may want - to upgrade before upgrading Evergreen. For instructions on upgrading these - distributions, visit the Debian or Ubuntu websites. - * **OpenSRF**: The minimum supported version of OpenSRF is 3.2.0. - - -Evergreen has a number of prerequisite packages that must be installed -before you can successfully configure, compile, and install Evergreen. - -1. Begin by installing the most recent version of OpenSRF (3.2.0 or later). - You can download OpenSRF releases from http://evergreen-ils.org/opensrf-downloads/ -+ -2. 
Issue the following commands as the *root* Linux account to install
-   prerequisites using the `Makefile.install` prerequisite installer,
-   substituting `debian-buster`, `debian-stretch`, `debian-jessie`, `ubuntu-bionic`, or
-   `ubuntu-xenial` for <osname> below:
-+
-[source, bash]
-------------------------------------------------------------------------------
-make -f Open-ILS/src/extras/Makefile.install <osname>
-------------------------------------------------------------------------------
-+
-[#optional_developer_additions]
-3. OPTIONAL: Developer additions
-+
-To perform certain developer tasks from a Git source code checkout,
-including the testing of the Angular web client components,
-additional packages may be required. As the *root* Linux account:
-+
- * To install packages needed for retrieving and managing web dependencies,
-   use the <osname>-developer Makefile.install target. Currently,
-   this is only needed for building and installing the web
-   staff client.
-+
-[source, bash]
-------------------------------------------------------------------------------
-make -f Open-ILS/src/extras/Makefile.install <osname>-developer
-------------------------------------------------------------------------------
-+
- * To install packages required for building Evergreen translations, use
-   the <osname>-translator Makefile.install target.
-+
-[source, bash]
-------------------------------------------------------------------------------
-make -f Open-ILS/src/extras/Makefile.install <osname>-translator
-------------------------------------------------------------------------------
-+
- * To install packages required for building Evergreen release bundles, use
-   the <osname>-packager Makefile.install target.
-+ -[source, bash] ------------------------------------------------------------------------------- -make -f Open-ILS/src/extras/Makefile.install -packager ------------------------------------------------------------------------------- - -== Extra steps for web staff client == - -[NOTE] -Skip this entire section if you are using an official release tarball downloaded -from http://evergreen-ils.org/downloads. Otherwise, ensure you have installed the -xref:#optional_developer_additions[optional developer additions] before proceeding. - -[[install_files_for_web_staff_client]] -=== Install AngularJS files for web staff client === - -1. Building, Testing, Minification: The remaining steps all take place within - the staff JS web root: -+ -[source,sh] ------------------------------------------------------------------------------- -cd $EVERGREEN_ROOT/Open-ILS/web/js/ui/default/staff/ ------------------------------------------------------------------------------- -+ -2. Install Project-local Dependencies. npm inspects the 'package.json' file - for dependencies and fetches them from the Node package network. -+ -[source,sh] ------------------------------------------------------------------------------- -npm install # fetch JS dependencies ------------------------------------------------------------------------------- -+ -3. Run the build script. -+ -[source,sh] ------------------------------------------------------------------------------- -# build, concat+minify -npm run build-prod ------------------------------------------------------------------------------- -+ -4. OPTIONAL: Test web client code if the -developer packages were installed. 
CHROME_BIN should be set to the path to chrome or chromium, e.g.,
-   `/usr/bin/chromium`:
-+
-[source,sh]
-------------------------------------------------------------------------------
-CHROME_BIN=/path/to/chrome npm run test
-------------------------------------------------------------------------------
-
-[[install_files_for_angular_web_staff_client]]
-=== Install Angular files for web staff client ===
-
-1. Building, Testing, Minification: The remaining steps all take place within
-   the Angular staff root:
-+
-[source,sh]
-------------------------------------------------------------------------------
-cd $EVERGREEN_ROOT/Open-ILS/src/eg2/
-------------------------------------------------------------------------------
-+
-2. Install Project-local Dependencies. npm inspects the 'package.json' file
-   for dependencies and fetches them from the Node package network.
-+
-[source,sh]
-------------------------------------------------------------------------------
-npm install # fetch JS dependencies
-------------------------------------------------------------------------------
-+
-3. Run the build script.
-+
-[source,sh]
-------------------------------------------------------------------------------
-# production build, concat+minify
-ng build --prod
-------------------------------------------------------------------------------
-+
-4. OPTIONAL: Test eg2 web client code if the -developer packages were installed:
-   CHROME_BIN should be set to the path to chrome or chromium, e.g.,
-   `/usr/bin/chromium`:
-+
-[source,sh]
-------------------------------------------------------------------------------
-CHROME_BIN=/path/to/chrome npm run test
-------------------------------------------------------------------------------
-
-== Configuration and compilation instructions ==
-
-For the time being, we are still installing everything in the `/openils/`
-directory.
From the Evergreen source directory, issue the following commands as -the *user* Linux account to configure and build Evergreen: - -[source, bash] ------------------------------------------------------------------------------- -PATH=/openils/bin:$PATH ./configure --prefix=/openils --sysconfdir=/openils/conf -make ------------------------------------------------------------------------------- - -These instructions assume that you have also installed OpenSRF under `/openils/`. -If not, please adjust PATH as needed so that the Evergreen `configure` script -can find `osrf_config`. - -== Installation instructions == - -1. Once you have configured and compiled Evergreen, issue the following - command as the *root* Linux account to install Evergreen and copy - example configuration files to `/openils/conf`. -+ -[source, bash] ------------------------------------------------------------------------------- -make install ------------------------------------------------------------------------------- - -== Change ownership of the Evergreen files == - -All files in the `/openils/` directory and subdirectories must be owned by the -`opensrf` user. 
Issue the following command as the *root* Linux account to
-change the ownership on the files:
-
-[source, bash]
-------------------------------------------------------------------------------
-chown -R opensrf:opensrf /openils
-------------------------------------------------------------------------------
-
-== Run ldconfig ==
-
-On Ubuntu 18.04 or Debian Stretch / Buster, run the following command as the root user:
-
-[source, bash]
-------------------------------------------------------------------------------
-ldconfig
-------------------------------------------------------------------------------
-
-== Additional Instructions for Developers ==
-
-[NOTE]
-Skip this section if you are using an official release tarball downloaded
-from http://evergreen-ils.org/egdownloads
-
-Developers working directly with the source code from the Git repository,
-rather than an official release tarball, need to install the Dojo Toolkit
-set of JavaScript libraries. The appropriate version of Dojo is included in
-Evergreen release tarballs. Developers should install Dojo 1.3.3 by
-issuing the following commands as the *opensrf* Linux account:
-
-[source, bash]
-------------------------------------------------------------------------------
-wget http://download.dojotoolkit.org/release-1.3.3/dojo-release-1.3.3.tar.gz
-tar -C /openils/var/web/js -xzf dojo-release-1.3.3.tar.gz
-cp -r /openils/var/web/js/dojo-release-1.3.3/* /openils/var/web/js/dojo/.
-------------------------------------------------------------------------------
-
-
-== Configure the Apache Web server ==
-
-. Use the example configuration files to configure your Web server for
-the Evergreen catalog, web staff client, Web services, and administration
-interfaces.
Issue the following commands as the *root* Linux account:
-+
-[source,bash]
-------------------------------------------------------------------------------------
-cp Open-ILS/examples/apache_24/eg_24.conf /etc/apache2/sites-available/eg.conf
-cp Open-ILS/examples/apache_24/eg_vhost_24.conf /etc/apache2/eg_vhost.conf
-cp Open-ILS/examples/apache_24/eg_startup /etc/apache2/
-# Now set up SSL
-mkdir /etc/apache2/ssl
-cd /etc/apache2/ssl
-------------------------------------------------------------------------------------
-+
-. The `openssl` command cuts a new SSL key for your Apache server. For a
-production server, you should purchase a signed SSL certificate, but you can
-just use a self-signed certificate and accept the warnings in the client
-and browser during testing and development. Create an SSL key for the Apache
-server by issuing the following command as the *root* Linux account:
-+
-[source,bash]
-------------------------------------------------------------------------------
-openssl req -new -x509 -days 365 -nodes -out server.crt -keyout server.key
-------------------------------------------------------------------------------
-+
-. As the *root* Linux account, edit the `eg.conf` file that you copied into
-place.
-  a. To enable access to the offline upload / execute interface from any
-     workstation on any network, make the following change (and note that
-     you *must* secure this for a production instance):
-     * Replace `Require host 10.0.0.0/8` with `Require all granted`
-. Change the user for the Apache server.
-  * As the *root* Linux account, edit
-    `/etc/apache2/envvars`. Change `export APACHE_RUN_USER=www-data` to
-    `export APACHE_RUN_USER=opensrf`.
-. As the *root* Linux account, configure Apache with KeepAlive settings
-  appropriate for Evergreen.
Higher values can improve the performance of a
-  single client by allowing multiple requests to be sent over the same TCP
-  connection, but increase the risk of using up all available Apache child
-  processes and memory.
-  * Edit `/etc/apache2/apache2.conf`.
-    a. Change `KeepAliveTimeout` to `1`.
-    b. Change `MaxKeepAliveRequests` to `100`.
-. As the *root* Linux account, configure the prefork module to start and keep
-  enough Apache servers available to provide quick responses to clients without
-  running out of memory. The following settings are a good starting point for a
-  site that exposes the default Evergreen catalogue to the web:
-+
-.`/etc/apache2/mods-available/mpm_prefork.conf`
-[source,bash]
-------------------------------------------------------------------------------
-<IfModule mpm_prefork_module>
-   StartServers            15
-   MinSpareServers          5
-   MaxSpareServers         15
-   MaxRequestWorkers       75
-   MaxConnectionsPerChild 500
-</IfModule>
-------------------------------------------------------------------------------
-+
-. As the *root* user, enable the mpm_prefork module:
-+
-[source,bash]
-------------------------------------------------------------------------------
-a2dismod mpm_event
-a2enmod mpm_prefork
-------------------------------------------------------------------------------
-+
-. As the *root* Linux account, enable the Evergreen site:
-+
-[source,bash]
-------------------------------------------------------------------------------
-a2dissite 000-default # OPTIONAL: disable the default site (the "It Works" page)
-a2ensite eg.conf
-------------------------------------------------------------------------------
-+
-.
As the *root* Linux account, enable Apache to write - to the lock directory; this is currently necessary because Apache - is running as the `opensrf` user: -+ -[source,bash] ------------------------------------------------------------------------------- -chown opensrf /var/lock/apache2 ------------------------------------------------------------------------------- - -Learn more about additional Apache options in the following sections: - - * xref:admin:apache_rewrite_tricks.adoc#apache_rewrite_tricks[Apache Rewrite Tricks] - * xref:admin:apache_access_handler.adoc#apache_access_handler_perl_module[Apache Access Handler Perl Module] - -== Configure OpenSRF for the Evergreen application == - -There are a number of example OpenSRF configuration files in `/openils/conf/` -that you can use as a template for your Evergreen installation. Issue the -following commands as the *opensrf* Linux account: - -[source, bash] ------------------------------------------------------------------------------- -cp -b /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml -cp -b /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml ------------------------------------------------------------------------------- - -When you installed OpenSRF, you created four Jabber users on two -separate domains and edited the `opensrf_core.xml` file accordingly. Please -refer back to the OpenSRF README and, as the *opensrf* Linux account, edit the -Evergreen version of the `opensrf_core.xml` file using the same Jabber users -and domains as you used while installing and testing OpenSRF. - -[NOTE] -The `-b` flag tells the `cp` command to create a backup version of the -destination file. The backup version of the destination file has a tilde (`~`) -appended to the file name, so if you have forgotten the Jabber users and -domains, you can retrieve the settings from the backup version of the files. 
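The backup behaviour described in the note can be confirmed in a scratch directory. This is just a demonstration of GNU `cp -b`; the file names mirror the ones above, but the contents are made up:

```shell
#!/bin/sh
# Demonstrate cp -b: the overwritten destination survives as "<name>~".
cd "$(mktemp -d)"
echo "edited config"  > opensrf_core.xml
echo "example config" > opensrf_core.xml.example
cp -b opensrf_core.xml.example opensrf_core.xml
cat opensrf_core.xml     # the example content just copied in
cat opensrf_core.xml~    # the previously edited file, preserved
```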
-
-`eg_db_config`, described in xref:#creating_the_evergreen_database[Creating the Evergreen database], sets the database connection information in `opensrf.xml` for you.
-
-=== Configure action triggers for the Evergreen application ===
-_Action Triggers_ provide hooks for the system to perform actions when a given
-event occurs; for example, to generate reminder or overdue notices, the
-`checkout.due` hook is processed and events are triggered for potential actions
-if there is no checkin time.
-
-To enable the default set of hooks, issue the following command as the
-*opensrf* Linux account:
-
-[source, bash]
-------------------------------------------------------------------------------
-cp -b /openils/conf/action_trigger_filters.json.example /openils/conf/action_trigger_filters.json
-------------------------------------------------------------------------------
-
-For more information about configuring and running action triggers, see
-xref:admin:actiontriggers_process.adoc#processing_action_triggers[Notifications / Action Triggers].
-
-[#creating_the_evergreen_database]
-== Creating the Evergreen database ==
-
-=== Setting up the PostgreSQL server ===
-
-For production use, most libraries install the PostgreSQL database server on a
-dedicated machine. Therefore, by default, the `Makefile.install` prerequisite
-installer does *not* install the PostgreSQL 9 database server that is required
-by every Evergreen system. You can install the packages required by Debian or
-Ubuntu on the machine of your choice using the following commands as the
-*root* Linux account:
-
-. Installing PostgreSQL server packages
-
-Each OS build target provides the postgres server installation packages
-required for each operating system. To install Postgres server packages,
-use the make target 'postgres-server-<osname>'. Choose the most appropriate
-command below based on your operating system. This will install PostgreSQL 9.6,
-the minimum supported version.
- -[source, bash] ------------------------------------------------------------------------------- -make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-buster -make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-stretch -make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-jessie -make -f Open-ILS/src/extras/Makefile.install postgres-server-ubuntu-xenial -make -f Open-ILS/src/extras/Makefile.install postgres-server-ubuntu-bionic ------------------------------------------------------------------------------- - -To install PostgreSQL version 10, use the following command for your operating -system: - -[source, bash] ------------------------------------------------------------------------------- -make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-buster-10 -make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-stretch-10 -make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-jessie-10 -make -f Open-ILS/src/extras/Makefile.install postgres-server-ubuntu-xenial-10 -make -f Open-ILS/src/extras/Makefile.install postgres-server-ubuntu-bionic-10 ------------------------------------------------------------------------------- - -For a standalone PostgreSQL server, install the following Perl modules for your -distribution as the *root* Linux account: - -.(Debian and Ubuntu) -No extra modules required for these distributions. - -You need to create a PostgreSQL superuser to create and access the database. -Issue the following command as the *postgres* Linux account to create a new -PostgreSQL superuser named `evergreen`. 
When prompted, enter the new user's -password: - -[source, bash] ------------------------------------------------------------------------------- -createuser -s -P evergreen ------------------------------------------------------------------------------- - -.Enabling connections to the PostgreSQL database - -Your PostgreSQL database may be configured by default to prevent connections, -for example, it might reject attempts to connect via TCP/IP or from other -servers. To enable TCP/IP connections from localhost, check your `pg_hba.conf` -file, found in the `/etc/postgresql/` directory on Debian and Ubuntu. -A simple way to enable TCP/IP -connections from localhost to all databases with password authentication, which -would be suitable for a test install of Evergreen on a single server, is to -ensure the file contains the following entries _before_ any "host ... ident" -entries: - ------------------------------------------------------------------------------- -host all all ::1/128 md5 -host all all 127.0.0.1/32 md5 ------------------------------------------------------------------------------- - -When you change the `pg_hba.conf` file, you will need to reload PostgreSQL to -make the changes take effect. For more information on configuring connectivity -to PostgreSQL, see -http://www.postgresql.org/docs/devel/static/auth-pg-hba-conf.html - -=== Creating the Evergreen database and schema === - -Once you have created the *evergreen* PostgreSQL account, you also need to -create the database and schema, and configure your configuration files to point -at the database server. 
Issue the following command as the *root* Linux account
-from inside the Evergreen source directory, replacing <user>, <password>,
-<hostname>, <port>, and <dbname> with the appropriate values for your
-PostgreSQL database (where <user> and <password> are for the *evergreen*
-PostgreSQL account you just created), and replace <admin-user> and
-<admin-password> with the values you want for the *egadmin* Evergreen
-administrator account:
-
-[source, bash]
-------------------------------------------------------------------------------
-perl Open-ILS/src/support-scripts/eg_db_config --update-config \
-       --service all --create-database --create-schema --create-offline \
-       --user <user> --password <password> --hostname <hostname> --port <port> \
-       --database <dbname> --admin-user <admin-user> --admin-pass <admin-password>
-------------------------------------------------------------------------------
-
-This creates the database and schema and configures all of the services in
-your `/openils/conf/opensrf.xml` configuration file to point to that database.
-It also creates the configuration files required by the Evergreen `cgi-bin`
-administration scripts, and sets the user name and password for the *egadmin*
-Evergreen administrator account to your requested values.
-
-You can get a complete set of options for `eg_db_config` by passing the
-`--help` parameter.
-
-=== Loading sample data ===
-
-If you add the `--load-all-sample` parameter to the `eg_db_config` command,
-a set of authority and bibliographic records, call numbers, copies, staff
-and regular users, and transactions will be loaded into your target
-database. This sample dataset is commonly referred to as the _concerto_
-sample data, and can be useful for testing out Evergreen functionality and
-for creating problem reports that developers can easily recreate with their
-own copy of the _concerto_ sample data.
-
-=== Creating the database on a remote server ===
-
-In a production instance of Evergreen, your PostgreSQL server should be
-installed on a dedicated server.
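Whether the database is local or remote, `eg_db_config` can only connect if `pg_hba.conf` admits the connection (see the entries shown earlier). Below is a small sketch for checking that the two localhost entries are present before reloading PostgreSQL; the function name is invented for illustration, and the config path varies by PostgreSQL version (e.g. `/etc/postgresql/9.6/main/pg_hba.conf`):

```shell
#!/bin/sh
# Sketch: succeed only if a pg_hba.conf contains both localhost md5 entries.
hba_has_localhost_md5() {
    # $1 = path to a pg_hba.conf file
    grep -Eq '^host[[:space:]]+all[[:space:]]+all[[:space:]]+127\.0\.0\.1/32[[:space:]]+md5' "$1" &&
    grep -Eq '^host[[:space:]]+all[[:space:]]+all[[:space:]]+::1/128[[:space:]]+md5' "$1"
}

# Hypothetical usage, followed by a reload so the change takes effect:
# hba_has_localhost_md5 /etc/postgresql/9.6/main/pg_hba.conf \
#     && sudo -u postgres psql -c 'SELECT pg_reload_conf();'
```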
==== PostgreSQL 9.6 and later ====

To create the database instance on a remote database server running PostgreSQL
9.6 or later, simply use the `--create-database` flag on `eg_db_config`.

== Starting Evergreen ==

1. As the *root* Linux account, start the `memcached` and `ejabberd` services
(if they aren't already running):
+
[source, bash]
------------------------------------------------------------------------------
/etc/init.d/ejabberd start
/etc/init.d/memcached start
------------------------------------------------------------------------------
+
2. As the *opensrf* Linux account, start Evergreen. The `-l` flag in the
following command is only necessary if you want to force Evergreen to treat the
hostname as `localhost`; if you configured `opensrf.xml` using the real
hostname of your machine as returned by `perl -MNet::Domain -e 'print
Net::Domain::hostfqdn() . "\n";'`, you should not use the `-l` flag.
+
[source, bash]
------------------------------------------------------------------------------
osrf_control -l --start-all
------------------------------------------------------------------------------
+
 ** If you receive the error message `bash: osrf_control: command not found`,
 then your environment variable `PATH` does not include the `/openils/bin`
 directory; this should have been set in the *opensrf* Linux account's
 `.bashrc` configuration file. To manually set the `PATH` variable, edit the
 configuration file `~/.bashrc` as the *opensrf* Linux account and add the
 following line:
+
[source, bash]
------------------------------------------------------------------------------
export PATH=$PATH:/openils/bin
------------------------------------------------------------------------------
+
3.
As the *opensrf* Linux account, generate the Web files needed by the web staff
client and catalogue and update the organization unit proximity (you need to do
this the first time you start Evergreen, and again each time you change the
library org unit configuration):
+
[source, bash]
------------------------------------------------------------------------------
autogen.sh
------------------------------------------------------------------------------
+
4. As the *root* Linux account, restart the Apache Web server:
+
[source, bash]
------------------------------------------------------------------------------
/etc/init.d/apache2 restart
------------------------------------------------------------------------------
+
If the Apache Web server was running when you started the OpenSRF services, you
might not be able to successfully log in to the OPAC or web staff client until the
Apache Web server is restarted.

== Testing connections to Evergreen ==

Once you have installed and started Evergreen, test your connection to
Evergreen via `srfsh`.
As the *opensrf* Linux account, issue the following
commands to start `srfsh` and try to log onto the Evergreen server using the
*egadmin* Evergreen administrator user name and password that you set using the
`eg_db_config` command:

[source, bash]
------------------------------------------------------------------------------
/openils/bin/srfsh
srfsh% login <admin-user> <admin-pass>
------------------------------------------------------------------------------

You should see a result like:

    Received Data: "250bf1518c7527a03249858687714376"
    ------------------------------------
    Request Completed Successfully
    Request Time in seconds: 0.045286
    ------------------------------------

    Received Data: {
       "ilsevent":0,
       "textcode":"SUCCESS",
       "desc":" ",
       "pid":21616,
       "stacktrace":"oils_auth.c:304",
       "payload":{
          "authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a",
          "authtime":420
       }

    }

    ------------------------------------
    Request Completed Successfully
    Request Time in seconds: 1.336568
    ------------------------------------

[[install-troubleshooting-1]]
If this does not work, it's time to do some troubleshooting.

 * As the *opensrf* Linux account, run the `settings-tester.pl` script to see
   if it finds any system configuration problems. The script is found at
   `Open-ILS/src/support-scripts/settings-tester.pl` in the Evergreen source
   tree.
 * Follow the steps in the http://evergreen-ils.org/dokuwiki/doku.php?id=troubleshooting:checking_for_errors[troubleshooting guide].
 * If you have faithfully followed the entire set of installation steps
   listed here, you are probably extremely close to a working system.
   Gather your configuration files and log files and contact the
   http://evergreen-ils.org/communicate/mailing-lists/[Evergreen development
   mailing list] for assistance before making any drastic changes to your system
   configuration.

== Getting help ==

Need help installing or using Evergreen?
Join the mailing lists at -http://evergreen-ils.org/communicate/mailing-lists/ or contact us on the Freenode -IRC network on the #evergreen channel. - -== License == - -This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 -Unported License. To view a copy of this license, visit -http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative -Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA. diff --git a/docs-antora/modules/installation/pages/server_upgrade.adoc b/docs-antora/modules/installation/pages/server_upgrade.adoc deleted file mode 100644 index cbd647b426..0000000000 --- a/docs-antora/modules/installation/pages/server_upgrade.adoc +++ /dev/null @@ -1,322 +0,0 @@ -= Upgrading the Evergreen Server = -:toc: - -Before upgrading, it is important to carefully plan an upgrade strategy to minimize system downtime and service interruptions. -All of the steps in this chapter are to be completed from the command line. - -== Software Prerequisites == - - * **PostgreSQL**: The minimum supported version is 9.6. - * **Linux**: Evergreen 3.X.X has been tested on Debian Stretch (9.0), - Debian Jessie (8.0), Ubuntu Xenial Xerus (16.04), and Ubuntu Bionic Beaver (18.04). - If you are running an older version of these distributions, you may want - to upgrade before upgrading Evergreen. For instructions on upgrading these - distributions, visit the Debian or Ubuntu websites. - * **OpenSRF**: The minimum supported version of OpenSRF is 3.2.0. - - -In the following instructions, you are asked to perform certain steps as either the *root* or *opensrf* user. - - * **Debian**: To become the *root* user, issue the `su` command and enter the password of the root user. - * **Ubuntu**: To become the *root* user, issue the `sudo su` command and enter the password of your current user. - -To switch from the *root* user to a different user, issue the `su - [user]` -command; for example, `su - opensrf`. 
Once you have become a non-root user, to -become the *root* user again simply issue the `exit` command. - -== Upgrade the Evergreen code == - -The following steps guide you through a simplistic upgrade of a production -server. You must adjust these steps to accommodate your customizations such -as catalogue skins. - -. Stop Evergreen and back up your data: - .. As *root*, stop the Apache web server. - .. As the *opensrf* user, stop all Evergreen and OpenSRF services: -+ -[source, bash] ------------------------------ -osrf_control --localhost --stop-all ------------------------------ -+ - .. Back up the /openils directory. -. Upgrade OpenSRF. Download and install the latest version of OpenSRF from -the https://evergreen-ils.org/opensrf-downloads/[OpenSRF download page]. -. As the *opensrf* user, download and extract Evergreen 3.X.X: -+ -[source, bash] ------------------------------------------------ -wget https://evergreen-ils.org/downloads/Evergreen-ILS-3.X.X.tar.gz -tar xzf Evergreen-ILS-3.X.X.tar.gz ------------------------------------------------ -+ -[NOTE] -For the latest edition of Evergreen, check the https://evergreen-ils.org/egdownloads/[Evergreen download page] and adjust upgrading instructions accordingly. - -. 
As the *root* user, install the prerequisites:
+
[source, bash]
------------------------------------------------------------------------------
cd /home/opensrf/Evergreen-ILS-3.X.X
------------------------------------------------------------------------------
+
On the next command, replace `[distribution]` with one of these values for your
distribution of Debian or Ubuntu:
+
indexterm:[Linux, Debian]
indexterm:[Linux, Ubuntu]
+
 * `debian-stretch` for Debian Stretch (9.0) (EDI compatibility in progress)
 * `debian-jessie` for Debian Jessie (8.0) (See https://bugs.launchpad.net/evergreen/+bug/1342227[Bug 1342227] if you want to use EDI)
 * `ubuntu-xenial` for Ubuntu Xenial Xerus (16.04) (EDI compatibility in progress)

+
[source, bash]
------------------------------------------------------------------------------
make -f Open-ILS/src/extras/Makefile.install [distribution]
------------------------------------------------------------------------------
+
. As the *opensrf* user, configure and compile Evergreen:
+
[source, bash]
------------------------------------------------------------------------------
cd /home/opensrf/Evergreen-ILS-3.X.X
PATH=/openils/bin:$PATH ./configure --prefix=/openils --sysconfdir=/openils/conf
make
------------------------------------------------------------------------------
+
These instructions assume that you have also installed OpenSRF under /openils/. If not, please adjust PATH as needed so that the Evergreen configure script can find osrf_config.
+
.
As the *root* user, install Evergreen:
+
[source, bash]
------------------------------------------------------------------------------
cd /home/opensrf/Evergreen-ILS-3.X.X
make install
------------------------------------------------------------------------------
+

**Note** that this version of Evergreen does not use the legacy XUL staff
client by default, but if you wish to use a versioned XUL staff client, you
can supply `STAFF_CLIENT_STAMP` during the `make install` step like this:
+
[source, bash]
------------------------------------------------------------------------------
cd /home/opensrf/Evergreen-ILS-3.X.X
make STAFF_CLIENT_STAMP_ID=rel_3_x_x install
------------------------------------------------------------------------------
+
. As the *root* user, change all files to be owned by the opensrf user and group:
+
[source, bash]
------------------------------------------------------------------------------
chown -R opensrf:opensrf /openils
------------------------------------------------------------------------------
+
. (Optional, only if you are using the legacy staff client)
 As the *opensrf* user, update the server symlink in /openils/var/web/xul/:
+
[source, bash]
------------------------------------------------------------------------------
cd /openils/var/web/xul/
rm server
ln -sf rel_3_x_x/server server
------------------------------------------------------------------------------
+
. As the *opensrf* user, update opensrf_core.xml and opensrf.xml by copying the
 new example files (/openils/conf/opensrf_core.xml.example and
 /openils/conf/opensrf.xml.example). The _-b_ option creates a backup copy of the old file.
+
[source, bash]
------------------------------------------------------------------------------
cp -b /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml
cp -b /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml
------------------------------------------------------------------------------
+
[CAUTION]
Copying these configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying them.
+
.
As the *opensrf* user, update the configuration files:
+
[source, bash]
--------------------------------------------------------------------------
cd /home/opensrf/Evergreen-ILS-3.X.X
perl Open-ILS/src/support-scripts/eg_db_config --update-config --service all \
--create-offline --database evergreen --host localhost --user evergreen --password evergreen
--------------------------------------------------------------------------
+
. As the *root* user, update the Apache files:
+
indexterm:[Apache]
+
Use the example configuration files in `Open-ILS/examples/apache/` (for
Apache versions below 2.4) or `Open-ILS/examples/apache_24/` (for Apache
versions 2.4 or greater) to configure your Web server for the Evergreen
catalog, staff client, Web services, and administration interfaces. Issue the
following commands as the *root* Linux account:
+
[CAUTION]
Copying these Apache configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying them.
For example, if you purchased an SSL certificate, you will need to edit eg.conf to point to the appropriate SSL certificate files.
The `diff` command can be used to show the differences between the distribution version and your customized version.
+
.. Update _/etc/apache2/eg_startup_ by copying the example from _Open-ILS/examples/apache/eg_startup_.
+
[source, bash]
----------------------------------------------------------
cp /home/opensrf/Evergreen-ILS-3.X.X/Open-ILS/examples/apache/eg_startup /etc/apache2/eg_startup
----------------------------------------------------------
+
.. Update /etc/apache2/eg_vhost.conf by copying the example from Open-ILS/examples/apache/eg_vhost.conf.
+
[source, bash]
----------------------------------------------------------
cp /home/opensrf/Evergreen-ILS-3.X.X/Open-ILS/examples/apache/eg_vhost.conf /etc/apache2/eg_vhost.conf
----------------------------------------------------------
+
..
Update /etc/apache2/sites-available/eg.conf by copying the example from Open-ILS/examples/apache/eg.conf. -+ -[source, bash] ----------------------------------------------------------- -cp /home/opensrf/Evergreen-ILS-3.X.X/Open-ILS/examples/apache/eg.conf /etc/apache2/sites-available/eg.conf ----------------------------------------------------------- - -== Upgrade the Evergreen database schema == - -indexterm:[database schema] - -The upgrade of the Evergreen database schema is the lengthiest part of the -upgrade process for sites with a significant amount of production data. - -Before running the upgrade script against your production Evergreen database, -back up your database, restore it to a test server, and run the upgrade script -against the test server. This enables you to determine how long the upgrade -will take and whether any local customizations present problems for the -stock upgrade script that require further tailoring of the upgrade script. -The backup also enables you to cleanly restore your production data if -anything goes wrong during the upgrade. - -[NOTE] -============= -Evergreen provides incremental upgrade scripts that allow you to upgrade -from one minor version to the next until you have the current version of -the schema. 
For example, if you want to upgrade from 2.9.0 to 2.11.0, you
would run the following upgrade scripts:

- 2.9.0-2.9.1-upgrade-db.sql
- 2.9.1-2.9.2-upgrade-db.sql
- 2.9.2-2.9.3-upgrade-db.sql
- 2.9.3-2.10.0-upgrade-db.sql (this is a major version upgrade)
- 2.10.0-2.10.1-upgrade-db.sql
- 2.10.1-2.10.2-upgrade-db.sql
- 2.10.2-2.10.3-upgrade-db.sql
- 2.10.3-2.10.4-upgrade-db.sql
- 2.10.4-2.10.5-upgrade-db.sql
- 2.10.5-2.10.6-upgrade-db.sql
- 2.10.6-2.10.7-upgrade-db.sql
- 2.10.7-2.11.0-upgrade-db.sql (this is a major version upgrade)

Note that you do *not* necessarily want to run additional upgrade scripts to
upgrade to the newest version, since there is currently no automated way, for
example, to upgrade from 2.9.4+ to 2.10. Only upgrade as far as necessary to
reach the major version upgrade script (in this example, as far as 2.9.3).

=============

[CAUTION]
Pay attention to error output as you run the upgrade scripts. If you encounter errors
that you cannot resolve yourself through additional troubleshooting, please
report the errors to the https://evergreen-ils.org/communicate/mailing-lists/[Evergreen
Technical Discussion List].

Run the following steps (including other upgrade scripts, as noted above)
as a user with the ability to connect to the database server.

[source, bash]
----------------------------------------------------------
cd /home/opensrf/Evergreen-ILS-3.X.X/Open-ILS/src/sql/Pg
psql -U evergreen -h localhost -f version-upgrade/3.X.W-3.X.X-upgrade-db.sql evergreen
----------------------------------------------------------

[TIP]
After some database upgrade scripts finish, you may see a
note on how to reingest your bib records. You may run this after you have
completed the entire upgrade and tested your system. Reingesting records
may take a long time depending on the number of bib records in your system.

== Restart Evergreen and Test ==

.
As the *root* user, restart memcached to clear out all old user sessions. -+ -[source, bash] --------------------------------------------------------------- -service memcached restart --------------------------------------------------------------- -+ -. As the *opensrf* user, start all Evergreen and OpenSRF services: -+ -[source, bash] --------------------------------------------------------------- -osrf_control --localhost --start-all --------------------------------------------------------------- -+ -. As the *opensrf* user, run autogen to refresh the static organizational data files: -+ -[source, bash] --------------------------------------------------------------- -cd /openils/bin -./autogen.sh --------------------------------------------------------------- -+ -. Start srfsh and try logging in using your Evergreen username and password: -+ -[source, bash] --------------------------------------------------------------- -/openils/bin/srfsh -srfsh% login username password --------------------------------------------------------------- -+ -You should see a result like: -+ -[source, bash] --------------------------------------------------------------- -Received Data: "250bf1518c7527a03249858687714376" - ------------------------------------ - Request Completed Successfully - Request Time in seconds: 0.045286 - ------------------------------------ - - Received Data: { - "ilsevent":0, - "textcode":"SUCCESS", - "desc":" ", - "pid":21616, - "stacktrace":"oils_auth.c:304", - "payload":{ - "authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a", - "authtime":420 - } - - } - - ------------------------------------ - Request Completed Successfully - Request Time in seconds: 1.336568 - ------------------------------------ --------------------------------------------------------------- -+ -If this does not work, it's time to do some -xref:installation:server_installation.adoc#install-troubleshooting-1[troubleshooting]. -+ -. As the *root* user, start the Apache web server. 
-+ -If you encounter errors, refer to the -xref:installation:server_installation.adoc#install-troubleshooting-1[troubleshooting] section -of this documentation for tips on finding solutions and seeking further assistance -from the Evergreen community. - -== Review Release Notes == - -Review this version's release notes for other tasks -that need to be done after upgrading. If you have upgraded over several -major versions, you will need to review the release notes for each version also. diff --git a/docs-antora/modules/installation/pages/system_requirements.adoc b/docs-antora/modules/installation/pages/system_requirements.adoc deleted file mode 100644 index 31cbd72e56..0000000000 --- a/docs-antora/modules/installation/pages/system_requirements.adoc +++ /dev/null @@ -1,35 +0,0 @@ -= System Requirements = -:toc: - -== Server Minimum Requirements == - -The following are the base requirements setting Evergreen up on a test server: - - * An available desktop, server or virtual image - * 4GB RAM, or more if your server also runs a graphical desktop - * Linux Operating System (community supports Debian, Ubuntu, or Fedora) - * Ports 80 and 443 should be opened in your firewall for TCP connections to allow OPAC and staff client connections to the Evergreen server. - -== Web Client Requirements == - -The current stable release of Firefox or Chrome is required to run the web -client in a browser. - -== Staff Client Requirements == - -Staff terminals connect to the central database using the Evergreen staff client, available for download from The Evergreen download page. -The staff client must be installed on each staff workstation and requires at minimum: - - * Windows, Mac OS X, or Linux operating system - * a reliable high speed Internet connection - * 2GB RAM - * The staff client uses the TCP protocol on ports 80 and 443 to communicate with the Evergreen server. 
*Barcode Scanners*

Evergreen will work with virtually any barcode scanner; if it worked with your legacy system, it should work on Evergreen.

*Printers*

Evergreen can use any printer configured for your terminal to print receipts, check-out slips, holds lists, etc. The single exception is spine label printing,
which is still under development. Evergreen currently formats spine labels for output to a label roll printer. If you do not have a roll printer, manual formatting may be required.
diff --git a/docs-antora/modules/local_admin/_attributes.adoc b/docs-antora/modules/local_admin/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/local_admin/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ :attachmentsdir: {moduledir}/assets/attachments
:examplesdir: {moduledir}/examples
:imagesdir: {moduledir}/assets/images
:partialsdir: {moduledir}/pages/_partials
diff --git a/docs-antora/modules/local_admin/nav.adoc b/docs-antora/modules/local_admin/nav.adoc deleted file mode 100644 index 30fcf92cfc..0000000000 --- a/docs-antora/modules/local_admin/nav.adoc +++ /dev/null @@ -1,13 +0,0 @@ * xref:local_admin:introduction.adoc[Local Administration]
** xref:admin:librarysettings.adoc[Library Settings Editor]
** xref:admin:lsa-address_alert.adoc[Address Alert]
** xref:admin:lsa-barcode_completion.adoc[Barcode Completion]
** xref:admin:hold_driven_recalls.adoc[Hold-driven recalls]
** xref:admin:emergency_closing_handler.adoc[Emergency Closing Handler]
** xref:admin:actiontriggers.adoc[Notifications / Action Triggers]
*** xref:admin:actiontriggers_process.adoc[Processing Action Triggers]
** xref:admin:staff_client-recent_searches.adoc[Recent Staff Searches]
** xref:admin:lsa-standing_penalties.adoc[Standing Penalties]
** xref:admin:lsa-statcat.adoc[Statistical Categories Editor]
** xref:admin:popularity_badges_web_client.adoc[Statistical Popularity Badges]
** xref:admin:lsa-work_log.adoc[Work Log]
diff --git
a/docs-antora/modules/local_admin/pages/_attributes.adoc b/docs-antora/modules/local_admin/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/local_admin/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/local_admin/pages/introduction.adoc b/docs-antora/modules/local_admin/pages/introduction.adoc deleted file mode 100644 index b3d20385bc..0000000000 --- a/docs-antora/modules/local_admin/pages/introduction.adoc +++ /dev/null @@ -1,4 +0,0 @@ -= Introduction = - -This part covers the options in the Local Administration menu found in the staff -client. diff --git a/docs-antora/modules/opac/_attributes.adoc b/docs-antora/modules/opac/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/opac/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-01.png b/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-01.png deleted file mode 100644 index c7f91182ec..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-01.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-02.png b/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-02.png deleted file mode 100644 index 6ce6669ecb..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-02.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-03.png b/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-03.png deleted file mode 100644 index df4b5c47cb..0000000000 Binary files 
a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-03.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-04.png b/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-04.png deleted file mode 100644 index 33c901d425..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-04.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-06.png b/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-06.png deleted file mode 100644 index 1a84d018b7..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/BatchActionsSearch-06.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC1.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC1.jpg deleted file mode 100644 index 847bbb5182..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC1.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC10.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC10.jpg deleted file mode 100644 index 944159369e..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC10.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC11.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC11.jpg deleted file mode 100644 index d3ed5bfba8..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC11.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC12.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC12.jpg deleted file mode 100644 index 7255464160..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC12.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC13.jpg 
b/docs-antora/modules/opac/assets/images/media/Kids_OPAC13.jpg deleted file mode 100644 index 1693ad15eb..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC13.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC14.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC14.jpg deleted file mode 100644 index 3c0214b404..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC14.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC15.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC15.jpg deleted file mode 100644 index a483c1654d..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC15.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC16.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC16.jpg deleted file mode 100644 index 33cce3d7f3..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC16.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC17.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC17.jpg deleted file mode 100644 index c7c845bcd6..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC17.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC2.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC2.jpg deleted file mode 100644 index aebcdfef2c..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC2.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC4.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC4.jpg deleted file mode 100644 index 9b14495aa0..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC4.jpg and /dev/null differ diff --git 
a/docs-antora/modules/opac/assets/images/media/Kids_OPAC5.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC5.jpg deleted file mode 100644 index 61b6c3a6aa..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC5.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC6.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC6.jpg deleted file mode 100644 index 3bf605bf27..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC6.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC7.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC7.jpg deleted file mode 100644 index 604c76beb1..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC7.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC8.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC8.jpg deleted file mode 100644 index d8b2f0889f..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC8.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/Kids_OPAC9.jpg b/docs-antora/modules/opac/assets/images/media/Kids_OPAC9.jpg deleted file mode 100644 index 8754a8ca28..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/Kids_OPAC9.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/My_Lists.png b/docs-antora/modules/opac/assets/images/media/My_Lists.png deleted file mode 100644 index c19ecd3cdf..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/My_Lists.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/My_Lists1.jpg b/docs-antora/modules/opac/assets/images/media/My_Lists1.jpg deleted file mode 100644 index feb5fe32ec..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/My_Lists1.jpg and /dev/null differ diff --git 
a/docs-antora/modules/opac/assets/images/media/My_Lists3.jpg b/docs-antora/modules/opac/assets/images/media/My_Lists3.jpg deleted file mode 100644 index 562749bad0..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/My_Lists3.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/My_Lists6.jpg b/docs-antora/modules/opac/assets/images/media/My_Lists6.jpg deleted file mode 100644 index ac11709917..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/My_Lists6.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/My_Lists7.jpg b/docs-antora/modules/opac/assets/images/media/My_Lists7.jpg deleted file mode 100644 index 06c2ed7904..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/My_Lists7.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/My_Lists_dd.png b/docs-antora/modules/opac/assets/images/media/My_Lists_dd.png deleted file mode 100644 index 9f41ad5e21..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/My_Lists_dd.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/advholdoption_6.jpg b/docs-antora/modules/opac/assets/images/media/advholdoption_6.jpg deleted file mode 100644 index 71e7585fd9..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/advholdoption_6.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/advsrchpg_1.jpg b/docs-antora/modules/opac/assets/images/media/advsrchpg_1.jpg deleted file mode 100644 index 32d465a1c7..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/advsrchpg_1.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/catalogue-10.png b/docs-antora/modules/opac/assets/images/media/catalogue-10.png deleted file mode 100644 index 8cb6c4374e..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/catalogue-10.png and /dev/null 
differ diff --git a/docs-antora/modules/opac/assets/images/media/catalogue-3.png b/docs-antora/modules/opac/assets/images/media/catalogue-3.png deleted file mode 100644 index 610d4a9fea..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/catalogue-3.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/catalogue-5.png b/docs-antora/modules/opac/assets/images/media/catalogue-5.png deleted file mode 100644 index dc8cbf81bd..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/catalogue-5.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/catalogue-6.png b/docs-antora/modules/opac/assets/images/media/catalogue-6.png deleted file mode 100644 index 2cf678c27c..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/catalogue-6.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/catalogue-7.png b/docs-antora/modules/opac/assets/images/media/catalogue-7.png deleted file mode 100644 index 2ebec0c7af..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/catalogue-7.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/catalogue-8.png b/docs-antora/modules/opac/assets/images/media/catalogue-8.png deleted file mode 100644 index ae3973f0b3..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/catalogue-8.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/catalogue-8a.png b/docs-antora/modules/opac/assets/images/media/catalogue-8a.png deleted file mode 100644 index 2eb504a0f1..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/catalogue-8a.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/catalogue-9.png b/docs-antora/modules/opac/assets/images/media/catalogue-9.png deleted file mode 100644 index 8692d738ed..0000000000 Binary files 
a/docs-antora/modules/opac/assets/images/media/catalogue-9.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/message_center10.PNG b/docs-antora/modules/opac/assets/images/media/message_center10.PNG deleted file mode 100644 index 9a25289175..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/message_center10.PNG and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/message_center11.PNG b/docs-antora/modules/opac/assets/images/media/message_center11.PNG deleted file mode 100644 index a2b3ed71fb..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/message_center11.PNG and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/message_center12.PNG b/docs-antora/modules/opac/assets/images/media/message_center12.PNG deleted file mode 100644 index d81efdc8f0..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/message_center12.PNG and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/mrholdgf_9.jpg b/docs-antora/modules/opac/assets/images/media/mrholdgf_9.jpg deleted file mode 100644 index 32a2d59c73..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/mrholdgf_9.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/my_list_call_numbers.png b/docs-antora/modules/opac/assets/images/media/my_list_call_numbers.png deleted file mode 100644 index 62e75e36d7..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/my_list_call_numbers.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/opensearch1.png b/docs-antora/modules/opac/assets/images/media/opensearch1.png deleted file mode 100644 index 9311defc00..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/opensearch1.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/opensearch2.png 
b/docs-antora/modules/opac/assets/images/media/opensearch2.png deleted file mode 100644 index 630cd39701..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/opensearch2.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/opensearch3.png b/docs-antora/modules/opac/assets/images/media/opensearch3.png deleted file mode 100644 index 832febddda..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/opensearch3.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/opensearch4.png b/docs-antora/modules/opac/assets/images/media/opensearch4.png deleted file mode 100644 index 22a04e35a9..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/opensearch4.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/other-formats-and-editions.png b/docs-antora/modules/opac/assets/images/media/other-formats-and-editions.png deleted file mode 100644 index 1c9565f64c..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/other-formats-and-editions.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/placehold_5.jpg b/docs-antora/modules/opac/assets/images/media/placehold_5.jpg deleted file mode 100644 index 0910c3467d..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/placehold_5.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/recorddetailpg_8.jpg b/docs-antora/modules/opac/assets/images/media/recorddetailpg_8.jpg deleted file mode 100644 index 7835c360a1..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/recorddetailpg_8.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/searchfilters1.PNG b/docs-antora/modules/opac/assets/images/media/searchfilters1.PNG deleted file mode 100644 index e5cfe323d5..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/searchfilters1.PNG and 
/dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/searchfilters2.PNG b/docs-antora/modules/opac/assets/images/media/searchfilters2.PNG deleted file mode 100644 index 02af8d3d00..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/searchfilters2.PNG and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/srchresultpg2_3.jpg b/docs-antora/modules/opac/assets/images/media/srchresultpg2_3.jpg deleted file mode 100644 index cf1886d2f8..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/srchresultpg2_3.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/srchresultpg3_4.jpg b/docs-antora/modules/opac/assets/images/media/srchresultpg3_4.jpg deleted file mode 100644 index bb21800e32..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/srchresultpg3_4.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/srchresultpg4_7.jpg b/docs-antora/modules/opac/assets/images/media/srchresultpg4_7.jpg deleted file mode 100644 index ceb9783c3c..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/srchresultpg4_7.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/srchresultpg_2.jpg b/docs-antora/modules/opac/assets/images/media/srchresultpg_2.jpg deleted file mode 100644 index 0026285aa5..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/srchresultpg_2.jpg and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/textcn1.png b/docs-antora/modules/opac/assets/images/media/textcn1.png deleted file mode 100644 index 27f19adff8..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/textcn1.png and /dev/null differ diff --git a/docs-antora/modules/opac/assets/images/media/using-opac-view-permalink.png b/docs-antora/modules/opac/assets/images/media/using-opac-view-permalink.png deleted file mode 100644 index 
a81bbee498..0000000000 Binary files a/docs-antora/modules/opac/assets/images/media/using-opac-view-permalink.png and /dev/null differ diff --git a/docs-antora/modules/opac/nav.adoc b/docs-antora/modules/opac/nav.adoc deleted file mode 100644 index 6787fc0c8a..0000000000 --- a/docs-antora/modules/opac/nav.adoc +++ /dev/null @@ -1,12 +0,0 @@ -* xref:opac:introduction.adoc[Using the Public Access Catalog] -** xref:opac:using_the_public_access_catalog.adoc[Using the Public Access Catalog] -** xref:opac:my_lists.adoc[My Lists] -** xref:opac:batch_actions_from_search.adoc[Batch Actions from Search] -** xref:opac:kids_opac.adoc[Kids OPAC] -** xref:opac:catalog_browse.adoc[Catalog Browse] -** xref:opac:advanced_features.adoc[Bibliographic Search Enhancements] -** xref:opac:tpac_meta_record_holds.adoc[TPAC Metarecord Search and Metarecord Level Holds] -** xref:opac:linked_libraries.adoc[Library Information Pages] -** xref:opac:opensearch.adoc[Adding Evergreen Search to Web Browsers] -** xref:opac:search_form.adoc[Adding an Evergreen search form to a web page] - diff --git a/docs-antora/modules/opac/pages/_attributes.adoc b/docs-antora/modules/opac/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/opac/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/opac/pages/advanced_features.adoc b/docs-antora/modules/opac/pages/advanced_features.adoc deleted file mode 100644 index af27cf697c..0000000000 --- a/docs-antora/modules/opac/pages/advanced_features.adoc +++ /dev/null @@ -1,92 +0,0 @@ -= Bibliographic Search Enhancements = -:toc: - -Enhancements to the bibliographic search function enable you to search for records that were created, edited, or deleted within a date range. You can use the catalog interface or the record feed to search for records with specific date ranges. 
- -Note that all dates should be formatted as YYYY-MM-DD and should be included in parentheses. - - -== Use the Catalog to Retrieve Records with Specified Date Ranges: == - - -=== Search by Create Date or Range === - -To find records that were created on or after a specific date, enter the term, create_date, and the date in the catalog search field. For example, to find records that were created on or after April 1, 2013, enter the following into the catalog search field: - - -create_date(2013-04-01) - - -To find records that were created within a specific date range, enter the term, create_date, followed by comma-separated dates in parentheses. For example, to find records that were created between April 1, 2013 and April 8, 2013, enter the following into the catalog search field: - - -create_date(2013-04-01,2013-04-08) - - - - -=== Search by Edit Date or Range === - - -To find records that were edited on or before a specific date, enter the term, edit_date, and the date in the catalog search field. The date should be preceded by a comma. For example, to find records that were edited on or before April 1, 2013, enter the following into the catalog search field: - - -edit_date(,2013-04-01) - - -To find records that were edited on or after a specific date, enter the term, edit_date, and the date in the catalog search field. For example, to find records that were edited on or after April 1, 2013, enter the following into the catalog search field: - - -edit_date(2013-04-01) - - -To find records that were edited within a specific range, enter the term, edit_date, followed by comma-separated dates in parentheses.
For example, to find records that were edited between April 1, 2013 and April 8, 2013, enter the following into the catalog search field: - - -edit_date(2013-04-01,2013-04-08) - - - - -=== Search by Deleted Status === - - -To search for deleted records, enter in your catalog search field the term, edit_date, the date that you want to search, and the term, #deleted. For example, to find records that were deleted on or after April 1, 2013, enter the following into the catalog search field: - -edit_date(2013-04-01)#deleted - - - -To find records that were deleted within a specific range, enter the term, edit_date, followed by comma-separated dates in parentheses. For example, to find records that were deleted between April 1, 2013 and April 8, 2013, enter the following into the catalog search field: - - -edit_date(2013-04-01,2013-04-08)#deleted - - - -== Use a Feed to Retrieve Records with Specified Date Ranges: == - -You can use a feed to retrieve records that were created, edited, or deleted within specific date ranges by adding the dates to the catalog's URL. You can do this manually, or you can write a script that would automatically retrieve this information. - -To manually retrieve records that were created, edited, or deleted within a specific date, enter the terms and dates as specified above within the search terms in the URL. For example, to retrieve records created on or after April 1, 2013, enter the following in your URL: - - -http://test.esilibrary.com/opac/extras/opensearch/1.1/-/html-full?searchTerms=create_date(2013-04-01)&searchClass=keyword - - -NOTE: To retrieve deleted records, replace the # with %23 in your URL. - - -== Binary MARC21 Feeds == -Evergreen's OpenSearch service can return search results in many formats, including HTML, MARCXML, and MODS. As of version 2.4, it can also return results in binary MARC21 format. 
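The feed URLs described above can be assembled mechanically. The following is a minimal shell sketch, not part of the original documentation: the `test.esilibrary.com` hostname and the date range come from the examples above, and it demonstrates the `#` → `%23` substitution the NOTE calls for:

```shell
#!/bin/sh
# Build an OpenSearch feed URL for records deleted within a date range.
# The '#deleted' modifier must be percent-encoded as '%23' inside a URL.
HOST="test.esilibrary.com"
TERM="edit_date(2013-04-01,2013-04-08)#deleted"

# Replace the literal '#' with its percent-encoded form.
ENCODED=$(printf '%s' "$TERM" | sed 's/#/%23/g')

URL="http://${HOST}/opac/extras/opensearch/1.1/-/html-full?searchTerms=${ENCODED}&searchClass=keyword"
echo "$URL"
```

A harvesting script could loop over date ranges this way to retrieve recently created, edited, or deleted records, as the text suggests.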
- -When making an HTTP request to an Evergreen system using the OpenSearch API, you must include the term "marc21" in the appropriate location within the URL to retrieve a feed of MARC21 records in a binary format. The following example demonstrates the appropriate form of the URL: - -http://test.esilibrary.com/opac/extras/opensearch/1.1/-/marc21?searchTerms=create_date%282013-04-01%29&searchClass=keyword - -You can add this term manually to the URL produced by a catalog search, or you can create a script that would retrieve this information automatically. - - - - - diff --git a/docs-antora/modules/opac/pages/batch_actions_from_search.adoc b/docs-antora/modules/opac/pages/batch_actions_from_search.adoc deleted file mode 100644 index c7da7bb19e..0000000000 --- a/docs-antora/modules/opac/pages/batch_actions_from_search.adoc +++ /dev/null @@ -1,108 +0,0 @@ -[#batch_actions_from_search] -= Batch Actions from Search = -:toc: - -== Introduction == - -The public catalog and staff interface display checkboxes on the search results pages, both for bibliographic records and metarecord constituents. Selecting one or more titles with these checkboxes adds the titles to a basket, which is viewable on the search bar as an icon. Users can then take a variety of actions on titles within the basket: place holds, print or email title details, add the items to a permanent list (from the public catalog) or add the titles to a bucket (from the staff interface). - - -== Using Batch Actions from Search in the Public Catalog == - -. Perform a search in the public catalog and retrieve a list of results. -+ -Checkboxes appear to the left of the number of each result. In the case of a metarecord search, checkboxes only appear on the list of metarecord constituents, as metarecords themselves cannot be placed in lists or in baskets. If you want to place the entire page of results on the list, click the _Select All_ checkbox at the top of the results list. -+ - -. 
Select one or more titles from the results list by clicking on the checkboxes. -+ -Selected titles are automatically added to the basket. A link above the results list tracks the number of titles selected and added to the basket. -+ -image::media/BatchActionsSearch-01.png[Selecting Search Results] -+ - -. The number of items can also be found with the basket icon above the search bar, next to the _Basket Actions_ drop-down. -+ -image::media/BatchActionsSearch-02.png[Basket Actions Drop-down] -+ - -. Click on the _Basket Actions_ drop-down next to the basket icon to take any of the following actions on titles within the basket: View Basket, Place Hold, Print Title Details, Email Title Details, Add Basket to Saved List, Clear Basket. - -image::media/BatchActionsSearch-03.png[Details of Basket Actions Drop-down] - - -=== Actions Initiated with the Basket Actions Drop-down === -* *View Basket* - This opens the basket in a new screen. Checkboxes allow for the selection of one or more titles within the basket. A drop-down menu appears above the list of titles that can be used to place holds, print title details, email title details, or remove titles from the basket. This menu reads _Actions for these items_. (See the next section for more information about this menu.) - -* *Place Hold* - This allows for placement of holds in batch for all of the items in the basket. If not already authenticated, users will be asked to login. Once authenticated, the holds process begins for all titles within the basket. Users can set _Advanced Hold Options_ for each title, as well as set the pickup location, hold notification and suspend options. - -* *Print Title Details* - This allows for printing details of all titles within the basket. A confirmation page opens prior to printing that includes a checkbox option for clearing the basket after printing. - -* *Email Title Details* - This allows for emailing details of all titles within the basket. 
If not already authenticated, users will be asked to login. Once authenticated, the email process begins. A confirmation page opens prior to emailing that includes a checkbox option for clearing the basket after emailing. - - -* *Add Basket to Saved List* - This allows basket items to be saved to a new permanent list. If not already authenticated, users will be asked to login. Once authenticated, the creation of a new permanent list begins. - - -* *Clear Basket* - This removes all titles from the basket. - -=== View Basket -> _Actions for These Items_ Drop-down Menu === -Most actions described above can be taken on titles from within the basket with the _Actions for these items_ drop-down menu. This menu offers additional flexibility, as users can select some or all of the individual titles in the basket on which to place holds, print or email details, or remove from the basket. Users cannot add titles to permanent lists with this menu. - -image::media/BatchActionsSearch-04.png[Actions for These Items Drop-down Menu] - -== Using Batch Actions from Search in the Staff Interface == - -. Perform a search in the staff interface and retrieve a list of results. -+ -Checkboxes appear to the left of the number of each result. In the case of a metarecord search, checkboxes only appear on the list of metarecord constituents, as metarecords themselves cannot be placed in lists or in baskets. If you want to place the entire page of results on the list, click the Select All checkbox at the top of the results list. -+ - -. Select one or more titles from the results list by clicking on the checkboxes. Selected titles are automatically added to the basket. A link above the results list tracks the number of titles selected and added to the basket. -+ -image::media/BatchActionsSearch-01.png[Selecting Search Results] -+ - -. The number of items can also be found with the basket icon above the search bar, next to the _Basket Actions_ drop-down.
-+ - -image::media/BatchActionsSearch-02.png[Basket Actions Drop-down] -+ - -. Click on the _Basket Actions_ drop-down next to the basket icon to take any of the following actions on titles within the basket: View Basket, Place Hold, Print Title Details, Email Title Details, Add Basket to Bucket, Clear Basket. - -image::media/BatchActionsSearch-03.png[Details of Basket Actions Drop-down] - - -=== Actions Initiated with the Basket Actions Drop-down === - -* *View Basket* - This opens the basket in a new screen. Checkboxes allow for the selection of one or more titles within the basket. A drop-down menu appears above the list of titles that can be used to place holds, print title details, email title details, or remove titles from the basket. This menu reads _Actions for these items_. (See the next section for more information about this menu.) - -* *Place Hold* - This allows for placement of holds in batch for all of the items in the basket. When initiated, the holds process begins for all titles within the basket. Staff can set _Advanced Hold Options_ for each title placed on hold, as well as set the pickup location, hold notification and suspend options. - -* *Print Title Details* - This allows for printing details of all titles within the basket. A confirmation page opens prior to printing that includes a checkbox option for clearing the basket after printing. - -* *Email Title Details* - This allows for emailing details of all titles within the basket. A confirmation page opens prior to emailing that includes a checkbox option for clearing the basket after emailing. - -* *Add Basket to Bucket* - This allows for titles within the basket to be added to an existing or new Record Bucket. -** Click the _Basket Actions_ drop-down and choose _Add Basket to Bucket_ -** To add the titles in your basket to an existing bucket, select the bucket from the _Name of existing bucket_ dropdown and click _Add to Select Bucket_.
-** To add the titles in your basket to a new bucket, enter the name of your new bucket in the text box and click _Add to New Bucket_. -+ -image::media/BatchActionsSearch-06.png[Add Basket Titles to Bucket] -+ -* *Clear Basket* - This removes all items from the basket. - - -=== View Basket -> Actions for These Items Drop-down Menu === - -Most of the basket actions can be taken on titles from within the basket with the _Actions for these items_ drop-down menu. This menu offers additional flexibility, as staff can select some or all of the individual titles within the basket on which to place holds, print or email details, or remove from the basket. Staff cannot place titles in Record Buckets from this menu. - -== Additional Information == - -The basket used to be called a *Temporary List* in previous versions of Evergreen. - -Titles may also be added from the detailed bibliographic record with the _Add to Basket_ link. - -Javascript must be enabled for checkboxes to appear in the public catalog; however, users can still add items to the basket and perform batch actions without Javascript. - -The default limit on the number of basket titles is 500; however, a template config.tt2 setting (+ctx.max_basket_size+) can be used to set a different limit. When the configured limit is reached, checkboxes are disabled until some titles in the basket are removed. - -The permanent list management page within a patron’s account also now includes batch print and email actions. diff --git a/docs-antora/modules/opac/pages/catalog_browse.adoc b/docs-antora/modules/opac/pages/catalog_browse.adoc deleted file mode 100644 index 85b8c8178b..0000000000 --- a/docs-antora/modules/opac/pages/catalog_browse.adoc +++ /dev/null @@ -1,31 +0,0 @@ -= Catalog Browse = -:toc: - -*Abstract* - -Catalog Browse enables you to browse bibliographic headings available in your catalog. You can click the hyperlinked bibliographic headings to retrieve catalog records that contain these headings.
Also, if a given bibliographic heading is linked to an authority record, and if that authority is linked to another one via the first authority's See and See Also tags, the additional variants of (e.g.) an author's name will appear in your search results. - - -*Use Catalog Browse* - -. To access this feature, navigate to the catalog search page, and click the link, *Browse the Catalog*. By default, you can browse by title, author, subject, or series. System administrators can revise this list by editing the file at the location 'opac/parts/qtype_selector.tt2', and they can even make use of custom indices based on definitions in the database's 'config.metabib_field' table. - - -. Enter a term or part of a term to browse. Evergreen will retrieve a list of bibliographic headings that match your query. Click the *Back* and *Forward* buttons to page through your results. To limit your browse results to a specific branch or copy location group, select the appropriate unit from the drop down menu, and click *Go*. - -. Select a link from the search results. Each linked heading displays the number of bibliographic records associated with the heading. Appropriate information from linked authority records, if any, appears below the main entry heading. - -. To return to your list of results, click the browser's back button or *Browse the Catalog*. Evergreen will return you to your previous position in your list of results. - - - -*Administration* - -A new global flag warns users when they are entering a browse term that begins with an article. Systems administrators can create a regular expression to configure articles matched with specific indices that would prompt a warning for the user. By default, this setting is not enabled. - -. To enable this feature, click *Administration* -> *Server Administration* -> *Global Flags*. - -. Double click *Map of search classes to regular expressions to warn user about leading articles.* - -. Make changes, and click *Save*.
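The global flag above maps search classes to regular expressions. The pattern below is purely a hypothetical illustration of the kind of leading-article check involved; the real flag value is whatever the site configures in the Global Flags interface:

```shell
#!/bin/sh
# Hypothetical leading-article pattern for a title browse; the actual
# regular expression is whatever the site stores in the global flag.
PATTERN='^(a|an|the) '

warn_if_article() {
    # Print a warning when the browse term begins with an article.
    if printf '%s\n' "$1" | grep -Eiq "$PATTERN"; then
        echo "warning: '$1' begins with an article"
    else
        echo "ok: '$1'"
    fi
}

warn_if_article "The Great Gatsby"
warn_if_article "Great Gatsby"
```

A site could vary the pattern per index, for example warning on leading articles in title browses but not in author browses.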
- diff --git a/docs-antora/modules/opac/pages/introduction.adoc b/docs-antora/modules/opac/pages/introduction.adoc deleted file mode 100644 index 4c2e5e7f9e..0000000000 --- a/docs-antora/modules/opac/pages/introduction.adoc +++ /dev/null @@ -1,13 +0,0 @@ -= Introduction = -:toc: - -Evergreen has a public OPAC that meets WCAG guidelines -(http://www.w3.org/WAI/intro/wcag), which helps make the OPAC accessible to -users with a range of disabilities. This part of the documentation explains how -to use the Evergreen public OPAC. It covers the basic catalog and more advanced -search topics. It also describes the ``My Account'' tools users have to find -information and manage their personal library accounts through the OPAC. This -section could be used by staff and patrons but would be more useful for staff as -a generic reference when developing custom guides and tutorials for their users. - - diff --git a/docs-antora/modules/opac/pages/kids_opac.adoc b/docs-antora/modules/opac/pages/kids_opac.adoc deleted file mode 100644 index 8cd50373f2..0000000000 --- a/docs-antora/modules/opac/pages/kids_opac.adoc +++ /dev/null @@ -1,193 +0,0 @@ -= Kids OPAC = -:toc: - -== Introduction == - -The Kids OPAC (KPAC) is a public catalog search that was designed for children -and teens. Colorful menu items, large buttons, and simple navigation make this -an appealing search interface for kids. Librarians will appreciate the flexible -configuration of the KPAC. Librarians can create links to canned search results -for kids and can apply these links by branch. The KPAC uses the same infrastructure -as the Template Toolkit OPAC (TPAC), the adult catalog search, so you can easily -extend the KPAC using the code that already exists in the TPAC. Finally, third -party content, such as reader reviews, can be integrated into the KPAC. - -== Choose a Skin == - -Two skins, or design interfaces, have been created for the KPAC. The KPAC was -designed to run multiple skins on a single web server.
A consortium, then, could -allow each library system to choose a skin for their patrons. - -*Default Skin:* - -In this skin, the search bar is the focal point of the top panel and is centered -on the screen. The search grid appears beneath the search bar. Help and Login -links appear at the top right of the interface. You can customize the appearance -and position of these links with CSS. After you log in, the user name is displayed -in the top right corner, and the Login link becomes an option to Logout. - -image::media/Kids_OPAC1.jpg[Kids_OPAC1] - -*Alternate Monster Skin:* - -In this skin, the search bar shares the top panel with a playful monster. The -search grid appears beneath the search bar. Help and Login links appear in bold -colors at the top right of the interface, although you can customize these with CSS. -After you log in, the Login button disappears. - -image::media/Kids_OPAC2.jpg[Kids_OPAC2] - - -== Search the Catalog == - -You can search the catalog using only the search bar, the search grid, or the search -bar and the collection drop down menu. - - -*Search using the Search Bar* - -To search the catalog from the home page, enter text into the search bar in the -center of the main page, or enter text into the search bar to the right of the -results on a results page. Search indices are configurable, but the default search -indices include author, title and (key)word. - -You can use this search bar to search the entire catalog, or, using the configuration -files, you can apply a filter so that search queries entered here retrieve records -that meet specific criteria, such as child-friendly copy locations or MARC audience -codes. - - -*Search using the Grid* - -From the home page, you can search the catalog by clicking on the grid of icons. -An icon search can link to an external web link or to a canned search. For example, -the icon, Musical Instruments, could link to the results of a catalog search on -the subject heading, Musical instruments.
- -The labels on the grid of icons and the content that they search are configurable -by branch. You can use the grid to search the entire catalog, or, using the -configuration files, you can apply a filter so that search queries entered here -retrieve records associated with specific criteria, such as child-friendly copy -locations or MARC audience codes. - - -image::media/Kids_OPAC4.jpg[Kids_OPAC4] - - -You can add multiple layers of icons and searches to your grid: - - -image::media/Kids_OPAC5.jpg[Kids_OPAC5] - - - -*Search using the Search Bar and the _Collection_ Drop Down Menu* - -On the search results page, a search bar and drop down menu appear on the right -side of the screen. You can enter a search term into the search bar and select -a collection from the drop down menu to search these configured collections. -Configured collections might provide more targeted searching for your audience -than a general catalog search. For example, you could create collections by shelving -location or by MARC audience code. - - -image::media/Kids_OPAC17.jpg[Kids_OPAC17] - - -Using any search method, the search results display in the center of the screen. -Brief information displays beneath each title in the initial search result. The -brief information that displays, such as title, author, or publication information, -is configurable. - - -image::media/Kids_OPAC6.jpg[Kids_OPAC6] - - -For full details on a title, click *More Info*. The full details view displays the -configured fields from the title record and copy information. Click *Show more -copies* to display up to fifty results. Use the breadcrumbs at the top to trace -your search history. - - -image::media/Kids_OPAC7.jpg[Kids_OPAC7] - - - -== Place a Hold == - -From the search results, click the *Get it!* link to place a hold.
- - -image::media/Kids_OPAC11.jpg[Kids_OPAC11] - - -The brief information about the title appears, and, if you have not yet logged in, -the *Get It!* panel appears with fields for username and password. Enter the username -and password, and select the pick up library. Then click *Submit*. If you have -already logged into your account, you need only to select the pick up location, -and click *Submit*. - - -image::media/Kids_OPAC12.jpg[Kids_OPAC12] - - -A confirmation of hold placement appears. You can return to the previous record -or to your search results. - - -image::media/Kids_OPAC13.jpg[Kids_OPAC13] - - - -== Save Items to a List == - -You can save items to a temporary list, or, if you are logged in, you can save to -a list of your own creation. To save items to a list, click the *Get it* button -on the Search Results page. - - -image::media/Kids_OPAC14.jpg[Kids_OPAC14] - - -Select a list in the *Save It!* panel beneath the brief information, and click *Submit*. - - -image::media/Kids_OPAC16.jpg[Kids_OPAC16] - - -A confirmation of the saved item appears. To save the item to a list or to manage -the lists, click the *My Lists* link to return to the list management feature in -the TPAC. - - -image::media/Kids_OPAC15.jpg[Kids_OPAC15] - - - -== Third Party Content == - -Third party content, such as reader reviews, can be viewed in the Kids OPAC. The -reviews link appears adjacent to the brief information. - -image::media/Kids_OPAC8.jpg[Kids_OPAC8] - - -Click the Reviews link to view reader reviews from a third party source. The reader -reviews open beneath the brief information. - - -image::media/Kids_OPAC9.jpg[Kids_OPAC9] - - -Summaries and reviews from other publications appear in separate tabs beneath the -copy information. 
-
-
-image::media/Kids_OPAC10.jpg[Kids_OPAC10]
-
-== Configuration Files ==
-
-Configuration files allow you to define labels for canned searches in the icon
-grid, determine how icons lead users to new pages, and define whether those icons
-are canned searches or links to external resources. Documentation describing how
-to use the configuration files is available in the Evergreen repository.
diff --git a/docs-antora/modules/opac/pages/linked_libraries.adoc b/docs-antora/modules/opac/pages/linked_libraries.adoc
deleted file mode 100644
index 0e19f15533..0000000000
--- a/docs-antora/modules/opac/pages/linked_libraries.adoc
+++ /dev/null
@@ -1,44 +0,0 @@
-= Library Information Pages =
-:toc:
-
-The branch name displayed in the copy details section of the search results
-page, the record summary page, and the kids catalog record summary page will
-link to a library information page. This page is located at
-`http://hostname/eg/opac/library/<shortname>` and at
-`http://hostname/eg/opac/library/<id>`.
-
-Evergreen automatically generates this page based on information entered in
-*Administration* -> *Server Administration* -> *Organizational Units* (actor.org_unit).
-
-The library information page displays:
-
-* The name of the library
-* Opening hours
-* E-mail address
-* Phone number
-* Mailing address
-* The branch's parent library system
-
-An Evergreen site can also display a link to the library's web site on the
-information page.
-
-To display a link:
-
-. Go to *Administration* -> *Local Administration* -> *Library Settings Editor*.
-. Edit the *Library Information URL* setting for the branch.
-[NOTE]
-If you set the URL at the system level, that URL will be used as the link for
-the system and for all child branches that do not have their own URL set.
-. Enter the URL in the following format: http://example.com/about.html.
-
-An Evergreen site may also opt to link directly from the copy details section
-of the catalog to the library web site, bypassing the automatically-generated
-library information page. To do so:
-
-. Add the library's URL to the *Library Information URL* setting as described
-above.
-. Go to *Administration* -> *Local Administration* -> *Library Settings Editor*.
-. Set the *Use external "library information URL" in copy table, if available*
-setting to true.
-
-The library information pages publish schema.org structured data, as do parts of the OPAC bibliographic record views, which can enable search engines and other systems to better understand your libraries and their resources.
diff --git a/docs-antora/modules/opac/pages/my_account.adoc b/docs-antora/modules/opac/pages/my_account.adoc
deleted file mode 100644
index 8bc15502fb..0000000000
--- a/docs-antora/modules/opac/pages/my_account.adoc
+++ /dev/null
@@ -1,300 +0,0 @@
-
-[#my_account]
-= My Account =
-:toc:
-
-// ``First Login Password Update'' the following documentation comes from JSPAC
-// as of 2013-03-12 this feature did not exist in EG 2.4 TPAC,
-// so I am commenting it out for now because it will be added in the future
-// see bug report https://bugs.launchpad.net/evergreen/+bug/1013786
-// Yamil Suarez 2013-03-12
-
-////
-
-
-== First Login Password Update ==
-
-
-indexterm:[my account, first login password update]
-
-Patrons are given temporary passwords when new accounts are created, or
-forgotten passwords are reset by staff. Patrons MUST change their password to
-something more secure when they log in for the first time. Once the password
-is updated, they will not have to repeat this process for subsequent logins.
-
-. Open a web browser and go to your Evergreen OPAC.
-. Click My Account.
-. Enter your _Username_ and _Password_.
- * By default, your username is your library card number.
- * Your password is a 4-digit code provided when your account was created.
If
-you have forgotten your password, contact your library to have it reset or use
-the online <<password_reset,Password Reset>> tool.
-////
-
-
-== Logging In ==
-
-indexterm:[my account, logging in]
-
-Logging into your account from the online catalog:
-
-. Open a web browser and navigate to your Evergreen OPAC.
-. Click _My Account_.
-. Enter your _Username_ and _Password_.
-** By default, your username is your library card number.
-** Your password is a 4-digit code provided when your account was created. If
-you have forgotten your password, contact your local library to have it reset or
-use the <<password_reset,Password Reset>> tool.
-. Click Login.
-+
-** At the first login, you may be prompted to change your password.
-** If you updated your password, you must enter your _Username_ and _Password_
-again.
-+
-. Your _Account Summary_ page displays.
-
-
-To view your account details, click one of the _My Account_ tabs.
-
-To start a search, enter a term in the search box at the top of the page and
-click _Search_!
-
-[CAUTION]
-=================
-If using a public computer, be sure to log out!
-=================
-
-[#password_reset]
-
-=== Password Reset ===
-
-indexterm:[my account, password reset]
-
-
-To reset your password:
-
-. Click the _Forgot your password?_ link located beside the login button.
-
-. Fill in the _Barcode_ and _User name_ text boxes.
-
-. A message should appear indicating that your request has been processed and
-that you will receive an email with further instructions.
-
-. An email will be sent to the email address you have registered with your
-Evergreen library. You should click on the link included in the email to open
-the password reset page. Processing time may vary.
-+
-[NOTE]
-=================
-You will need to have a valid email account set up in Evergreen for you to reset
-your password. Otherwise, you will need to contact your library to have your
-password reset by library staff.
-=================
-+
-
-. 
On the password reset page, enter the new password in the _New
-password_ field and re-enter it in the _Re-enter new password_ field.
-
-. Click _Submit_.
-
-. A message should appear on the page indicating that your password has been reset.
-
-. Log in to your account with your new password.
-
-
-== Account Summary ==
-
-indexterm:[my account, account summary]
-
-In the *My Account* -> *Account Summary* page, you can see when your account
-expires and your total number of items checked out, items on hold, and items
-ready for pickup. In addition, the Account Summary page lists your current fines
-and payment history.
-
-
-== Items Checked Out ==
-
-indexterm:[my account, items checked out]
-
-Users can manage items currently checked out, such as renewing specific items. Users
-can also view overdue items and see how many renewals they have remaining for a
-specific item.
-
-As of Evergreen version 2.9, sorting of selected columns is available in the
-_Items Checked Out_ and _Check Out History_ pages. Clicking on the appropriate
-column headers sorts the contents from "ascending" to "descending" to "no sort".
-(The "no sort" option restores the original list as presented in the screen.) The sort
-indicator (an up or down arrow) is placed to the right of the column header, as
-appropriate.
-
-Within *Items Checked Out* -> *Current Items Checked Out*, the following column
-headers can be sorted: _Title_, _Author_, _Renewals Left_, _Due Date_,
-_Barcode_, and _Call Number_.
-
-Within *Items Checked Out* -> *Check Out History*, the following column headers
-can be sorted: _Title_, _Author_, _Checkout Date_, _Due Date_, _Date Returned_,
-_Barcode_, and _Call Number_.
-
-[NOTE]
-==========
-To protect patron privacy, the Check Out History will be completely blank unless the patron has previously opted in under the _Account Preferences_ tab, in the _Search and History Preferences_
-area.
-========== - - -== Holds == - -indexterm:[my account, holds] - -From *My Account*, patrons can see *Items on Hold* and *Holds History* and -manage items currently being requested. In *Holds* -> *Items on Hold*, the -content shown can be sorted by clicking on the following column headers: -_Title_, _Author_, and _Format_ (based on format name represented by the icon). - -Actions include: - -* Suspend - set a period of time during which the hold will not become active, -such as during a vacation -* Activate - manually remove the suspension -* Cancel - remove the hold request - -Edit options include: - -* Change pick up library -* Change the _Cancel unless filled by_ date, also known as the hold expiration -date -* Change the status of the hold to either active or suspended. -* Change the _If suspended, activate on_ date, which reactivates a suspended -hold at the specified date - -To edit items on hold: - -. Login to _My Account_, click the _Holds_ tab. -. Select the hold to modify. -. Click _Edit_ for selected holds. -. Select the change to make and follow the instructions. - -[NOTE] -========== -To protect patron privacy, the Holds History will be completely blank unless the patron has previously opted in under the _Account Preferences_ tab, in the _Search and History Preferences_ -area. -========== - -== Account Preferences == - -indexterm:[my account, account preferences] - -From here you can manage display preferences including your *Personal -Information*, *Notification Preferences*, and *Search and History Preferences*. -Additional static information, such as your _Account Expiration Date_, can be -found under Personal Information. - -For example: - -* Personal Information - -** change password - allows patrons to change their password - -** change email address - allows patrons to change their email address. - - - -* Notification Preferences - -** _Notify by Email_ by default when a hold is ready for pickup? 
-
-** _Notify by Phone_ by default when a hold is ready for pickup?
-
-** _Default Phone Number_
-
-
-* Search and History Preferences
-
-** Search hits per page
-
-** Preferred pickup location
-
-** Keep history of checked out items?
-
-** Keep history of holds?
-
-[WARNING]
-========
-Turning off the _Keep history of checked out items?_ or _Keep history of holds?_ features will permanently delete all entries in the relevant patron screens. After this is unchecked,
-there is no way for a patron to recover that data.
-========
-
-
-After changing any of these settings, you must click _Save_ to store your
-preferences.
-
-=== Authorize other people to use your account ===
-
-indexterm:[Allow others to use my account]
-indexterm:[checking out,materials on another patron's account]
-indexterm:[holds,picking up another patron's]
-indexterm:[privacy waiver]
-
-
-If your library has enabled it, you can authorize other people to use
-your account. In the Search and History Preferences tab
-under Account Preferences, find the section labeled "Allow others to use
-my account". Enter the person's name and indicate whether they are
-allowed to place holds, pick up holds, view
-borrowing history, and check out items on your account. This
-information will also be visible to circulation staff at your library.
-
-
-
-indexterm:[holds, preferred pickup location]
-
-== Patron Messages ==
-
-The Patron Message Center provides a way for libraries to communicate with
-patrons through messages that can be accessed through the patron's OPAC account.
-Library staff can create messages manually by adding an OPAC-visible Patron
-Note to an account. Messages can also be automatically generated through an
-Action Trigger event. Patrons can access and manage messages within their OPAC
-account. See Circulation - Patron Record - Patron Message Center for more
-information on adding messages to patron accounts.
- -*Viewing Patron Messages in the OPAC* - -Patrons will see a new tab for *Messages* in their OPAC account, as well as a -notification of *Unread Messages* in the account summary. - -image::media/message_center11.PNG[Message Center 11] - -Patrons will see a list of the messages from the library by clicking on the -*Messages* tab. - -image::media/message_center10.PNG[Message Center 10] - -Patrons can click on a message *Subject* to view the message. After viewing the -message, it will automatically be marked as read. Patrons have the options to -mark the message as unread and to delete the message. - -image::media/message_center12.PNG[Message Center 12] - -NOTE: Patron deleted messages will still appear in the patron's account in the -staff client under Other -> Message Center. - -== Reservations == - -When patrons place a reservation for a particular item at a particular time, -they can check on its status using the *Reservations* tab. - -After they initially place a reservation, its status will display as _Reserved_. -After staff capture the reservation, the status will change to _Ready for Pickup_. -After the patron picks up the reservation, the status will change to _Checked Out_. -Finally, after the patron returns the item, the reservation will be removed from -the list. - -[NOTE] -==================== -This interface pulls its timezone from the Library -Settings Editor. Make sure that you have a timezone -listed for your library in the Library Settings Editor -before using this feature. -==================== - diff --git a/docs-antora/modules/opac/pages/my_lists.adoc b/docs-antora/modules/opac/pages/my_lists.adoc deleted file mode 100644 index 5be9c21e41..0000000000 --- a/docs-antora/modules/opac/pages/my_lists.adoc +++ /dev/null @@ -1,68 +0,0 @@ -= My Lists = -:toc: - -The *My Lists* feature replaces the bookbag feature that was available in versions prior to 2.2. The *My Lists* feature is a part of the Template Toolkit OPAC that is available in version 2.2. 
This feature enables you to create temporary and permanent lists; create and edit notes for items in lists; place holds on items in lists; and share lists via RSS feeds and CSV files.
-
-There is now a direct link to *My Lists* from the *My Account* area in the top right part of the screen. This gives users the ability to quickly access their lists while logged into the catalog.
-
-As of version 3.2, xref:opac:batch_actions_from_search.adoc#batch_actions_from_search[Batch Actions from Search Results] has replaced the old Temporary Lists feature, as well as enabled multiple selections from a search results list.
-
-image::media/My_Lists.png[My Lists]
-
-== Create New Lists ==
-
-1) Log in to your account in the OPAC.
-
-2) Search for titles.
-
-3) Choose a title to add to your list. Click *Add to My List*.
-
-image::media/My_Lists1.jpg[Add to My List]
-
-4) Select an existing list, or create a new list.
-
-image::media/My_Lists_dd.png[List Dropdown]
-
-5) Scroll up to the top of the screen and click *My Lists*. Click on the name of your list to see any titles added to it.
-
-6) The *Actions for these items* menu on the left side of the screen shows the actions that you can apply to this list. You can place holds on titles in your list, print or email title details of titles in your list, and remove titles from your list.
-
-To perform actions on multiple list rows, check the box adjacent to the title of the item, and select the desired function.
-
-image::media/My_Lists3.jpg[List Actions]
-
-7) Click *Edit* to add or edit a note.
-
-8) Enter desired notes, and click *Save Notes*.
-
-image::media/My_Lists6.jpg[List Notes]
-
-9) You can keep your list private, or you can share it. To share your list, click *Share*, and click the orange RSS icon to share through an RSS reader. You can also click *HTML View* to share your list as an HTML link.
-
-You can also download your list into a CSV file by clicking *Download CSV*.
-
-image::media/My_Lists7.jpg[Share, Delete, Download List]
-
-10) When you no longer need a list, click *Delete List*.
-
-
-== Local Call Number in My Lists ==
-
-When a title is added to a list in the TPAC, a local call number will be displayed in the list to assist patrons in locating the physical item. Evergreen will look at the following locations to identify the most relevant call number to display in the list:
-
-1) Physical location - the physical library location where the search takes place
-
-2) Preferred library - the Preferred Search Location, which is set in patron OPAC account Search and History Preferences, or the patron's Home Library
-
-3) Search library - the search library or org unit that is selected in the OPAC search interface
-
-The call number that is displayed will be the most relevant call number to the searcher. If the patron is searching at the library, Evergreen will display a call number from that library location. If the patron is not searching at a library, but is logged in to their OPAC account, Evergreen will display a call number from their Home Library or Preferred Search Location. If the patron is not searching at the library and is not signed in to their OPAC account, then Evergreen will display a call number from the org unit, or library, that they choose to search in the OPAC search interface.
-
-The local call number and associated library location will appear in the list:
-
-image::media/my_list_call_numbers.png[Local Call Number in List]
-
-== My Lists Preferences ==
-
-Patrons can adjust the number of lists or list items displayed in a page. This setting can be found under the *Account Preferences* tab, in the *My Lists Preferences* section.
-
-
diff --git a/docs-antora/modules/opac/pages/new_skin_customizations.adoc b/docs-antora/modules/opac/pages/new_skin_customizations.adoc
deleted file mode 100644
index 2e7872966e..0000000000
--- a/docs-antora/modules/opac/pages/new_skin_customizations.adoc
+++ /dev/null
@@ -1,131 +0,0 @@
-= Creating a New Skin: the Bare Minimum =
-:toc:
-
-== Introduction ==
-
-When you adopt the TPAC as your catalog, you must create a new skin. This
-involves a combination of overriding template files and setting Apache
-directives to control the look and feel of your customized TPAC.
-
-== Apache directives ==
-There are a few Apache directives and environment variables of note for
-customizing TPAC behavior. These directives should generally live within a
-`<VirtualHost>` section of your Apache configuration.
-
-* `OILSWebDefaultLocale` specifies which locale to display when a user lands
-  on a page in the TPAC and has not chosen a different locale from the TPAC
-  locale picker. The following example shows the `fr_ca` locale being added
-  to the locale picker and being set as the default locale:
-+
------------------------------------------------------------------------------
-PerlAddVar OILSWebLocale "fr_ca"
-PerlAddVar OILSWebLocale "/openils/var/data/locale/opac/fr-CA.po"
-PerlAddVar OILSWebDefaultLocale "fr-CA"
------------------------------------------------------------------------------
-+
-* `physical_loc` is an Apache environment variable that sets the default
-  physical location, used for setting search scopes and determining the order
-  in which copies should be sorted. The following example demonstrates the
-  default physical location being set to library ID 104:
-+
------------------------------------------------------------------------------
-SetEnv physical_loc 104
------------------------------------------------------------------------------
-
-== Customizing templates ==
-When you install Evergreen, the TPAC templates include many placeholder images,
-text, and links.
You should override most of these to provide your users with a -custom experience that matches your library. Following is a list of templates -that include placeholder images, text, or links that you should override. - -NOTE: All paths are relative to `/openils/var/templates/opac` - -[[configtt2]] - -* `parts/config.tt2`: contains many configuration settings that affect the - behavior of the TPAC, including: - ** hiding the *Place Hold* button for available items - ** enabling RefWorks support for citation management - ** adding OpenURL resolution for electronic resources - ** enabling Google Analytics tracking for your TPAC - ** displaying the "Forgot your password?" prompt - ** controlling the size of cover art on the record details page - ** defining which facets to display, and in which order - ** controlling basic and advanced search options - ** controlling if the "Show More Details" button is visible or activated by -default in OPAC search results - ** hiding phone notification options (useful for libraries that do not do -phone notifications) - ** disallowing password or e-mail changes (useful for libraries that use -centralized authentication or single sign-on systems) - ** displaying a maintenance message in the public catalog and KPAC (this is -controlled by the _ctx.maintenance_message_ variable) - ** displaying previews of books when available from Google Books. This is -controlled by the _ctx.google_books_preview_ variable, which is set to 0 by -default to protect the privacy of users who might not want to share their -browsing behavior with Google. - ** disabling the "Group Formats and Editions" search. This is controlled by -setting the metarecords.disabled variable to 1. - ** setting the default search to a 'Group Formats and Editions' search. This -is done by setting the search.metarecord_default variable to 1. -* `parts/footer.tt2` and `parts/topnav_links.tt2`: contains customizable - links. 
Defaults like 'Link 1' will not mean much to your users!
-* `parts/homesearch.tt2`: holds the large Evergreen logo on the home page
-  of the TPAC. Substitute your library's logo, or if you are adventurous,
-  create a "most recently added items" carousel... and then share your
-  customization with the Evergreen community.
-* `parts/topnav_logo.tt2`: holds the small Evergreen logo that appears on the
-  top left of every page in the TPAC. You will also want to remove or change
-  the target of the link that wraps the logo and leads to the
-  http://evergreen-ils.org[Evergreen site].
-* `parts/login/form.tt2`: contains some assumptions about terminology and
-  examples that you might prefer to change to be more consistent with your own
-  site's existing practices. For example, you may not use 'PIN' at your library
-  because you want to encourage users to use a password that is more secure than
-  a four-digit number.
-* `parts/login/help.tt2`: contains links that point to http://example.com,
-  images with text on them (which is not an acceptable practice for
-  accessibility reasons), and promises of answers to frequently asked questions
-  that might not exist at your site.
-* `parts/login/password_hint.tt2`: contains a hint about your users' password
-  on first login that is misleading if your library does not set the initial
-  password for an account to the last four digits of the phone number associated
-  with the account.
-* `parts/myopac/main_refund_policy.tt2`: describes the policy for refunds for
-  your library.
-* `parts/myopac/prefs_hints.tt2`: suggests that users should have a valid email
-  on file so they can receive courtesy and overdue notices. If your library
-  does not send out email notices, you should edit this to avoid misleading your
-  users.
-* `myopac/update_password_msg.tt2`: defines the password format that needs
-  to be used when setting a user password.
If your Evergreen site has set a
-  _Password format_ regex in the Library Settings Editor, you
-  should update the language to describe the format that should be used.
-* `password_reset.tt2`: in the msg_map section, you might want to change the
-  NOT_STRONG text that appears when the user tries to set a password that
-  does not match the required format. Ideally, this message will tell the user
-  how they should format the password.
-* `parts/css/fonts.tt2`: defines the font sizes for the TPAC in terms of one
-  base font size, and all other sizes derived from that in percentages. The
-  default is 12 pixels, but http://goo.gl/WfNkE[some design sites] strongly
-  suggest a base font size of 16 pixels. Perhaps you want to try '1em' as a
-  base to respect your users' preferences. You only need to change one number
-  in this file if you want to experiment with different options for your users.
-* `parts/css/colors.tt2`: chances are your library's official colors do not
-  match Evergreen's wall of dark green. This file defines the colors in use in
-  the standard Evergreen template. In theory you should be able to change just
-  a few colors and everything will work, but in practice you will need to
-  experiment to avoid light-gray-on-white low-contrast combinations.
-
-The following are templates that are less frequently overridden, but some
-libraries benefit from the added customization options.
-
-* `parts/advanced/numeric.tt2`: defines the search options of the Advanced
-Search > Numeric search. If you wanted to add a bib call number search option,
-which is different from the item copy call number, you would add the following
-code to `numeric.tt2`.
-+ ------------------------------------------------------------------------------- - ------------------------------------------------------------------------------- - diff --git a/docs-antora/modules/opac/pages/opensearch.adoc b/docs-antora/modules/opac/pages/opensearch.adoc deleted file mode 100644 index 18883cd1e1..0000000000 --- a/docs-antora/modules/opac/pages/opensearch.adoc +++ /dev/null @@ -1,34 +0,0 @@ -= Adding Evergreen Search to Web Browsers = -:toc: - -== Adding OpenSearch to Firefox browser == - -OpenSearch is a collection of simple formats for the sharing of search results. -More information about OpenSearch can be found on their -http://www.opensearch.org[website]. - -The following example illustrates how to add an OpenSearch source to the list -of search sources in a Firefox browser: - -. Navigate to any catalog page in your Firefox browser and click on the top - right box's dropdown and select the option for *Add "Example Consortium OpenSearch"*. - The label will match the current scope. -+ -image::media/opensearch1.png[opensearch1] - -. At this point, it will add a new search option for the location the catalog - is currently using. In this example, that is CONS (searching the whole - consortium). -+ -image::media/opensearch2.png[opensearch2] - -. Enter search terms to begin a keyword search using this source. The next - image illustrates an example search for "mozart" using the sample bib - record set. -+ -image::media/opensearch3.png[opensearch3] - -. You can select which search source to use by clicking on the dropdown - picker. 
-+ -image::media/opensearch4.png[opensearch4] diff --git a/docs-antora/modules/opac/pages/search_form.adoc b/docs-antora/modules/opac/pages/search_form.adoc deleted file mode 100644 index 6cc3997241..0000000000 --- a/docs-antora/modules/opac/pages/search_form.adoc +++ /dev/null @@ -1,92 +0,0 @@ -= Adding an Evergreen search form to a web page = -:toc: - -== Introduction == - -To enable users to quickly search your Evergreen catalog, you can add a -simple search form to any HTML page. The following code demonstrates -how to create a quick search box suitable for the header of your web -site: - -== Simple search form == - -[source,html] ------------------------------------------------------------------------------- -
-<form action="https://example.com/eg/opac/results" method="get" accept-charset="UTF-8"> <!--1-->
-  <input type="text" name="query" value="" placeholder="Search catalog" />
-  <input type="hidden" name="qtype" value="keyword" /> <!--2-->
-  <input type="hidden" name="locg" value="4" /> <!--3-->
-  <input type="submit" value="Search" />
-</form>
------------------------------------------------------------------------------
-<1> Replace ''example.com'' with the hostname for your catalog. To link to
-  the Kid's OPAC instead of the TPAC, replace ''opac'' with ''kpac''.
-<2> Replace ''keyword'' with ''title'', ''author'', ''subject'', or ''series''
-  if you want to provide more specific searches. You can even specify
-  ''identifier|isbn'' for an ISBN search.
-<3> Replace ''4'' with the ID number of the organizational unit at which you
-  wish to anchor your search. This is the value of the ''locg'' parameter in
-  your normal search.
-
-== Advanced search form ==
-
-[source,html]
--------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------
-
-== Encoding ==
-
-For non-English characters it is vital to set the attribute `accept-charset="UTF-8"` in the form tag (as in the examples above). If the parameter is not set, records with non-English characters will not be retrieved.
-
-== Setting the document type ==
-
-You can set the document types to be searched using the attribute `option value=` in the form. For the value, use the MARC 21 code defining the type of record (i.e. https://www.loc.gov/marc/bibliographic/bdleader.html[Leader, position 06]).
-
-For example, for musical recordings you could use `<option value="j">Musical recording</option>`.
-
-== Setting the library ==
-
-Instead of searching the entire consortium, you can set the library to be searched using the attribute `option value=` in the form. For the value, use the Evergreen organizational unit ID from the database (actor.org_unit).
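The two settings above can be combined in one form. The fragment below is an illustrative sketch rather than text from the Evergreen documentation: the hostname, labels, and `locg` IDs (1 and 4) are placeholders to replace with your own values, while `a` and `j` are the MARC 21 Leader/06 codes for language material (books) and musical sound recordings, and `fi:item_type` is the filter parameter shown in the search URL structure documented elsewhere in these pages.

```html
<!-- Hypothetical example: a form limited by document type and library.
     Replace example.com and the locg IDs with your own values. -->
<form action="https://example.com/eg/opac/results" method="get"
      accept-charset="UTF-8">
  <input type="text" name="query" value="" placeholder="Search catalog" />
  <input type="hidden" name="qtype" value="keyword" />
  <!-- MARC Leader/06: "a" = language material, "j" = musical sound recording -->
  <select name="fi:item_type">
    <option value="">Any format</option>
    <option value="a">Books</option>
    <option value="j">Musical recordings</option>
  </select>
  <!-- Values are actor.org_unit IDs from your own database -->
  <select name="locg">
    <option value="1">Example Consortium</option>
    <option value="4">Example Branch</option>
  </select>
  <input type="submit" value="Search" />
</form>
```

Because the form submits with GET, the selected options simply become `fi:item_type` and `locg` parameters on the results URL.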
-
-
diff --git a/docs-antora/modules/opac/pages/search_url.adoc b/docs-antora/modules/opac/pages/search_url.adoc
deleted file mode 100644
index d6ea158d3c..0000000000
--- a/docs-antora/modules/opac/pages/search_url.adoc
+++ /dev/null
@@ -1,51 +0,0 @@
-== Search URL ==
-
-indexterm:[search, URL]
-
-When performing a search or clicking on the details links, Evergreen constructs
-a GET request URL with the parameters of the search. The URLs for searches and
-details in Evergreen are persistent links: they can be saved, shared, and
-used later.
-
-Here is a basic search URL structure:
-
-
-+++[hostname]+++/eg/opac/results?query=[search term]&**qtype**=keyword&fi%3Aitem_type=&**locg**=[location id]
-
-=== locg Parameter ===
-This is the ID of the search location. It is an integer and matches the ID of the
-location the user selected in the location drop down menu.
-
-=== qtype Parameter ===
-
-The _qtype_ parameter in the URL represents the search type and can be
-one of the following search or request types:
-
-* Keyword
-* Title
-* Journal Title
-* Author
-* Subject
-* Series
-* Bib Call Number
-
-These match the options in the search type drop-down box.
-
-=== Sorting ===
-
-The _sort_ parameter sorts the results by one of these criteria:
-
-* `sort=pubdate` (publication date) - chronological order
-* `sort=titlesort` - alphabetical order
-* `sort=authorsort` - alphabetical order on family name first
-
-To change the sort direction of the results, the _sort_ parameter value has the
-".descending" suffix added to it.
-
-* `sort=titlesort.descending`
-* `sort=authorsort.descending`
-* `sort=pubdate.descending`
-
-In the absence of the _sort_ parameter, the search results default to sorting by
-relevance.
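Putting the parameters above together, persistent links can be embedded directly in a web page. The hostname, search terms, and `locg` value below are placeholders for illustration, not part of the original documentation:

```html
<!-- Hypothetical persistent search links; replace example.com and locg=4
     with your own hostname and organizational unit ID. -->

<!-- Keyword search scoped to org unit 4, default relevance sort: -->
<a href="https://example.com/eg/opac/results?query=mozart&amp;qtype=keyword&amp;locg=4">
  Mozart (keyword search)</a>

<!-- Title search sorted by publication date, newest first: -->
<a href="https://example.com/eg/opac/results?query=requiem&amp;qtype=title&amp;locg=4&amp;sort=pubdate.descending">
  Requiem (title search, newest first)</a>
```

Because these are ordinary GET URLs, the same links work when pasted into email, bookmarks, or library web pages.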
-
diff --git a/docs-antora/modules/opac/pages/sitemap.adoc b/docs-antora/modules/opac/pages/sitemap.adoc
deleted file mode 100644
index d66d246b22..0000000000
--- a/docs-antora/modules/opac/pages/sitemap.adoc
+++ /dev/null
@@ -1,18 +0,0 @@
-= Sitemap generator =
-:toc:
-
-A http://www.sitemaps.org[sitemap] directs search engines to the pages of
-interest in a web site so that the search engines can intelligently crawl
-your site. In the case of Evergreen, the primary pages of interest are the
-bibliographic record detail pages.
-
-The sitemap generator script creates sitemaps that adhere to the
-http://sitemaps.org specification, including:
-
-* limiting the number of URLs per sitemap file to no more than 50,000 URLs;
-* providing the date that the bibliographic record was last edited, so
-  that once a search engine has crawled all of your site's record detail pages,
-  it only has to reindex those pages that are new or have changed since the last
-  crawl;
-* generating a sitemap index file that points to each of the sitemap files.
-
diff --git a/docs-antora/modules/opac/pages/tpac_meta_record_holds.adoc b/docs-antora/modules/opac/pages/tpac_meta_record_holds.adoc
deleted file mode 100644
index 59a548009a..0000000000
--- a/docs-antora/modules/opac/pages/tpac_meta_record_holds.adoc
+++ /dev/null
@@ -1,105 +0,0 @@
-= TPAC Metarecord Search and Metarecord Level Holds =
-:toc:
-
-Metarecords are compilations of individual bibliographic records that represent
-the same work. This compilation allows several records to be represented on
-a single line on the TPAC search results page, which can help to reduce
-duplication in results.
-
-
-*Advanced Search Page*
-
-Selecting the *Group Formats and Editions* checkbox on the Advanced Search page
-allows the user to perform a metarecord search.
- -image::media/advsrchpg_1.jpg[] - -[TIP] -Administrators can also configure the catalog to default to a *Group Formats and -Editions* search by enabling the relevant config.tt2 setting on -the server. Setting this option will pre-select the checkbox on the Advanced -Search and Search Result Pages. Users can remove the checkmark, but new searches -will revert to the default search behavior. - -*Search Results Page* - -Within the Search Results page, users can also refine their searches and filter -on metarecord search results by selecting the *Group Formats and Editions* -checkbox. - -image::media/srchresultpg_2.jpg[] - -The metarecord search results will display both the representative metarecord -bibliographic data and the combined metarecord holdings data (if the holdings -data is OPAC visible). - -The number of records represented by the metarecord is displayed in parentheses -next to the title. - -The formats contained within the metarecord are displayed under the title. - -image::media/srchresultpg2_3.jpg[] - -For the metarecord search result, the *Place Hold* link defaults to a metarecord -level hold. - -image::media/srchresultpg3_4.jpg[] - -To place a metarecord level hold: - -. Click the *Place Hold* link. -. Users who are not logged into their accounts will be directed to the *Log in -to Your Account* screen, where they will enter their username and password. -Users who are already logged into their accounts will be directed to the *Place -Hold* screen. -. Within the *Place Hold* screen, users can select from the multiple formats and/or -languages that are available. -. Continue to enter any additional hold information (such as Pickup Location), if needed. -. Click *Submit*. - -image::media/placehold_5.jpg[] - -Selecting multiple formats will not place all of these formats on hold for the -user. For example, a user cannot select CD Audiobook and Book and expect to -place both the CD and book on hold at the same time. 
Instead, the user is -implying that either the CD format or the book format is the acceptable format -to fill the hold. If no format is selected, then any of the available formats -may be used to fill the hold. The same holds true for selecting multiple -languages. - -*Advanced Hold Options* - -When users place a hold on an individual bibliographic record they will see an -*Advanced Hold Options* link within the Place Hold screen. Clicking the -*Advanced Hold Options* link will take the users into the metarecord level hold -feature, enabling them to select multiple formats and/or languages. - -image::media/advholdoption_6.jpg[] - -*Metarecord Constituent Records Page* - -The TPAC now includes a Metarecord Constituent Records page, which displays a -listing of the individual bibliographic records grouped within the metarecord. -Access the Metarecord Constituent Records page by clicking on the metarecord -title on the Search Results page. - -image::media/srchresultpg4_7.jpg[] - -This will allow the user to view the results for grouped records. - -image::media/recorddetailpg_8.jpg[] - -*Show Holds on Bib* - -Within the staff client, *Show Holds on Bib* for a metarecord level hold will -take the staff member into the Metarecord Constituent Records page. - -*Global Flag: OPAC Metarecord Hold Formats Attribute* - -To utilize the metarecord level hold feature, the Global Flag: OPAC Metarecord -Hold Formats Attribute must be enabled and its value set at mr_hold_format, -which is the system's default configuration. 
- -image::media/mrholdgf_9.jpg[] - - diff --git a/docs-antora/modules/opac/pages/using_the_public_access_catalog.adoc b/docs-antora/modules/opac/pages/using_the_public_access_catalog.adoc deleted file mode 100644 index 800a8d3ef0..0000000000 --- a/docs-antora/modules/opac/pages/using_the_public_access_catalog.adoc +++ /dev/null @@ -1,566 +0,0 @@ -= Using the Public Access Catalog = -:toc: - -== Basic Search == - -indexterm:[OPAC] - -From the OPAC home, you can conduct a basic search of all materials owned by all -libraries in your Evergreen system. - -This search can be as simple as typing keywords into the search box and clicking -the _Search_ button. Or you can make your search more precise by limiting it -by search field, material type, or library location. - -indexterm:[search box] - -The _Homepage_ contains a single search box for you to enter search terms. You -can get to the _Homepage_ at any time by clicking the _Another Search_ link, -the leftmost link on the bar above your search results in the catalogue, or you -can enter a search anywhere you see a search box. - -You can select to search by: - -indexterm:[search, keyword] -indexterm:[search, title] -indexterm:[search, journal title] -indexterm:[search, author] -indexterm:[search, subject] -indexterm:[search, series] -indexterm:[search, bib call number] - -* *Keyword*: finds the terms you enter anywhere in the entire record for an -item, including title, author, subject, and other information. - -* *Title*: finds the terms you enter in the title of an item. - -* *Journal Title*: finds the terms you enter in the title of a serial bib -record. - -* *Author*: finds the terms you enter in the author of an item. - -* *Subject*: finds the terms you enter in the subject of an item. Subjects are -categories assigned to items according to a system such as the Library of -Congress Subject Headings. - -* *Series*: finds the terms you enter in the title of a multi-part series. 
- -[TIP] -============= -To search an item copy call number, use <> -============= - -=== Formats === - -You can limit your search by formats based on MARC fixed field type: - -indexterm:[formats, books] -indexterm:[formats, audiobooks] -indexterm:[formats, video] -indexterm:[formats, music] - - -* *All Books* -* *All Music* -* *Audiocassette music recording* -* *Blu-ray* -* *Braille* -* *Cassette audiobook* -* *CD Audiobook* -* *CD Music recording* -* *DVD* -* *E-audio* -* *E-book* -* *E-video* -* *Equipment, games, toys* -* *Kit* -* *Large Print Book* -* *Map* -* *Microform* -* *Music Score* -* *Phonograph music recording* -* *Phonograph spoken recording* -* *Picture* -* *Serials and magazines* -* *Software and video games* -* *VHS* - - -==== Libraries ==== - -If you are using a catalogue in a library or accessing a library’s online -catalogue from its homepage, the search will return items for your local -library. If your library has multiple branches, the result will display items -available at your branch and all branches of your library system separately. - - -== Advanced Search == - -Advanced searches allow users to perform more complex searches by providing more -options. Many kinds of searches can be performed from the _Advanced Search_ -screen. You can access it by clicking _Advanced Search_ on the catalogue _Homepage_ -or search results screen. - -The available search options are the same as on the basic search, but you may -use one or many of them simultaneously. If you want to combine more than three -search options, use the _Add Search Row_ button to add more search input rows. -Clicking the _X_ button will close the search input row. - - -=== Sort Results === - -indexterm:[advanced search, sort results] - -By default, the search results are in order of greatest to least relevance, see - <>. In the sort results menu you may select - to order the search results by relevance, title, author, or publication date. 
- - -=== Search Library === - -indexterm:[advanced search, search library] - -The current search library is displayed under the _Search Library_ drop-down menu. -By default, it is your library. The search returns results for your local library -only. If your library system has multiple branches, use the _Search Library_ box -to select different branches or the whole library system. - - -=== Limit to Available === - -indexterm:[advanced search, limit to available] - - -This checkbox is below the _Search Library_ box. Select _Limit to -Available_ to limit results to those titles that have items with a circulation -status of "available" (by default, either _Available_ or _Reshelving_). - -=== Exclude Electronic Resources === - -indexterm:[advanced search, exclude electronic resources] - -This checkbox is below _Limit to Available_. Select _Exclude Electronic -Resources_ to limit results to those bibliographic records that do not have an -"o" or "s" in the _Item Form_ fixed field (electronic forms); this limiter overrides other -form limiters. - -This feature is optional and will not appear for patrons or staff until enabled. - -[TIP] -=============== -To display the *Exclude Electronic Resources* checkbox in the advanced search -page and search results, set -the 'ctx.exclude_electronic_checkbox' setting in config.tt2 to 1. -=============== - - -=== Search Filter === - -indexterm:[advanced search, search filters] - -You can filter your search by _Item Type_, _Item Form_, _Language_, _Audience_, -_Video Format_, _Bib Level_, _Literary Form_, _Search Library_, and _Publication -Year_. Publication year is inclusive. For example, if you set _Publication Year_ -Between 2005 and 2007, your results can include items published in 2005, 2006 -and 2007. - -For each filter type, you may select multiple criteria by holding down the - _CTRL_ key as you click on the options. If nothing is selected for a filter, -the search will return results as though all options are selected. 
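The filter behavior described above can be summarized in a short sketch. This is only an illustration of the semantics, not Evergreen's implementation: an unset filter matches everything, multiple selections within one filter are alternatives, and the publication-year range includes both endpoints.

```python
def matches(record, item_types=None, year_between=None):
    """Apply advanced-search-style filters to a record.

    An empty or unset filter behaves as though every option were
    selected; the publication-year range is inclusive at both ends.
    """
    if item_types and record["item_type"] not in item_types:
        return False
    if year_between:
        low, high = year_between
        if not (low <= record["pub_year"] <= high):
            return False
    return True

record = {"item_type": "book", "pub_year": 2007}
matches(record)                               # no filters: matches
matches(record, year_between=(2005, 2007))    # 2007 is included in the range
matches(record, item_types={"dvd", "cd"})     # filtered out by item type
```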
- -==== Search Filter Enhancements ==== - -Enhancements to the Search Filters make it easier to view, remove, and modify search filters while viewing search results in the Evergreen OPAC. Filters that are selected while conducting an advanced search in the Evergreen OPAC now appear below the search box in the search results interface. - -For example, the screenshot below shows a Keyword search for "violin concerto" while filtering on Item Type: Musical sound recording and Shelving Location: Music. - -image::media/searchfilters1.PNG[search using search filters] - -In the search results, the Item Type and Shelving Location filters appear directly below the search box. - -image::media/searchfilters2.PNG[search results with search filter enhancements] - -Each filter can be removed by clicking the X next to the filter name, modifying the search from within the search results screen. Below the search box on the search results screen, there is also a link to _Refine My Original Search_, which will bring the user back to the advanced search screen where the originally selected search parameters can be viewed and modified. - - -[#numeric_search] -indexterm:[advanced search, numeric search] - -=== Numeric Search === - -If you have details on the exact item you wish to search for, use the _Numeric -Search_ tab on the advanced search page. Use the drop-down menu to select your -search by _ISBN_, _ISSN_, _Bib Call Number_, _Call Number (Shelf Browse)_, -_LCCN_, _TCN_, or _Item Barcode_. Enter the information and then click the -_Search_ button. - -=== Expert Search === - -indexterm:[advanced search, expert search] - -If you are familiar with MARC cataloging, you may search by MARC tag in the -_Expert Search_ option on the left of the screen. Enter the three-digit tag -number, the subfield if relevant, and the value or text that corresponds to the -tag. For example, to search by publisher name, enter `260 b Random House`. 
To -search several tags simultaneously, use the _Add Row_ option. Click _Submit_ to -run the search. - -[TIP] -============= -Use the MARC Expert Search only as a last resort, as it can take much longer to -retrieve results than by using indexed fields. For example, rather than running -an expert search for "245 a Gone with the wind", simply do a regular title -search for "Gone with the wind". -============= - -== Boolean operators == - -indexterm:[search, AND operator] -indexterm:[search, OR operator] -indexterm:[search, NOT operator] -indexterm:[search, boolean] - -Classic search interfaces (that is, those used primarily by librarians) forced -users to learn the art of crafting search phrases with Boolean operators. To a -large extent this was due to the inability of those systems to provide relevancy -ranking beyond a "last in, first out" approach. Thankfully, Evergreen, like most -modern search systems, supports a rather sophisticated relevancy ranking system -that removes the need for Boolean operators in most cases. - -By default, all terms that have been entered in a search query are joined with -an implicit `AND` operator. Those terms are required to appear in the designated - fields to produce a matching record: a search for _golden compass_ will search -for entries that contain both _golden_ *and* _compass_. - -Words that are often considered Boolean operators, such as _AND_, _OR_, and -_NOT_, are not special in Evergreen: they are treated as just another search -term. For example, a title search for `golden and compass` will not return the -title _Golden Compass_. 
- -However, Evergreen does support Boolean searching for those rare cases where you -might require it, using symbolic operators as follows: - -.Boolean symbolic operators -[width="50%",options="header"] -|================================= -| Operator | Symbol | Example -| AND | `&&` | `a && b` -| OR | `\|\|` | `a \|\| b` -| NOT | `-`_term_ | `a -b` -|================================= - -== Search Tips == - -indexterm:[search, stop words] -indexterm:[search, truncation] - -Evergreen tries to approach search from the perspective of a major search -engine: the user should simply be able to enter the terms they are looking for -as a general keyword search, and Evergreen should return results that are most -relevant given those terms. For example, you do not need to enter an author's last -name first, nor do you need to enter an exact title or subject heading. -Evergreen is also forgiving about plurals and alternate verb endings, so if you -enter _dogs_, Evergreen will also find items with _dog_. - -The search engine has no _stop words_ (terms that are ignored by the search engine): -a title search for `to be or not to be` (in any order) yields a list of titles -with those words. - -* Don’t worry about white space, exact punctuation, or capitalization. - -. White spaces before or after a word are ignored. So, a search for `[ golden -compass ]` gives the same results as a search for `[golden compass]`. - -. A double dash or a colon between words is reduced to a blank space. So, a -title search for _golden:compass_ or _golden -- compass_ is equivalent to -_golden compass_. - -. Punctuation marks occurring within a word are removed; the exception is \_. -So, a title search for _gol_den com_pass_ gives no result. - -. Diacritical marks and solitary `&` or `|` characters located anywhere in the -search term are removed. Words or letters linked together by `.` (dot) are -joined together without the dot. So, a search for _go|l|den & comp.ass_ is -equivalent to _golden compass_. - -. 
Upper and lower case letters are equivalent. So, _Golden Compass_ is the same -as _golden compass_. - -* Enter your search words in any order. So, a search for _compass golden_ gives -the same results as a search for _golden compass_. Adding more search words -gives fewer but more specific results. - -** This is also true for author searches. Both _David Suzuki_ and _Suzuki, -David_ will return results for the same author. - -* Use specific search terms. Evergreen will search for the words you specify, -not the meanings, so choose search terms that are likely to appear in an item -description. For example, the search _luxury hotels_ will produce more -relevant results than _nice places to stay_. - -* Search for an exact phrase using double-quotes. For example ``golden compass''. - -** The order of words is important for an exact phrase search. _golden compass_ -is different than _compass golden_. - -** White space, punctuation and capitalization are removed from exact phrases as - described above. So a phrase retains its search terms and its relative order, -but not special characters and not case. - -** Two phrases are joined by and, so a search for _"golden compass"_ _"dark -materials"_ is equivalent to _golden compass_ *and* _dark materials_. - - -* **Truncation** -Words may be right-hand truncated using an asterisk. Use a single asterisk * to -truncate any number of characters. -(example: _environment* agency_) - - -== Search Methodology == - -[#stemming] - -=== Stemming === - -indexterm:[search, stemming] - -A search for _dogs_ will also return hits with the word dog and a search for -parenting will return results with the words parent and parental. This is -because the search uses stemming to help return the most relevant results. That -is, words are reduced to their stem (or root word) before the search is -performed. - -The stemming algorithm relies on common English language patterns - like verbs -ending in _ing_ - to find the stems. 
This is more efficient than looking up each -search term in a dictionary and usually produces desirable results. However, it -also means the search will sometimes reduce a word to an incorrect stem and -cause unexpected results. To prevent a word or phrase from stemming, put it in -double-quotes to force an exact search. For example, a search for `parenting` -will also return results for `parental`, but a search for `"parenting"` will -not. - -Understanding how stemming works can help you to create more relevant searches, -but it is usually best not to anticipate how a search term will be stemmed. For -example, searching for `gold compass` does not return the same results as -`golden compass`, because `-en` is not a regular suffix in English, and -therefore the stemming algorithm does not recognize _gold_ as a stem of -_golden_. - - -[#order_of_results] - -=== Order of Results === - -indexterm:[search, order of results] - -By default, the results are listed in order of relevance, similar to a search -engine like Google. The relevance is determined using a number of factors, -including how often and where the search terms appear in the item description, -and whether the search terms are part of the title, subject, author, or series. -The results which best match your search are returned first rather than results -appearing in alphabetical or chronological order. - -In the _Advanced Search_ screen, you may select to order the search results by -relevance, title, author, or publication date before you start the search. You -can also re-order your search results using the _Sort Results_ dropdown list on -the search result screen. - - -== Search Results == - -indexterm:[search results] - -The search results are a list of relevant works from the catalogue. If there are -many results, they are divided into several pages. 
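The behavior above can be illustrated with a toy suffix-stripping stemmer. This is not Evergreen's actual stemming algorithm (which comes from its database's English stemming dictionary); it is only a sketch of why regular-suffix stripping matches _dogs_ with _dog_ but never reduces _golden_ to _gold_:

```python
def toy_stem(word):
    """Strip a few regular English suffixes, mimicking how a stemmer
    reduces words to a common root before matching."""
    word = word.lower()
    for suffix in ("ing", "al", "s"):
        # Only strip when a plausible root (3+ letters) remains.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[:-len(suffix)]
    return word

toy_stem("dogs")       # -> "dog"
toy_stem("parenting")  # -> "parent"
toy_stem("parental")   # -> "parent"
toy_stem("golden")     # -> "golden"  ("-en" is not a regular suffix)
```

Because _parenting_ and _parental_ reduce to the same root, they match each other, while _golden_ is left untouched and so never matches _gold_.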
At the top of the list, you -can see the total number of results and go back and forth between the pages -by clicking the links that say _Previous_ or _Next_ on top or bottom of the -list. You can also click on the adjacent results page number listed. These page -number links allow you to skip to that results page, if your search results -needed multiple pages to display. Here is an example: - - -image::media/catalogue-3.png[catalogue-3] - -Brief information about the title, such as author, edition, publication date, -etc. is displayed under each title. The icons beside the brief information -indicate formats such as books, audio books, video recordings, and other -formats. If you hover your mouse over the icon, a text explanation will show up -in a small pop-up box. - -Clicking a title goes to the title details. Clicking an author searches all -works by the author. If you want to place a hold on the title, click _Place -Hold_ beside the format icons. - -On the top right, there is a _Limit to Available_ checkbox. Checking this box -will filter out those titles with no available copies in the library or -libraries at the moment. Usually you will see your search results are -re-displayed with fewer titles. - -When enabled, under the _Limit to Available_ checkbox, there is an _Exclude -Electronic Resources_ checkbox. Checking this box will filter out materials -that are cataloged as electronic in form. - -The _Sort by_ dropdown list is found at the top of the search results, beside -the _Show More Details_ link. Clicking an entry on the list will re-sort your -search results accordingly. - - -=== Facets: Subjects, Authors, and Series === - -indexterm:[search results, facets: subjects, authors, and series] - -At the left, you may see a list of _Facets of Subjects_, _Authors_, and -_Series_. Selecting any one of these links filters your current search results -using that subject, author, or series to narrow down your current results. 
The -facet filters can be undone by clicking the link a second time, thus returning -your original results before the facet was activated. - -image::media/catalogue-5.png[catalogue-5] - - -=== Availability === - -indexterm:[search results, availability] - -The number of available copies and total copies are displayed under each search -result's call number. If you are using a catalogue inside a library or accessing -a library’s online catalogue from its homepage, you will see how many copies are -available in the library under each title, too. If the library belongs to a -multi-branch library system you will see an extra row under each title showing -how many copies are available in all branches. - - -image::media/catalogue-6.png[catalogue-6] - -image::media/catalogue-7.png[catalogue-7] - -You may also click the _Show More Details_ link at the top of the results page, -next to the _Limit to available items_ check box, to view each search result's -copies' individual call number, status, and shelving location. - - -=== Viewing a record === - -indexterm:[search results, viewing a record] - -Click on a search result's title to view a detailed record of the title, -including descriptive information, location and availability, current holds, and -options for placing holds, add to my list, and print/email. - -image::media/catalogue-8.png[catalogue-8] -image::media/catalogue-8a.png[catalogue-8a] - -== Details == - -indexterm:[search results, details] - -The record shows details such as the cover image, title, author, publication -information, and an abstract or summary, if available. - -Near the top of the record, users can easily see the number of copies that -are currently available in the system and how many current holds are on the -title. - -If there are other formats and editions of the same work in the -database, links to those alternate formats will display. The formats used -in this section are based on the configurable catalog icon formats. 
- - -image::media/other-formats-and-editions.png[other-formats-and-editions] - -The Record Details view shows how many copies are at the library or libraries -you have selected, and whether they are available or checked out. It also -displays the Call number and Copy Location for locating the item on the shelves. -Clicking on Text beside the call number will allow you to send the item's call -number by text message, if desired. Clicking the location library link will -reveal information about the owning library, such as address and open hours. - -Below the local details you can open up various tabs to display more -information. You can select Reviews and More to see the book’s summaries and -reviews, if available. You can select Shelf Browser to view items appearing near -the current item on the library shelves. Often this is a good way to browse for -similar items. You can select MARC Record to display the record in MARC format. -If your library offers the service, clicking on Awards, Reviews, and Suggested -Reads will reveal that additional information. - -[NOTE] -========== -Copies are sorted by (in order): org unit, call number, part label, copy number, -and barcode. -========== - - - -=== Placing Holds === - -indexterm:[search results, placing holds] - -Holds can be placed from either the title details page or the search results page. If the item -is available, it will be pulled from the shelf and held for you. If all copies -at your local library are checked out, you will be placed on a waiting list and -you will be notified when items become available. - -On the title details page, you can select the _Place Hold_ link in the upper right -corner of the record to reserve the item. You will need your library account -user name and password. You may choose to be notified by phone or email. - -In the example below, the phone number in your account will automatically show -up. 
Once you select the Enable phone notifications for this hold checkbox, you -can supply a different phone number for this hold only. The notification method -will be selected automatically if you have set it up in your account preferences, -but you can still change it on this screen. You may also suspend -the hold temporarily by checking the Suspend box. Click the _Help_ beside it for -details. - -You can view and cancel a hold at any time. Before your hold is captured, which -means an item has been held waiting for you to pick up, you can edit, suspend or - activate it. You need to log in to your patron account to do this. -From your account you can also set up a _Cancel if not filled by_ date for your -hold. This means that after this date, you no longer need the item, even if your -hold has not been fulfilled. - - -image::media/catalogue-9.png[catalogue-9] - -=== Permalink === - -The record summary page offers a link to a shorter permalink that - can be used for sharing the record with others. All URL parameters are stripped - from the link with the exception of the locg and copy_depth parameters. Those - parameters are maintained so that people can share a link that displays just - the holdings from one library/system or displays holdings from all libraries - with a specific library's holdings floating to the top. - -image::media/using-opac-view-permalink.png[Permalink] - - -=== SMS Call Number === - -If configured by the library system administrator, you may send yourself the -call number via SMS message by clicking on the *Text* link, which appears beside -the call number. - -image::media/textcn1.png[] - -[WARNING] -========== -Carrier charges may apply when using the SMS call number feature. 
-========== - - -=== Going back === - -indexterm:[search results, going back] - -When you are viewing a specific record, you can always go back to your title -list by clicking the link _Search Results_ on the top right or left bottom of -the page. - -image::media/catalogue-10.png[catalogue-10] - -You can start a new search at any time by entering new search terms in the -search box at the top of the page, or by selecting the _Another Search_ or -_Advanced Search_ links in the left-hand sidebar. - diff --git a/docs-antora/modules/opac/pages/visibility_on_the_web.adoc b/docs-antora/modules/opac/pages/visibility_on_the_web.adoc deleted file mode 100644 index d1fcb6183f..0000000000 --- a/docs-antora/modules/opac/pages/visibility_on_the_web.adoc +++ /dev/null @@ -1,117 +0,0 @@ -= Library visibility on the Web = -:toc: - -== Introduction == - -Evergreen follows a number of best practices to -make Library data integrate with the rest of the -Web. Evergreen's public catalog pages are -designed so that search engines can easily extract -meaningful information about your library and -collections. Evergreen is also preparing for an -eventual shift toward linked open bibliographic -data. - -== Catalog data in search engines == - -Each record in the catalog is displayed to search -engines using http://schema.org[schema.org] microdata. - -[IMPORTANT] -Make sure your system administrator has not added -a restrictive robots.txt file to your server. -These files restrict search engines, up to the -point of not allowing search engines to index your -site at all. - -=== Details of the schema.org mapping === - - * Each item is listed as a - http://schema.org/Offer[schema:Offer], which is - the same category that an online bookseller might - use to describe an item for sale. These Offers - are always listed with a price of $0.00. - * Subject headings are exposed as - http://schema.org/about[schema:about] - properties. 
- * Electronic resources are assigned a - http://schema.org/url[schema:url] - property, and any notes or link text - are assigned a - http://schema.org/description[schema:description] - property. - * Given a Library of Congress relator code for - 1xx and 7xx fields, Evergreen surfaces the URL - for that relator code along with the - http://schema.org/contributor[schema:contributor] - property to give machines a better chance - of understanding how the person or organization - actually contributed to this work. - * Linking out to related records: - ** Given an LCCN (010 field), Evergreen links to - the corresponding Library of Congress record - using http://schema.org/sameAs[schema:sameAs]. - ** Given an OCLC number (035 field, subfield `a` - beginning with `(OCoLC)`), Evergreen links to - the corresponding WorldCat record using - http://schema.org/sameAs[schema:sameAs]. - ** Given a URI (024 field, subfield 2 = `'uri'`), - Evergreen links to the corresponding OCLC - Work Entity record using - http://schema.org/exampleOfWork[schema:exampleOfWork]. - - -=== Viewing microdata === -You can learn more about how Evergreen publicizes -these data by viewing them directly. The -http://linter.structured-data.org[structured data linter] -is a helpful tool for viewing microdata. - -. Using your favorite Web browser, navigate to a - record in your public catalog. -. Copy the URL that displays in your browser's - address bar. -. Go to http://linter.structured-data.org -. Under the _Lint by URL_ tab, paste your URL - into the text box. -. Click _Submit_ - -=== Other helpful features for search engines === - * Titles of catalog pages follow a - "Page title - Library name" pattern to provide - specific titles in search engine results pages, - browser bookmarks, and browser tabs. - * Links that robots should not crawl, such as search - result links, are marked with the - https://support.google.com/webmasters/answer/96569?hl=en[@rel="nofollow"] - property. 
- * Catalog pages for record details and for library - descriptions express a - https://support.google.com/webmasters/answer/139066?hl=en[@rel="canonical"] - link to simplify the number of variations of page - URLs that could otherwise have been derived from - different search parameters. - * Catalog pages that do not exist return a proper - 404 "HTTP_NOT_FOUND" HTTP status code, and record - detail pages for records that have been deleted - now return a proper 410 "HTTP_GONE" HTTP status code. - * Record detail and library pages include - http://ogp.me/[Open Graph Protocol] markup. - * Each library has its own page at - _http://localhost/eg/opac/library/LIBRARY_SHORTNAME_ - that provides machine-readable hours and contact - information. - -== SKOS support == - -Some vocabularies used (or which could be used) for -stock record attributes and coded value maps in Evergreen -are published on the web using SKOS. The record -attributes system can now associate Linked Data URIs -with specific attribute values. In particular, seed data -supplying URIs for the RDA Content Type, Media Type, and -Carrier Type has been added. - -This is an experimental, "under-the-hood" feature that -will be built upon in subsequent releases. 
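As an alternative to the linter mentioned above, the published microdata can be inspected directly with a short script. The sketch below uses Python's standard-library HTML parser on a simplified, hypothetical fragment of a catalog page; a real Evergreen record page carries many more properties than shown here:

```python
from html.parser import HTMLParser

class MicrodataScanner(HTMLParser):
    """Collect schema.org itemtype values from a page's microdata."""
    def __init__(self):
        super().__init__()
        self.itemtypes = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # An element opens a microdata item when it carries both
        # the itemscope flag and an itemtype URL.
        if "itemscope" in attrs and "itemtype" in attrs:
            self.itemtypes.append(attrs["itemtype"])

page = """
<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">The Golden Compass</span>
  <div itemscope itemtype="http://schema.org/Offer">
    <span itemprop="price">0.00</span>
  </div>
</div>
"""
scanner = MicrodataScanner()
scanner.feed(page)
# scanner.itemtypes now lists the schema.org types found on the page
```

Running this against a saved record detail page is a quick way to confirm that the mapping described above (items as schema:Offer, and so on) is present in your catalog's markup.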
- diff --git a/docs-antora/modules/reports/_attributes.adoc b/docs-antora/modules/reports/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/reports/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports1.jpg b/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports1.jpg deleted file mode 100644 index 9436acc161..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports1.jpg and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2.jpg b/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2.jpg deleted file mode 100644 index 320f5310af..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2.jpg and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2a.jpg b/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2a.jpg deleted file mode 100644 index 79faac392c..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2a.jpg and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports3.jpg b/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports3.jpg deleted file mode 100644 index aa5fa81865..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports3.jpg and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports4.jpg b/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports4.jpg deleted file mode 
100644 index 89b8125481..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports4.jpg and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports5.jpg b/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports5.jpg deleted file mode 100644 index 567bb86c89..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports5.jpg and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports6.jpg b/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports6.jpg deleted file mode 100644 index 27dfe78859..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/2_7_Enhancements_to_Reports6.jpg and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-1.png b/docs-antora/modules/reports/assets/images/media/create-template-1.png deleted file mode 100644 index 0358768eb8..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-1.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-10.png b/docs-antora/modules/reports/assets/images/media/create-template-10.png deleted file mode 100644 index 12deb5ce9b..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-10.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-11.png b/docs-antora/modules/reports/assets/images/media/create-template-11.png deleted file mode 100644 index 003b05bc8d..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-11.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-12.png b/docs-antora/modules/reports/assets/images/media/create-template-12.png deleted file mode 100644 index 
fe4d999663..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-12.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-13.png b/docs-antora/modules/reports/assets/images/media/create-template-13.png deleted file mode 100644 index 0831126d09..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-13.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-15.png b/docs-antora/modules/reports/assets/images/media/create-template-15.png deleted file mode 100644 index 19734c337a..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-15.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-16.png b/docs-antora/modules/reports/assets/images/media/create-template-16.png deleted file mode 100644 index 71665a0ffb..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-16.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-17.png b/docs-antora/modules/reports/assets/images/media/create-template-17.png deleted file mode 100644 index 0a6308483d..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-17.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-19.png b/docs-antora/modules/reports/assets/images/media/create-template-19.png deleted file mode 100644 index a62b2825f8..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-19.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-2.png b/docs-antora/modules/reports/assets/images/media/create-template-2.png deleted file mode 100644 index 20466a6723..0000000000 Binary files 
a/docs-antora/modules/reports/assets/images/media/create-template-2.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-20.png b/docs-antora/modules/reports/assets/images/media/create-template-20.png deleted file mode 100644 index d4beb2bd28..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-20.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-21.png b/docs-antora/modules/reports/assets/images/media/create-template-21.png deleted file mode 100644 index e2cb2f9ade..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-21.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-22.png b/docs-antora/modules/reports/assets/images/media/create-template-22.png deleted file mode 100644 index b7f8532bf7..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-22.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-23.png b/docs-antora/modules/reports/assets/images/media/create-template-23.png deleted file mode 100644 index 03de846b1a..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-23.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-24.png b/docs-antora/modules/reports/assets/images/media/create-template-24.png deleted file mode 100644 index ef381f6934..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-24.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-25.png b/docs-antora/modules/reports/assets/images/media/create-template-25.png deleted file mode 100644 index 88d2a17a59..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-25.png and /dev/null 
differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-26.png b/docs-antora/modules/reports/assets/images/media/create-template-26.png deleted file mode 100644 index b6816c88e2..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-26.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-27.png b/docs-antora/modules/reports/assets/images/media/create-template-27.png deleted file mode 100644 index ac60c901a3..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-27.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-28.png b/docs-antora/modules/reports/assets/images/media/create-template-28.png deleted file mode 100644 index 69d6cf1c26..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-28.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-29.png b/docs-antora/modules/reports/assets/images/media/create-template-29.png deleted file mode 100644 index 1dcb26094f..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-29.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-3.png b/docs-antora/modules/reports/assets/images/media/create-template-3.png deleted file mode 100644 index d2bf614be4..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-3.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-30.png b/docs-antora/modules/reports/assets/images/media/create-template-30.png deleted file mode 100644 index 9421cb5f78..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-30.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-31.png 
b/docs-antora/modules/reports/assets/images/media/create-template-31.png deleted file mode 100644 index 3a07d05822..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-31.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-32.png b/docs-antora/modules/reports/assets/images/media/create-template-32.png deleted file mode 100644 index 3150321434..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-32.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-4.png b/docs-antora/modules/reports/assets/images/media/create-template-4.png deleted file mode 100644 index b6d7201afc..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-4.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-5.png b/docs-antora/modules/reports/assets/images/media/create-template-5.png deleted file mode 100644 index d24ad3c233..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-5.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-6.png b/docs-antora/modules/reports/assets/images/media/create-template-6.png deleted file mode 100644 index 47fd843b46..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-6.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-7.png b/docs-antora/modules/reports/assets/images/media/create-template-7.png deleted file mode 100644 index 8803035b01..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-7.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-8.png b/docs-antora/modules/reports/assets/images/media/create-template-8.png deleted file mode 100644 index 
8c46199336..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-8.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/create-template-9.png b/docs-antora/modules/reports/assets/images/media/create-template-9.png deleted file mode 100644 index 49fc2ef426..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/create-template-9.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/datatypes_bool.png b/docs-antora/modules/reports/assets/images/media/datatypes_bool.png deleted file mode 100644 index c00b467ebe..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_bool.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/datatypes_id.png b/docs-antora/modules/reports/assets/images/media/datatypes_id.png deleted file mode 100644 index df178e0a7f..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_id.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/datatypes_int.png b/docs-antora/modules/reports/assets/images/media/datatypes_int.png deleted file mode 100644 index 3182ce0a37..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_int.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/datatypes_interval.png b/docs-antora/modules/reports/assets/images/media/datatypes_interval.png deleted file mode 100644 index 3c907fa274..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_interval.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/datatypes_link.png b/docs-antora/modules/reports/assets/images/media/datatypes_link.png deleted file mode 100644 index 559d756ca5..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_link.png and /dev/null differ diff --git 
a/docs-antora/modules/reports/assets/images/media/datatypes_money.png b/docs-antora/modules/reports/assets/images/media/datatypes_money.png deleted file mode 100644 index 34d5f36cad..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_money.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/datatypes_orgunit.png b/docs-antora/modules/reports/assets/images/media/datatypes_orgunit.png deleted file mode 100644 index bb11f53b96..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_orgunit.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/datatypes_text.png b/docs-antora/modules/reports/assets/images/media/datatypes_text.png deleted file mode 100644 index e87683d6a0..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_text.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/datatypes_timestamp.png b/docs-antora/modules/reports/assets/images/media/datatypes_timestamp.png deleted file mode 100644 index e2bb18c4a7..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/datatypes_timestamp.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/folder-1.png b/docs-antora/modules/reports/assets/images/media/folder-1.png deleted file mode 100644 index 0e24910efb..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/folder-1.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/generate-report-1.png b/docs-antora/modules/reports/assets/images/media/generate-report-1.png deleted file mode 100644 index a208d89e9e..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/generate-report-1.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/generate-report-10.png b/docs-antora/modules/reports/assets/images/media/generate-report-10.png 
deleted file mode 100644 index 9980b92096..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/generate-report-10.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/generate-report-14.png b/docs-antora/modules/reports/assets/images/media/generate-report-14.png deleted file mode 100644 index e6846b560a..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/generate-report-14.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/generate-report-2.png b/docs-antora/modules/reports/assets/images/media/generate-report-2.png deleted file mode 100644 index 8ba8a9773d..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/generate-report-2.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/generate-report-3.png b/docs-antora/modules/reports/assets/images/media/generate-report-3.png deleted file mode 100644 index e5cdfdb3ae..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/generate-report-3.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/generate-report-8.png b/docs-antora/modules/reports/assets/images/media/generate-report-8.png deleted file mode 100644 index 72a700271c..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/generate-report-8.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/view-output-1.png b/docs-antora/modules/reports/assets/images/media/view-output-1.png deleted file mode 100644 index 7fa0aec3a2..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/view-output-1.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/view-output-2.png b/docs-antora/modules/reports/assets/images/media/view-output-2.png deleted file mode 100644 index b536d07234..0000000000 Binary files 
a/docs-antora/modules/reports/assets/images/media/view-output-2.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/view-output-4.png b/docs-antora/modules/reports/assets/images/media/view-output-4.png deleted file mode 100644 index 54e364c3c9..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/view-output-4.png and /dev/null differ diff --git a/docs-antora/modules/reports/assets/images/media/view-output-5.png b/docs-antora/modules/reports/assets/images/media/view-output-5.png deleted file mode 100644 index c4d9f61308..0000000000 Binary files a/docs-antora/modules/reports/assets/images/media/view-output-5.png and /dev/null differ diff --git a/docs-antora/modules/reports/nav.adoc b/docs-antora/modules/reports/nav.adoc deleted file mode 100644 index 2c48e8e1f2..0000000000 --- a/docs-antora/modules/reports/nav.adoc +++ /dev/null @@ -1,13 +0,0 @@ -* xref:reports:introduction.adoc[Reports] -** xref:reports:reporter_daemon.adoc[Starting and Stopping the Reporter Daemon] -** xref:reports:reporter_folder.adoc[Folders] -** xref:reports:reporter_create_templates.adoc[Creating Templates] -** xref:reports:reporter_generating_reports.adoc[Generating Reports from Templates] -** xref:reports:reporter_view_output.adoc[Viewing Report Output] -** xref:reports:reporter_cloning_shared_templates.adoc[Cloning Shared Templates] -** xref:reports:reporter_add_data_source.adoc[Adding Data Sources to Reporter] -** xref:reports:reporter_running_recurring_reports.adoc[Running Recurring Reports] -** xref:reports:reporter_template_terminology.adoc[Template Terminology] -** xref:reports:reporter_template_enhancements.adoc[Template Enhancements] -** xref:reports:reporter_export_usingpgAdmin.adoc[Exporting Report Templates Using phpPgAdmin] - diff --git a/docs-antora/modules/reports/pages/README b/docs-antora/modules/reports/pages/README deleted file mode 100644 index e69de29bb2..0000000000 diff --git 
a/docs-antora/modules/reports/pages/_attributes.adoc b/docs-antora/modules/reports/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/reports/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/reports/pages/introduction.adoc b/docs-antora/modules/reports/pages/introduction.adoc deleted file mode 100644 index 81389652ee..0000000000 --- a/docs-antora/modules/reports/pages/introduction.adoc +++ /dev/null @@ -1,4 +0,0 @@ -= Introduction = -:toc: - -Learn how to create and use reports in Evergreen. diff --git a/docs-antora/modules/reports/pages/reporter_add_data_source.adoc b/docs-antora/modules/reports/pages/reporter_add_data_source.adoc deleted file mode 100644 index 8496222bae..0000000000 --- a/docs-antora/modules/reports/pages/reporter_add_data_source.adoc +++ /dev/null @@ -1,260 +0,0 @@ -= Adding Data Sources to Reporter = -:toc: - -indexterm:[reports, adding data sources] - -You can further customize your Evergreen reporting environment by adding -additional data sources. - -The Evergreen reporter module does not build and execute SQL queries directly, -but instead uses a data abstraction layer called *Fieldmapper* to mediate queries -on the Evergreen database. Fieldmapper is also used by other core Evergreen DAO -services, including cstore and permacrud. The configuration file _fm_IDL.xml_ -contains the mapping between _Fieldmapper_ class definitions and the database. -The _fm_IDL.xml_ file is located in the _/openils/conf_ directory. - -indexterm:[fm_IDL.xml] - -There are 3 basic steps to adding a new data source. Each step will be discussed -in more detail in the sections that follow. - -. Create a PostgreSQL query, view, or table that will provide the data for your -data source. -. Add a new class to _fm_IDL.xml_ for your data source. -. Restart the affected services to see the new data source in Reporter.
- -There are two possible sources for new data sources: - -indexterm:[PostgreSQL] - -indexterm:[SQL] - -* An SQL query built directly into the class definition in _fm_IDL.xml_. You can -use this method if you are only going to access this data source through the -Evergreen reporter and/or cstore code that you write. -* A new table or view in the Evergreen PostgreSQL database on which a class -definition in _fm_IDL.xml_ is based. You can use this method if you want to be able to -access this data source directly through SQL or by using another reporting tool. - -== Create a PostgreSQL query, view, or table for your data source == - -indexterm:[PostgreSQL] - -You need to decide whether you will create your data source as a query, a view, -or a table. - -. Create a query if you are planning to access this data source only through the -Evergreen reporter and/or cstore code that you write. You will use this query to -create an IDL-only view. -. Create a view if you are planning to access this data source through other -methods in addition to the Evergreen reporter, or if you may need to do -performance tuning to optimize your query. -. You may also need to use an additional table as part of your data source if -you have additional data that's not included in base Evergreen, or if you -need to use a table to store the results of a query for performance reasons. - -To develop and test queries, views, and tables, you will need: - -* Access to the Evergreen PostgreSQL database at the command line. This is -normally the psql application. See the -https://www.postgresql.org/docs/[official PostgreSQL documentation] for -more information about PostgreSQL. -* Knowledge of the Evergreen database structure for the data that you want to -access.
You can find this information in the -http://docs.evergreen-ils.org/2.2/schema/[Evergreen schema]. - -indexterm:[database schema] - -If the views that you are creating are purely local in usage and are not intended -for contribution to the core Evergreen code, create the views and tables in the -extend_reporter schema. This schema is intended to be used for local -customizations and will not be modified during upgrades to the Evergreen system. - -You should make sure that you have an appropriate version control process for the SQL -used to create your data sources. - -Here's an example of a view created to incorporate some locally defined user -statistical categories: - -.example view for reports ------------------------------------------------------------- -create view extend_reporter.patronstats as -select u.id, -grp.name as "ptype", -rl.stat_cat_entry as "reg_lib", -gr.stat_cat_entry as "gender", -ag.stat_cat_entry as "age_group", -EXTRACT(YEAR FROM age(u.dob)) as "age", -hl.id as "home_lib", -u.create_date, -u.expire_date, -ms.balance_owed -from actor.usr u -join permission.grp_tree grp - on (u.profile = grp.id and (grp.parent = 2 or grp.name = 'patron')) -join actor.org_unit hl on (u.home_ou = hl.id) -left join money.open_usr_summary ms - on (ms.usr = u.id) -left join actor.stat_cat_entry_usr_map rl - on (u.id = rl.target_usr and rl.stat_cat = 4) -left join actor.stat_cat_entry_usr_map bt - on (u.id = bt.target_usr and bt.stat_cat = 3) -left join actor.stat_cat_entry_usr_map gr - on (u.id = gr.target_usr and gr.stat_cat = 2) -left join actor.stat_cat_entry_usr_map ag - on (u.id = ag.target_usr and ag.stat_cat = 1) -where u.active = 't' and u.deleted <> 't'; ------------------------------------------------------------- - -== Add a new class to fm_IDL.xml for your data source == - -Once you have your data source, the next step is to add that data
source as a -new class in _fm_IDL.xml_. - -indexterm:[fm_IDL.xml] -indexterm:[fieldmapper] -indexterm:[report sources] - -You will need to add the following attributes for the class definition: - -* *id*. You should follow a consistent naming convention for your class names -that won't create conflicts in the future with any standard classes added in -future upgrades. Evergreen normally names each class with the first letter of -each word in the schema and table names. You may want to add a local prefix or -suffix to your local class names. -* *controller=”open-ils.cstore”* -* *oils_obj:fieldmapper=”extend_reporter::long_name_of_view”* -* *oils_persist:readonly=”true”* -* *reporter:core=”true”* (if you want this to show up as a “core” reporting source) -* *reporter:label*. This is the name that will appear on the data source list in -the Evergreen reporter. -* *oils_persist:source_definition*. If this is an IDL-only view, add the SQL query -here. You don't need this attribute if your class is based on a PostgreSQL view -or table. -* *oils_persist:tablename="schemaname.viewname or tablename"* If this class is -based on a PostgreSQL view or table, add the table name here. You don't need -this attribute if your class is an IDL-only view. - -For each column in the view or query output, add a field element and set the -following attributes. The field elements should be wrapped with _<fields> </fields>_: - -* *reporter:label*. This is the name that appears in the Evergreen reporter. -* *name*. This should match the column name in the view or query output. -* *reporter:datatype* (which can be id, bool, money, org_unit, int, number, -interval, float, text, timestamp, or link) - -For each linking field, add a link element with the following attributes.
The link -elements should be wrapped with _<links> </links>_: - -* *field* (should match field.name) -* *reltype* (“has_a”, “might_have”, or “has_many”) -* *map* (“”) -* *key* (name of the linking field in the foreign table) -* *class* (ID of the IDL class of the table that is to be linked to) - -The following example is a class definition for the example view that was created -in the previous section. - -.example class definition for reports ------------------------------------------------------------- 
-<class id="erpstats" controller="open-ils.cstore"
-    oils_obj:fieldmapper="extend_reporter::patronstats"
-    oils_persist:tablename="extend_reporter.patronstats"
-    oils_persist:readonly="true"
-    reporter:core="true"
-    reporter:label="Patron Statistics">
-    <fields oils_persist:primary="id">
-        <field reporter:label="Patron ID" name="id" reporter:datatype="link"/>
-        <field reporter:label="Patron Type" name="ptype" reporter:datatype="text"/>
-        <field reporter:label="Registering Library" name="reg_lib" reporter:datatype="text"/>
-        <field reporter:label="Gender" name="gender" reporter:datatype="text"/>
-        <field reporter:label="Age Group" name="age_group" reporter:datatype="text"/>
-        <field reporter:label="Age" name="age" reporter:datatype="int"/>
-        <field reporter:label="Home Library" name="home_lib" reporter:datatype="org_unit"/>
-        <field reporter:label="Create Date" name="create_date" reporter:datatype="timestamp"/>
-        <field reporter:label="Expire Date" name="expire_date" reporter:datatype="timestamp"/>
-        <field reporter:label="Balance Owed" name="balance_owed" reporter:datatype="money"/>
-    </fields>
-    <links>
-        <link field="id" reltype="might_have" key="id" map="" class="au"/>
-        <link field="home_lib" reltype="has_a" key="id" map="" class="aou"/>
-    </links>
-</class>
------------------------------------------------------------- - -NOTE: _fm_IDL.xml_ is used by other core Evergreen DAO services, including cstore -and permacrud. So changes to this file can affect the entire Evergreen -application, not just reporter. After making changes to _fm_IDL.xml_, it is a good -idea to ensure that it is valid XML by using a utility such as *xmllint* – a -syntax error can render much of Evergreen nonfunctional. Set up a good change -control system for any changes to fm_IDL.xml. You will need to keep a separate -copy of your local class definitions so that you can reapply the changes to -_fm_IDL.xml_ after Evergreen upgrades. - -== Restart the affected services to see the new data source in the reporter == - -The following steps are needed for Evergreen to recognize the changes to -_fm_IDL.xml_: - -. Copy the updated _fm_IDL.xml_ into place: -+ -------------- -cp fm_IDL.xml /openils/conf/. -------------- -+ -. (Optional) Make the reporter version of fm_IDL.xml match the core version. -Evergreen systems supporting only one interface language will normally find -that _/openils/var/web/reports/fm_IDL.xml_ is a symbolic link pointing to -_/openils/conf/fm_IDL.xml_, so no action will be required. However, systems -supporting multiple interfaces will have a different version of _fm_IDL.xml_ in -the _/openils/var/web/reports_ directory.
The _right_ way to update this is to -go through the Evergreen internationalization build process to create the -entity form of _fm_IDL.xml_ and the updated _fm_IDL.dtd_ files for each -supported language. However, that is outside the scope of this document. If you -can accept the reporter interface supporting only one language, then you can -simply copy your updated version of _fm_IDL.xml_ into the -_/openils/var/web/reports_ directory: -+ -------------- -cp /openils/conf/fm_IDL.xml /openils/var/web/reports/. -------------- -+ -. As the *opensrf* user, run Autogen to update the JavaScript versions of -the fieldmapper definitions. -+ -------------- -/openils/bin/autogen.sh -------------- -+ -. As the *opensrf* user, restart services: -+ -------------- -osrf_control --localhost --restart-services -------------- -+ -. As the *root* user, restart the Apache web server: -+ -------------- -service apache2 restart -------------- -+ -. As the *opensrf* user, restart the Evergreen reporter. You may need to modify -this command depending on your system configuration and PID path: -+ ------------- -opensrf-perl.pl -l -action restart -service open-ils.reporter \ --config /openils/conf/opensrf_core.xml -pid-dir /openils/var/run ------------- -+ -. Restart the Evergreen staff client, or use *Admin --> For Developers --> - Clear Cache* - diff --git a/docs-antora/modules/reports/pages/reporter_cloning_shared_templates.adoc b/docs-antora/modules/reports/pages/reporter_cloning_shared_templates.adoc deleted file mode 100644 index 3d4b8ba09a..0000000000 --- a/docs-antora/modules/reports/pages/reporter_cloning_shared_templates.adoc +++ /dev/null @@ -1,42 +0,0 @@ -= Cloning Shared Templates = -:toc: - -indexterm:[reports, cloning] - -This chapter describes how to make local copies of shared templates for routine -reports or as a starting point for customization.
When creating a new template -it is a good idea to review the shared templates first: even if the exact -template you need does not exist, it is often faster to modify an existing -template than to build a brand new one. A Local System Administrator account is -required to clone templates from the _Shared Folders_ section and save them to _My -Folders_. - -The steps below assume you have already created at least one _Templates_ folder. -If you haven’t done this, please see -xref:reports:reporter_folder.adoc#reporter_creating_folders[Creating Folders]. - -. Access the reports interface from _Administration_ -> _Reports_ -. Under _Shared Folders_ expand the _Templates_ folder and the subfolder of the -report you wish to clone. To expand the folders, click on the grey arrow or -folder icon. Do not click on the blue underlined hyperlink. -. Click on the subfolder. -. Select the template you wish to clone. From the dropdown menu choose _Clone -selected templates_, then click _Submit_. -+ -NOTE: By default Evergreen only displays the first 10 items in any folder. To view -all content, change the Limit output setting from 10 to All. -+ -. Choose the folder where you want to save the cloned template, then click -_Select Folder_. Only template folders created with your account will be visible. -If there are no folders to choose from, please see -xref:reports:reporter_folder.adoc#reporter_creating_folders[Creating Folders]. - -. The cloned template opens in the template editor. From here you may modify -the template by adding, removing, or editing fields and filters as described in -xref:reports:reporter_create_templates.adoc#reporter_creating_templates[Creating Templates]. _Template Name_ and -_Description_ can also be edited. When satisfied with your changes, click _Save_. - -. Click _OK_ in the resulting confirmation windows. - -Once saved, it is not possible to edit a template. To make changes, clone a -template and change the clone.
diff --git a/docs-antora/modules/reports/pages/reporter_create_templates.adoc b/docs-antora/modules/reports/pages/reporter_create_templates.adoc deleted file mode 100644 index 73d2417d70..0000000000 --- a/docs-antora/modules/reports/pages/reporter_create_templates.adoc +++ /dev/null @@ -1,289 +0,0 @@ -[[reporter_creating_templates]] -= Creating Templates = -:toc: - -indexterm:[reports, creating templates] - -Once you have created a folder, the next step in building a report is to create -or clone a template. Templates allow you to run a report more than once without -building it anew every time, by changing definitions to suit current -requirements. For example, you can create a shared template that reports on -circulation at a given library. Then, other libraries can use your template and -simply select their own library when they run the report. - -It may take several tries to refine a report to give the output that you want. -It can be useful to plan out your report on paper before getting started with -the reporting tool. Group together related fields and try to identify the key -fields that will help you select the correct source. - -It may be useful to create complex queries in several steps. For example, first -add all fields from the table at the highest source level. Run a report and check -to see that you get results that seem reasonable. Then clone the report, add any -filters on fields at that level and run another report. Then drill down to the -next table and add any required fields. Run another report. Add any filters at -that level. Run another report. Continue until you’ve drilled down to all the -fields you need and added all the filters. This might seem time-consuming and -you will end up cloning your initial report several times. However, it will help -you to check the correctness of your results, and will help to debug if you run -into problems because you will know exactly what changes caused the problem.
-Also consider adding extra fields in the intermediate steps to help you check -your results for correctness. - -This example illustrates creating a template for circulation statistics. This is -an example of the most basic template that you can create. The steps required to -create a template are the same every time, but the tables chosen, how the data -is transformed and displayed, and the filters used will vary depending on your -needs. - -== Choosing Report Fields == - -indexterm:[reports, creating templates, choosing report fields] - -. Click on the My Folder template folder where you want the template to be saved. -+ -image::media/create-template-1.png[create-template-1] -+ -. Click on Create a new Template for this folder. -+ -image::media/create-template-2.png[create-template-2] -+ -. You can now see the template creation interface. The upper half of the screen -is the _Database Source Browser_. The top left hand pane contains the database -_Sources_ drop-down list. This is the list of tables available as a starting point -for your report. Commonly used sources are _Circulation_ (for circ stats and -overdue reports), _ILS User_ (for patron reports), and _Item_ (for reports on a -library's holdings). -+ -image::media/create-template-3.png[create-template-3] -+ -The Enable source nullability checkbox below the sources list is for advanced -reporting and should be left unchecked by default. -+ -. Select _Circulation_ in the _Sources_ dropdown menu. Note that the _Core -Sources_ for reporting are listed first; however, it is possible to access all -available sources at the bottom of this dropdown menu. You may only specify one -source per template. -+ -image::media/create-template-4.png[create-template-4] -+ -. Click on _Circulation_ to retrieve all the field names in the Field Name pane. -Note that the _Source_ Specifier (above the middle and right panes) shows the -path that you took to get to the specific field.
-+ -image::media/create-template-5.png[create-template-5] -+ -. Select _Circ ID_ in the middle _Field Name_ pane, and _Count Distinct_ from the -right _Field Transform_ pane. The _Field Transform_ pane is where you choose how -to manipulate the data from the selected fields. You are counting the number of -circulations. -+ -indexterm:[reports, field transform] -+ -image::media/create-template-6.png[create-template-6] -+ -_Field Transforms_ have either an _Aggregate_ or _Non-Aggregate_ output type. -See the section called -xref:reports:reporter_template_terminology.adoc#field_transforms[Field Transforms] for more about -_Count_, _Count Distinct_, and other transform options. -+ -. Click _Add Selected Fields_ underneath the _Field Transform_ pane to add this -field to your report output. Note that _Circ ID_ now shows up in the bottom left -hand pane under the _Displayed Fields_ tab. -+ -image::media/create-template-7.png[create-template-7] -+ -. _Circ ID_ will be the column header in the report output. You can rename -default display names to something more meaningful. To do so in this example, -select the _Circ ID_ row and click _Alter Display Header_. -+ -image::media/create-template-8.png[create-template-8] -+ -Double-clicking on the displayed field name is a shortcut to altering the -display header. -+ -. Type in the new column header name, for example _Circ count_ and click _OK_. -+ -image::media/create-template-9.png[create-template-9] -+ -. Add other data to your report by going back to the _Sources_ pane and selecting -the desired fields. In this example, we are going to add _Circulating Item --> -Shelving Location_ to further refine the circulation report. -+ -In the top left hand _Sources_ pane, expand _Circulation_. Depending on your -computer you will either click on the _+_ sign or on an arrow to expand the tree. -+ -image::media/create-template-10.png[create-template-10] -+ -Click on the _+_ or arrow to expand _Circulating Item_.
Select -_Shelving Location_. -+ -image::media/create-template-11.png[create-template-11] -+ -When you are creating a template, take the shortest path to the field you need in -the left hand Sources pane. Sometimes it is possible to find the same field name -further in the file structure, but the shortest path is the most efficient. -+ -In the _Field Name_ pane select _Name_. -+ -image::media/create-template-12.png[create-template-12] -+ -In the upper right _Field Transform_ pane, select _Raw Data_ and click _Add Selected -Fields_. Use _Raw Data_ when you do not wish to transform field data in any manner. -+ -image::media/create-template-13.png[create-template-13] -+ -_Name_ will appear in the bottom left pane. Select the _Name_ row and click _Alter -Display Header_. -+ -image::media/create-template-15.png[create-template-15] -+ -. Enter a new, more descriptive column header, for example, _Shelving location_. -Click _OK_. -+ -image::media/create-template-16.png[create-template-16] -+ -. Note that the order of rows (top to bottom) will correspond to the order of -columns (left to right) on the final report. Select _Shelving location_ and click -on _Move Up_ to move _Shelving location_ before _Circ count_. -+ -image::media/create-template-17.png[create-template-17] -+ -. Return to the _Sources_ pane to add more fields to your template. Under -_Sources_ click _Circulation_, then select _Check Out Date/Time_ from the middle -_Field Name_ pane. -+ -image::media/create-template-19.png[create-template-19] -+ -. Select _Year + Month_ in the right hand _Field Transform_ pane and click _Add -Selected Fields_. -+ -image::media/create-template-20.png[create-template-20] -+ -. _Check Out Date/Time_ will appear in the _Displayed Fields_ pane. In the report -it will appear as a year and month _(YYYY-MM)_ corresponding to the selected transform. -+ -image::media/create-template-21.png[create-template-21] -+ -. Select the _Check Out Date/Time_ row.
Click _Alter Display Header_ and change -the column header to _Check out month_. -+ -image::media/create-template-22.png[create-template-22] -+ -. Move _Check out month_ to the top of the list using the _Move Up_ button, so -that it will be the first column in an MS Excel spreadsheet or in a chart. -Report output will sort by the first column. - -image::media/create-template-23.png[create-template-23] - -[NOTE] -====== -Note the _Change Transform_ button in the bottom left hand pane. It has the same -function as the upper right _Field Transform_ pane for fields that have already -been added. - -image::media/create-template-24.png[create-template-24] -====== - - -== Applying Filters == - -indexterm:[reports, applying filters] - -Evergreen reports access the entire database, so to limit report output to a -single library or library system, you need to apply filters. - -After following the steps in the previous section, you will see three fields in -the bottom left hand _Template Configuration_ pane. There are three tabs in this -pane: _Displayed Fields_ (covered in the previous section), _Base Filters_ and -_Aggregate Filters_. A filter allows you to return only the results that meet -the criteria you set. - -indexterm:[reports, applying filters, base filter] - -indexterm:[reports, applying filters, aggregate filters] - -_Base Filters_ apply to non-aggregate output types, while _Aggregate Filters_ are -used for aggregate types. In most reports you will be using the _Base Filters_ tab. -For more information on aggregate and non-aggregate types see the section called -“Field Transforms”. - -There are many available operators when using filters. Some examples are _Equals_, -_In list_, _Is NULL_, _Between_, _Greater than or equal to_, and so on. _In list_ -is the most flexible operator, and in this case will allow you flexibility when -running a report from this template.
For example, it would be possible to run a -report on a list of timestamps (which in this case will be trimmed to year and month -only), run a report on a single month, or run a report comparing two months. It -is also possible to set up recurring reports to run at the end of each month. - -In this example, we are going to use a Base Filter to filter out one library’s -circulations for a specified time frame. The time frame in the template will be -configured so that you can change it each time you run the report. - -=== Using Base Filters === - -indexterm:[reports, applying filters, base filter] - -. Select the _Base Filters_ tab in the bottom _Template Configuration_ pane. - -. For this circulation statistics example, select _Circulation --> Check Out -Date/Time --> Year + Month_ and click on _Add Selected Fields_. You are going to -filter on the time period. -+ -image::media/create-template-25.png[create-template-25] -+ -. Select _Check Out Date/Time_. Click on _Change Operator_ and select _In list_ -from the dropdown menu. -+ -image::media/create-template-26.png[create-template-26] -+ -. To filter on the location of the circulation, select -_Circulation --> Circulating library --> Raw Data_ and click on _Add Selected Fields_. -+ -image::media/create-template-27.png[create-template-27] -+ -. Select _Circulating Library_ and click on _Change Operator_ and select _Equals_. -Note that this is a template, so the value for _Equals_ will be filled out when -you run the report. -+ -image::media/create-template-28.png[create-template-28] -+ -For multi-branch libraries, you would select _Circulating Library_ with _In list_ -as the operator, so you could specify the branch(es) when you run the report. This -leaves the template configurable to current requirements. In comparison, sometimes -you will want to hardcode true/false values into a template.
For example, deleted -bibliographic records remain in the database, so perhaps you want to hardcode -_deleted=false_, so that deleted records don’t show up in the results. You might -want to use _deleted=true_ for a template for a report on items deleted in the -last month. -+ -. Once you have configured your template, you must name and save it. Name this -template _Circulations by month for one library_. You can also add a description. -In this example, the title is descriptive enough, so a description is not necessary. -Click _Save_. -+ -image::media/create-template-29.png[create-template-29] -+ -. Click _OK_. -+ -image::media/create-template-30.png[create-template-30] -+ -. You will get a confirmation dialogue box that the template was successfully -saved. Click OK. -+ -image::media/create-template-31.png[create-template-31] -+ -After saving, it is not possible to edit a template. To make changes you will -need to clone it and edit the clone. - -[NOTE] -========== -The bottom right hand pane is also a source specifier. By selecting one of these -rows you will limit the fields that are visible to the sources you have specified. -This may be helpful when reviewing templates with many fields. Use *Ctrl+Click* to -select or deselect items. - -image::media/create-template-32.png[create-template-32] -========== - - - diff --git a/docs-antora/modules/reports/pages/reporter_daemon.adoc b/docs-antora/modules/reports/pages/reporter_daemon.adoc deleted file mode 100644 index 4066851821..0000000000 --- a/docs-antora/modules/reports/pages/reporter_daemon.adoc +++ /dev/null @@ -1,62 +0,0 @@ -= Starting and Stopping the Reporter Daemon = -:toc: - -indexterm:[reports, starting server application] - -indexterm:[reporter, starting daemon] - -Before you can view reports, the Evergreen administrator must start -the reporter daemon from the command line of the Evergreen server.
- -The reporter daemon periodically checks for requests for new reports or -scheduled reports and gets them running. - -== Starting the Reporter Daemon == - -indexterm:[reporter, starting] - -To start the reporter daemon, run the following command as the opensrf user: - ----- -clark-kent.pl --daemon ----- - -You can also specify other options: - -* *sleep=interval*: number of seconds to sleep between checks for new reports to -run; defaults to 10 -* *lockfile=filename*: where to place the lockfile for the process; defaults to -/tmp/reporter-LOCK -* *concurrency=integer*: number of reporter daemon processes to run; defaults to -1 -* *bootstrap=filename*: OpenSRF bootstrap configuration file; defaults to -/openils/conf/opensrf_core.xml - - -[NOTE] -============= -The open-ils.reporter process must be running and enabled on the gateway before -the reporter daemon can be started. - -Remember that if the server is restarted, the reporter daemon will need to be -restarted before you can view reports unless you have configured your server to -start the daemon automatically at start up time. -============= - -== Stopping the Reporter Daemon == - -indexterm:[reports, stopping server application] - -indexterm:[reporter, stopping daemon] - -To stop the reporter daemon, you have to kill the process and remove the -lockfile. 
Assuming you're running just a single process and that the -lockfile is in the default location, perform the following commands as the -opensrf user: - ----- -kill `ps wax | grep "Clark Kent" | grep -v grep | cut -b1-6` - -rm /tmp/reporter-LOCK ----- - diff --git a/docs-antora/modules/reports/pages/reporter_export_usingpgAdmin.adoc b/docs-antora/modules/reports/pages/reporter_export_usingpgAdmin.adoc deleted file mode 100644 index 9fb5370362..0000000000 --- a/docs-antora/modules/reports/pages/reporter_export_usingpgAdmin.adoc +++ /dev/null @@ -1,54 +0,0 @@ -= Exporting Report Templates Using phpPgAdmin = -:toc: - -indexterm:[reports, exporting templates] - -Once the data is exported, Database Administrators/Systems Administrators can -easily import it into the templates folder to make it available in the -client. - -== Dump the Entire Reports Template Table == - -The data exported in this method can create issues importing into a different -system if you do not have a matching folder and owner. This is going to export -report templates created in your system. The most important fields for importing -into the new system are _name_, _description_, and _data_. Data defines the actual -structure of the report. The _owner_ and _folder_ fields will be unique to the system -they were exported from and will have to be altered to ensure they match the -appropriate owner and folder information for the new system. - -. Go to the *Reporter* schema. Report templates are located in the *Template* table. -. Click on the link to the *Template* table -. Click the *export* button at the top right of the phpPgAdmin screen -. Make sure the following is selected -.. _Data Only_ (checked) -.. _Format_: Select _CSV_ or _Tabbed_ to get the data in a text format -.. _Download_ checked -. Click the _export_ button at the bottom -.
A text file will download to your local system - -== Dump Data with an SQL Statement == - - -The following statement could be used to grab the data in the folder and dump it -with admin account as the owner and the first folder in your system. - -------------- -SELECT 1 as owner, name, description, data, 1 as folder FROM reporter.template -------------- - -or use the following to capture your folder names for export - --------------- -SELECT 1 as owner, t.name, t.description, t.data, f.name as folder - FROM reporter.template t - JOIN reporter.template_folder f ON t.folder=f.id --------------- - -. Run the above query -. Click the *download* link at the bottom of the page -. Select the file format (_CSV_ or _Tabbed_) -. Check _download_ -. A text file with the report template data will be downloaded. - - diff --git a/docs-antora/modules/reports/pages/reporter_folder.adoc b/docs-antora/modules/reports/pages/reporter_folder.adoc deleted file mode 100644 index 239e85e69b..0000000000 --- a/docs-antora/modules/reports/pages/reporter_folder.adoc +++ /dev/null @@ -1,74 +0,0 @@ -[[reporter_folders]] -= Folders = -:toc: - -indexterm:[reports, folders] - -There are three main components to reports: _Templates_, _Reports_, and _Output_. -Each of these components must be stored in a folder. Folders can be private -(accessible to your login only) or shared with other staff at your library, -other libraries in your system or consortium. It is also possible to selectively -share -only certain folders and/or subfolders. - -There are two parts to the folders pane. The _My Folders_ section contains folders -created with your Evergreen account. Folders that other users have shared with -you appear in the _Shared Folders_ section under the username of the sharing -account. 
- -image::media/folder-1.png[folder-1] - -[[reporter_creating_folders]] -== Creating Folders == - - -indexterm:[reports, folders, creating] - -Whether you are creating a report from scratch or working from a shared template -you must first create at least one folder. - -The steps for creating folders are similar for each reporting function. It is -easier to create folders for templates, reports, and output all at once at the -beginning, though it is possible to do it before each step. This example -demonstrates creating a folder for a template. - -. Click on _Templates_ in the _My Folders_ section. -. Name the folder. Select _Share_ or _Do not share_ from the dropdown menu. -. If you want to share your folder, select who you want to share this folder -with from the dropdown menu. -. Click _Create Sub Folder_. -. Click _OK_. -. Next, create a folder for the report definition to be saved to. Click on -_Reports_. -. Repeat steps 2-5 to create a Reports folder also called _Circulation_. -. Finally, you need to create a folder for the report’s output to be saved in. -Click on _Output_. -. Repeat steps 2-5 to create an Output folder named _Circulation_. - - -TIP: Using a parallel naming scheme for folders in Templates, Reports, -and Output helps keep your reports organized and easier to find - -The folders you just created will now be visible by clicking the arrows in _My -Folders_. Bracketed after the folder name is whom the folder is shared with. For -example, _Circulation (BNCLF)_ is shared with the North Coast Library Federation. -If it is not a shared folder there will be nothing after the folder name. You -may create as many folders and sub-folders as you like. - -== Managing Folders == - -indexterm:[reports, folders, managing] - -Once a folder has been created you can change the name, delete it, create a new -subfolder, or change the sharing settings. This example demonstrates changing a -folder name; the other choices follow similar steps - -. 
Click on the folder that you wish to rename. -. Click _Manage Folder_. -. Select _Change folder name_ from the dropdown menu and click _Go_. -. Enter the new name and click _Submit_. -. Click _OK_. -. You will get a confirmation box that the _Action Succeeded_. Click _OK_. - - - diff --git a/docs-antora/modules/reports/pages/reporter_generating_reports.adoc b/docs-antora/modules/reports/pages/reporter_generating_reports.adoc deleted file mode 100644 index 12859236f2..0000000000 --- a/docs-antora/modules/reports/pages/reporter_generating_reports.adoc +++ /dev/null @@ -1,109 +0,0 @@ -[[generating_reports]] -= Generating Reports from Templates = -:toc: - -indexterm:[reports, generating] - -Now you are ready to run the report from the template you have created. - -. In the My Folders section, click the arrow next to _Templates_ to expand this -folder and select _circulation_. -+ -image::media/generate-report-1.png[generate-report-1] -+ -. Select the box beside _Circulations by month for one library_. Select _Create a -new report from selected template_ from the dropdown menu. Click _Submit_. -+ -image::media/generate-report-2.png[generate-report-2] -+ -. Complete the first part of report settings. Only _Report Name_ and _Choose a -folder_... are required fields. -+ -image::media/generate-report-3.png[generate-report-3] -+ -1) _Template Name_, _Template Creator_, and _Template Description_ are for -informational purposes only. They are hard coded when the template is created. -At the report definition stage it is not possible to change them. -+ -2) _Report Name_ is required. Reports stored in the same folder must have unique -names. -+ -3) _Report Description_ is optional but may help distinguish among similar -reports. -+ -4) _Report Columns_ lists the columns that will appear in the output. This is -derived from the template and cannot be changed during report definition. -+ -5) _Pivot Label Column_ and _Pivot Data Column_ are optional.
Pivot tables are a -different way to view data. If you currently use pivot tables in MS Excel, it is -better to select an Excel output and continue using pivot tables in Excel. -+ -6) You must choose a report folder to store this report definition. Only report -folders under My Folders are available. Click on the desired folder to select it. -+ -. Select values for the _Circulation > Check Out Date/Time_. Use the calendar -widget or manually enter the desired dates, then click _Add_ to include the date -in the list. You may add multiple dates. -+ -image::media/generate-report-8.png[generate-report-8] -+ -The Transform for this field is Year + Month, so even if you choose a specific -date (2009-10-20) it will appear as the corresponding month only (2009-10). -+ -It is possible to select *relative dates*. If you select a relative date of 1 month -ago, you can schedule reports to automatically run each month. If you want to run -monthly reports that also show comparative data from one year ago, select -relative dates of 1 month ago and 13 months ago. -+ -. Select a value for the _Circulating Library_. -. Complete the bottom portion of the report definition interface, then click -_Save_. -+ -image::media/generate-report-10.png[generate-report-10] -+ -1) Select one or more output formats. In this example the report output will be -available as an Excel spreadsheet, an HTML table (for display in the staff -client or browser), and as a bar chart. -+ -2) If you want the report to be recurring, check the box and select the -_Recurrence Interval_ as described in -xref:reports:reporter_running_recurring_reports.adoc#recurring_reports[Recurring Reports]. -In this example, as this is a report that will only be run once, the _Recurring -Report_ box is not checked. -+ -3) Select _Run as soon as possible_ for immediate output. It is also possible to -set up reports that run automatically at future intervals.
-+ -4) It is optional to fill out an email address where a completion notice can be -sent. The email will contain a link to password-protected report output (staff -login required). If you have an email address in your Local System Administrator -account, it will automatically appear in the email notification box. However, -you can enter a different email address or multiple addresses separated by commas. -+ -. Select a folder for the report's output. -. You will get a confirmation dialogue box that the Action Succeeded. Click _OK_. -+ -image::media/generate-report-14.png[generate-report-14] -+ -Once saved, reports stay there forever unless you delete them. - -== Viewing and Editing Report Parameters == - -New options to view or edit report parameters are available from the reports folder. - -To view the parameters of a report, select the report that you want to view from the *Reports* folder, and click *View*. This will enable you to view the report, including links to external documentation and field hints. However, you cannot make any changes to the report. - -image::media/2_7_Enhancements_to_Reports4.jpg[Reports4] - - -To edit the parameters of a report, select the report that you want to view from the *Reports* folder, and click *Edit*. After making changes, you can *Save [the] Report* or *Save as New*. If you *Save the Report*, any subsequent report outputs that are generated from this report will reflect the changes that you have made. - -In addition, whenever there is a pending (scheduled, but not yet started) report output, the interface will warn you that the pending output will be modified. At that point, you can either continue or choose the alternate *Save as New* option, leaving the report output untouched. - - -image::media/2_7_Enhancements_to_Reports6.jpg[Reports6] - - -If, after making changes, you select *Save as New*, then you have created a new report by cloning and amending a previously existing report.
Note that if you create a new report, you will be prompted to rename the new report. Evergreen does not allow two reports with the same name to exist. To view or edit your new report, select the reports folder to which you saved it. - -image::media/2_7_Enhancements_to_Reports5.jpg[Reports5] diff --git a/docs-antora/modules/reports/pages/reporter_running_recurring_reports.adoc b/docs-antora/modules/reports/pages/reporter_running_recurring_reports.adoc deleted file mode 100644 index eec0f39bea..0000000000 --- a/docs-antora/modules/reports/pages/reporter_running_recurring_reports.adoc +++ /dev/null @@ -1,42 +0,0 @@ -[[recurring_reports]] -= Running Recurring Reports = -:toc: - -indexterm:[reports, recurring] - -Recurring reports are a useful way to save time by scheduling reports that you -run on a regular basis, such as monthly circulation and monthly patron -registration statistics. When you have set up a report to run on a monthly basis -you’ll get an email informing you that the report has successfully run. You can -click on a link in the email that will take you directly to the report output. -You can also access the output through the reporter interface as described in -xref:reports:reporter_view_output.adoc#viewing_report_output[Viewing Report Output]. - -To set up a monthly recurring report follow the procedure in -xref:reports:reporter_generating_reports.adoc#generating_reports[Generating Reports from Templates] but make the changes described below. - -. Select the Recurring Report check-box and set the recurrence interval to 1 month. -. Do not select Run ASAP. Instead schedule the report to run early on the first -day of the next month. Enter the date in _YYYY-MM-DD_ format. -. Ensure there is an email address to receive completion emails. You will -receive an email completion notice each month when the output is ready. -. Select a folder for the report’s output. -. Click Save Report. -. You will get a confirmation dialogue box that the Action Succeeded. 
Click OK. - -You will get an email on the 1st of each month with a link to the report output. -Clicking this link will open the output in a web browser. It is still -possible to log in to the staff client and access the output in the Output folder. - -*How to stop or make changes to an existing recurring report?* Sometimes you may -wish to stop or make changes to a recurring report, e.g. the recurrence interval, -generation date, email address to receive completion email, output format/folder -or even filter values (such as the number of days overdue). You will need to -delete the current report from the report folder, then use the above procedure -to set up a new recurring report with the desired changes. Please note that -deleting a report also deletes all output associated with it. - -TIP: Once you have been on Evergreen for a year, you could set up your recurring -monthly reports to show comparative data from one year ago. To do this, select -relative dates of 1 month ago and 13 months ago. - diff --git a/docs-antora/modules/reports/pages/reporter_template_enhancements.adoc b/docs-antora/modules/reports/pages/reporter_template_enhancements.adoc deleted file mode 100644 index 31b948c809..0000000000 --- a/docs-antora/modules/reports/pages/reporter_template_enhancements.adoc +++ /dev/null @@ -1,30 +0,0 @@ -= Template Enhancements = -:toc: - -== Documentation URL == - -You can add a link to local documentation that can help staff create a report template. To add documentation to a report template, click *Admin* -> *Local Administration* -> *Reports*, and create a new report template. A new field, *Documentation URL*, appears in the *Template Configuration* panel. Enter a URL that points to relevant documentation. - - -image::media/2_7_Enhancements_to_Reports1.jpg[Reports1] - - -The link to this documentation will also appear in your list of report templates.
- - -image::media/2_7_Enhancements_to_Reports2a.jpg[Reports2a] - -== Field Hints == - -Descriptive information about fields or filters in a report template can be added to the *Field Hints* portion of the *Template Configuration* panel. For example, a circulation report template might include the field, *Circ ID*. You can add content to the *Field hints* to further define this field for staff and provide a reminder about the type of information that they should select for this field. - - -To view a field hint, click the *Column Picker*, and select *Field Hint*. The column will be added to the display. - -image::media/2_7_Enhancements_to_Reports2.jpg[Reports2] - - -To add or edit a field hint, select a filter or field, and click *Change Field Hint*. Enter text, and click *Ok*. - - -image::media/2_7_Enhancements_to_Reports3.jpg[Reports3] diff --git a/docs-antora/modules/reports/pages/reporter_template_terminology.adoc b/docs-antora/modules/reports/pages/reporter_template_terminology.adoc deleted file mode 100644 index 81185d9628..0000000000 --- a/docs-antora/modules/reports/pages/reporter_template_terminology.adoc +++ /dev/null @@ -1,124 +0,0 @@ -= Template Terminology = -:toc: - -== Data Types == - -indexterm:[reports, data types] - -The information in Evergreen's database can be classified in nine data types, formats that describe the type of data and/or its use. These were represented by text-only labels in prior versions of Evergreen. Evergreen 3.0 has replaced the text labels with icons. When building templates in _Reports_, you will find these icons in the Field Name Pane of the template creation interface. - -=== timestamp === -image::media/datatypes_timestamp.png[] - -An exact date and time (year, month, day, hour, minutes, and seconds). Remember to select the appropriate date/time transform. Raw Data includes second and timezone information, which is usually more than is required for a report. 
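The date/time transforms mentioned throughout this chapter trim a raw timestamp down to the precision a report actually needs. As a rough illustration only (the reporter applies such transforms server-side in SQL, not in Python), the _Year + Month_ transform reduces a full timestamp to _YYYY-MM_, as in the 2009-10-20 example used earlier:

```python
from datetime import datetime

# Illustrative sketch: a raw timestamp carries date, time, and (in the real
# database) timezone detail -- usually more than a report requires.
raw = datetime(2009, 10, 20, 14, 35, 52)

# The "Year + Month" transform keeps only the YYYY-MM portion.
year_month = raw.strftime("%Y-%m")
print(year_month)  # 2009-10
```

Choosing the right transform at template-creation time is what determines whether the output groups by second, day, or month.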
- -=== link === - -image::media/datatypes_link.png[] - -A link to another database table. Link outputs a number that is a meaningful reference for the database but not of much use to a human user. You will usually want to drill further down the tree in the Sources pane and select fields from the linked table. However, in some instances you might want to use a link field. For example, to count the number of patrons who borrowed items you could do a count on the Patron link data. - -=== text === -image::media/datatypes_text.png[] - -A field of text. You will usually want to use the Raw Data transform. - -=== bool === -image::media/datatypes_bool.png[] - -True or False. Commonly used to filter out deleted item or patron records. - -=== org_unit === -image::media/datatypes_orgunit.png[] - -Organizational Unit - a number representing a library, library system, or federation. When you want to filter on a library, make sure that the field name is on an org_unit or id data type. - -=== id === - -image::media/datatypes_id.png[] - -A unique number assigned by the database to identify each record. These numbers are meaningful references for the database but not of much use to a human user. Use in displayed fields when counting records or in filters. - -=== money === - -image::media/datatypes_money.png[] - -A monetary amount. - -=== int === - -image::media/datatypes_int.png[] - -Integer (a number) - -=== interval === - -image::media/datatypes_interval.png[] - -A period of time. - -[[field_transforms]] -== Field Transforms == - -indexterm:[reports, field transforms] - -A _Field Transform_ tells the reporter how to process a field for output. -Different data types have different transform options. - -indexterm:[reports, field transforms, raw data] - -*Raw Data*. To display a field exactly as it appears in the database use the -_Raw Data_ transform, available for all data types. 
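In SQL terms, _Raw Data_ corresponds to selecting a column as-is, while the counting transforms wrap it in an aggregate. The following is a minimal, hypothetical sketch using an in-memory SQLite table (the reporter's real queries run against Evergreen's PostgreSQL schema and are considerably more involved), mirroring the three-circulation example in this section:

```python
import sqlite3

# Hypothetical circulation table mirroring the example table in this section.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE circulation (title TEXT, patron_id TEXT)")
conn.executemany(
    "INSERT INTO circulation VALUES (?, ?)",
    [
        ("Harry Potter and the Chamber of Secrets", "001"),
        ("Northern Lights", "001"),
        ("Harry Potter and the Philosopher's Stone", "222"),
    ],
)

# "Count": every circulation row, duplicate patron IDs included.
total_circs = conn.execute(
    "SELECT COUNT(patron_id) FROM circulation").fetchone()[0]

# "Count Distinct": unique patrons who borrowed at least one item.
active_patrons = conn.execute(
    "SELECT COUNT(DISTINCT patron_id) FROM circulation").fetchone()[0]

print(total_circs, active_patrons)  # 3 circulations by 2 distinct patrons
```

This is the same distinction the _Count_ and _Count Distinct_ transforms expose in the template editor: one tallies rows, the other tallies unique values.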
- -indexterm:[reports, field transforms, count] - -indexterm:[reports, field transforms, count distinct] - -*Count and Count Distinct*. These transforms apply to the _id_ data type and -are used to count database records (e.g. for circulation statistics). Use _Count_ -to tally the total number of records. Use _Count Distinct_ to count the number -of unique records, removing duplicates. - -To demonstrate the difference between _Count_ and _Count Distinct_, consider an -example where you want to know the number of active patrons in a given month, -where ``active'' means they borrowed at least one item. Each circulation is linked -to a _Patron ID_, a number identifying the patron who borrowed the item. If we use -the _Count Distinct_ transform for Patron IDs, we will know the number of unique -patrons who circulated at least one book (2 patrons in the table below). If, -instead, we use _Count_, we will know how many books were circulated, since every -circulation is linked to a _Patron ID_ and duplicate values are also counted. To -identify the number of active patrons in this example, the _Count Distinct_ -transform should be used. - -[options="header,footer"] -|==================================== -|Title |Patron ID |Patron Name -|Harry Potter and the Chamber of Secrets |001 |John Doe -|Northern Lights |001 |John Doe -|Harry Potter and the Philosopher’s Stone |222 |Jane Doe -|==================================== - -indexterm:[reports, field transforms, output type] - -*Output Type*. Note that each transform has either an _Aggregate_ or -_Non-Aggregate_ output type. - -indexterm:[reports, field transforms, output type, non-aggregate] - -indexterm:[reports, field transforms, output type, aggregate] - -Selecting a _Non-Aggregate_ output type will return one row of output in your -report for each row in the database.
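The difference between the two transforms on the example table can be sketched in Python (an illustration only; in a real report the Evergreen reporter computes these as database aggregates):

```python
# The circulation table from the example above: each row is one
# circulation, linked to the Patron ID of the borrower.
circulations = [
    ("Harry Potter and the Chamber of Secrets", "001"),
    ("Northern Lights", "001"),
    ("Harry Potter and the Philosopher's Stone", "222"),
]

patron_ids = [patron_id for _title, patron_id in circulations]

# Count: every row is tallied, duplicates included.
count = len(patron_ids)

# Count Distinct: duplicates are removed before tallying.
count_distinct = len(set(patron_ids))

print(count)           # 3 circulations
print(count_distinct)  # 2 active patrons
```

As in the prose above, _Count_ answers "how many items circulated?" while _Count Distinct_ answers "how many patrons were active?".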
Selecting an _Aggregate_ output type will -group together several rows of the database and return just one row of output -with, say, the average value or the total count for that group. Other common -aggregate types include minimum, maximum, and sum. - -When used as filters, non-aggregate and aggregate types correspond to _Base_ and -_Aggregate_ filters, respectively. To see the difference between a base filter and -an aggregate filter, imagine that you are creating a report to count the number -of circulations in January. This would require a base filter to specify the -month of interest because the month is a non-aggregate output type. Now imagine -that you wish to list all items with more than 25 holds. This would require an -aggregate filter on the number of holds per item because you must use an -aggregate output type to count the holds. - diff --git a/docs-antora/modules/reports/pages/reporter_view_output.adoc b/docs-antora/modules/reports/pages/reporter_view_output.adoc deleted file mode 100644 index dcba21c09c..0000000000 --- a/docs-antora/modules/reports/pages/reporter_view_output.adoc +++ /dev/null @@ -1,41 +0,0 @@ -[[viewing_report_output]] -= Viewing Report Output = -:toc: - -indexterm:[reports, output] - -indexterm:[reports, output, tabular] - -indexterm:[reports, output, Excel] - -indexterm:[reports, output, spreadsheet] - -When a report runs, Evergreen sends an email with a link to the output to the -address defined in the report. Output is also stored in the specified Output -folder and will remain there until manually deleted. - -. To view report output in the staff client, open the reports interface from -_Administration --> Local Administration --> Reports_ -. Click on Output to expand the folder. Select _Circulation_ (where you just -saved the circulation report output). -+ -image::media/view-output-1.png[view-output-1] -+ -. _View report output_ is the default selection in the dropdown menu.
Select -_Recurring Monthly Circ by Location_ by clicking the checkbox and click _Submit_. -+ -image::media/view-output-2.png[view-output-2] -+ -. A new tab will open for the report output. Select either _Tabular Output_ or -_Excel Output_. If _Bar Charts_ was selected during report definition, the chart -will also appear. -. Tabular output looks like this: -+ -image::media/view-output-4.png[view-output-4] -+ -. If you want to manipulate, filter, or graph this data, Excel output would be -more useful. Excel output will generate a ".xlsx" file. Excel output looks like this in Excel: -+ -image::media/view-output-5.png[view-output-5] - - diff --git a/docs-antora/modules/serials/_attributes.adoc b/docs-antora/modules/serials/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/serials/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC2.jpg b/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC2.jpg deleted file mode 100644 index 4c775be36b..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC2.jpg and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC5.jpg b/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC5.jpg deleted file mode 100644 index f1dd239985..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC5.jpg and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC7.jpg b/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC7.jpg deleted file mode 100644 index
574aaf0f30..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC7.jpg and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_ct1.PNG b/docs-antora/modules/serials/assets/images/media/serials_ct1.PNG deleted file mode 100644 index 5f78c5a162..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_ct1.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_extra1.PNG b/docs-antora/modules/serials/assets/images/media/serials_extra1.PNG deleted file mode 100644 index 0bdbfe74ac..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_extra1.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_extra2.PNG b/docs-antora/modules/serials/assets/images/media/serials_extra2.PNG deleted file mode 100644 index af795b91c9..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_extra2.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_mfhd1.PNG b/docs-antora/modules/serials/assets/images/media/serials_mfhd1.PNG deleted file mode 100644 index 8b0f1c5185..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_mfhd1.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_mfhd3.PNG b/docs-antora/modules/serials/assets/images/media/serials_mfhd3.PNG deleted file mode 100644 index 3b652d48b4..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_mfhd3.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_mfhd6.PNG b/docs-antora/modules/serials/assets/images/media/serials_mfhd6.PNG deleted file mode 100644 index 222b1e6537..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_mfhd6.PNG and /dev/null differ diff --git 
a/docs-antora/modules/serials/assets/images/media/serials_routing1.PNG b/docs-antora/modules/serials/assets/images/media/serials_routing1.PNG deleted file mode 100644 index 12aba412f7..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_routing1.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub0.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub0.PNG deleted file mode 100644 index 5efad47c43..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub0.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub1.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub1.PNG deleted file mode 100644 index 34435de02c..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub1.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub10.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub10.PNG deleted file mode 100644 index ca2f1c3010..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub10.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub11.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub11.PNG deleted file mode 100644 index a190a81c69..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub11.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub2.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub2.PNG deleted file mode 100644 index e2c808cff5..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub2.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub3.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub3.PNG deleted file mode 100644 index 
89ef1be219..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub3.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub4.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub4.PNG deleted file mode 100644 index e749b25ba8..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub4.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub5.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub5.PNG deleted file mode 100644 index 33ffd0429b..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub5.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub6.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub6.PNG deleted file mode 100644 index 44ebb6ee90..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub6.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub7.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub7.PNG deleted file mode 100644 index 48e7e5ee76..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub7.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub8.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub8.PNG deleted file mode 100644 index be1812e900..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub8.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_sub9.PNG b/docs-antora/modules/serials/assets/images/media/serials_sub9.PNG deleted file mode 100644 index f34c61783f..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_sub9.PNG and /dev/null differ diff --git 
a/docs-antora/modules/serials/assets/images/media/serials_wizard1.PNG b/docs-antora/modules/serials/assets/images/media/serials_wizard1.PNG deleted file mode 100644 index 9b6345dabe..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_wizard1.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_wizard2.PNG b/docs-antora/modules/serials/assets/images/media/serials_wizard2.PNG deleted file mode 100644 index 96c430908b..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_wizard2.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_wizard3.PNG b/docs-antora/modules/serials/assets/images/media/serials_wizard3.PNG deleted file mode 100644 index ccd7ba8396..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_wizard3.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_wizard4.PNG b/docs-antora/modules/serials/assets/images/media/serials_wizard4.PNG deleted file mode 100644 index 50f0c98002..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_wizard4.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_wizard5.PNG b/docs-antora/modules/serials/assets/images/media/serials_wizard5.PNG deleted file mode 100644 index 6b94925a7f..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_wizard5.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/assets/images/media/serials_wizard6.PNG b/docs-antora/modules/serials/assets/images/media/serials_wizard6.PNG deleted file mode 100644 index 7184b6c27b..0000000000 Binary files a/docs-antora/modules/serials/assets/images/media/serials_wizard6.PNG and /dev/null differ diff --git a/docs-antora/modules/serials/nav.adoc b/docs-antora/modules/serials/nav.adoc deleted file mode 100644 index 53d37ed18b..0000000000 --- 
a/docs-antora/modules/serials/nav.adoc +++ /dev/null @@ -1,10 +0,0 @@ -* xref:serials:A-intro.adoc[Serials] -** xref:serials:B-serials_admin.adoc[Serials Administration] -** xref:serials:C-serials_workflow.adoc[Serials Module] -** xref:serials:D-Receiving.adoc[Receiving] -** xref:serials:E-routing_lists.adoc[Routing Lists] -** xref:serials:F-Special_issue.adoc[Special Issues] -** xref:serials:G-binding.adoc[Binding Issues] -** xref:serials:H-holdings_statements.adoc[Holdings] -** xref:serials:Group_Serials_Issues_in_the_OPAC_2.2.adoc[Group Serials Issues in the OPAC] - diff --git a/docs-antora/modules/serials/pages/A-intro.adoc b/docs-antora/modules/serials/pages/A-intro.adoc deleted file mode 100644 index 88bb993eb1..0000000000 --- a/docs-antora/modules/serials/pages/A-intro.adoc +++ /dev/null @@ -1,6 +0,0 @@ -= Serials = -:toc: - -== MFHD Records == - -MARC Format for Holdings Display (MFHD) records display in the catalog in addition to the holdings statements that Evergreen generates from subscriptions created in the Serials Module. The MFHDs are editable as MARC, but the holdings statements generated from the control view are system-generated. Multiple MFHDs can be created and are tied to Organizational Units. diff --git a/docs-antora/modules/serials/pages/B-serials_admin.adoc b/docs-antora/modules/serials/pages/B-serials_admin.adoc deleted file mode 100644 index d645371cfb..0000000000 --- a/docs-antora/modules/serials/pages/B-serials_admin.adoc +++ /dev/null @@ -1,145 +0,0 @@ -= Serials Administration = -:toc: - -The serials module can be administered under a new menu option: *Administration->Serials Administration*. The new Serials Administration menu currently allows staff to configure _Serial Copy Templates_ and _Pattern Templates_. - - -== Serial Copy Templates == -Serials copy templates enable you to specify item attributes that should be applied by default to copies of serials.
Serials copy templates are associated with distributions in a subscription and are applied when serials copies are received. Serial copy templates can also be used as a binding template to apply specific item attributes to copies that are being bound together. - - -=== Creating a Serial Copy Template === - -To create a serial copy template, go to *Administration->Serials Administration->Serial Copy Templates*: - -. Click *Create Template* in the upper-right hand corner. A dialog box will appear. -. Within the dialog box assign the template a _Template Name_ and set any item attributes that you want in the template: -.. *Circulate?*: indicate if the items can circulate. -.. *Circulation Library*: Select the circulation library from the drop down menu. -.. *Shelving Location*: Select the shelving location for the item from the drop down menu. This menu is populated from the locations created in Admin->Local Administration->Copy Locations Editor. -.. *Circulation Modifier*: Select the circulation modifier for the item from the drop down menu. This menu is populated from the modifiers created in Admin->Server Administration->Circulation Modifiers. -.. *Loan Duration*: Select a loan duration from the drop down menu. This menu is populated from the loan durations created in Admin->Server Administration->Circulation Duration Rules. This field is required. -.. *Circulate as Type*: Select a Type of record from the drop down menu if you want to control circulation based on the Type fixed field in the MARC bibliographic record. Most libraries choose to control circulation based on Circulation Modifier instead of Circulate as Type in Evergreen. -.. *Holdable?*: Yes or No-- indicate if holds can be placed on the items. -.. *Age-based Hold Protection*: Select a rule from the drop down menu. Age-based hold protection allows you to control the extent to which an item can circulate after it has been received. 
For example, you may want to protect new copies of a serial so that only patrons who check out the item at your branch can use it. -.. *Fine Level*: Select a fine level from the drop down menu. This menu is populated from the fine levels created in Admin->Server Administration->Circulation Recurring Fine Rules. This field is required. -.. *Floating*: Select a Floating policy from the drop down menu if the items belong to a floating collection. -.. *Status*: Select a copy status from the Status drop down menu. This menu is populated from the statuses created in Admin → Server Administration → Copy Statuses. -.. *Reference?*: Yes or No-- indicate if the item is a reference item. -.. *OPAC Visible?*: Yes or No-- indicate if the item should be visible in the OPAC. -.. *Price*: Enter the price of the item. -.. *Deposit?*: Yes or No-- indicate if patrons must place a deposit on the copy before they can use it. -.. *Deposit Amount*: Enter a Deposit Amount if patrons must place a deposit on the copy before they can use it. -.. *Quality*: Good or Damaged-- indicate the physical condition of the item. -. Click *Save*. -. The new serial copy template will now appear in the list of templates. - -image::media/serials_ct1.PNG[] - - -=== Modifying a Serial Copy Template === - -To modify a Serial Copy Template: - -. Select the template to modify by checking the box for the template or clicking anywhere on the template row. Go to *Actions->Edit Template* or _right-click_ on the template row and select *Edit Template*. -. The dialog box will appear. Make any changes to the item attributes and click *Save*. - - -=== Deleting a Serial Copy Template === - -To delete a Serial Copy Template: - -. Select the template to delete by checking the box for the template or clicking anywhere on the template row. -. Go to *Actions->Delete Template* or _right-click_ on the template row and select *Delete Template*.
- -NOTE: Serials copy templates that are being used by subscriptions cannot be deleted. - - -== Prediction Pattern Templates == - -Prediction pattern templates allow you to create templates for prediction patterns that can be shared with other staff users in your library branch, system, or throughout the consortium. Prediction patterns are used to predict issues on serials subscriptions. Templates can be created in the Administration module, as described below, and can also be created and shared directly in a subscription. - - -=== Creating a Prediction Pattern Template === -To create a template, go to *Administration->Serials Administration->Prediction Pattern Templates*: - -. Click *New Record* in the upper-right hand corner. A dialog box called _Prediction Pattern Template_ will appear. -. Assign a _Name_ to the template, such as "Monthly", to create a monthly publication pattern. -. Next to Pattern Code click *Pattern Wizard*. The Prediction Pattern Code Wizard will appear. This wizard has five tabs that will step you through creating a prediction pattern for your publication. - -.. Enumeration Labels -... _If the publication does not use enumeration and instead only uses dates_, select the radio button adjacent to _Use Calendar Dates Only_ and click *Next* in the upper right-hand corner and go to b. Chronology Display in this document. -... _If the publication uses enumerations (commonly used)_, select the radio button adjacent to _Use enumerations_. The enumerations conform to $a-$h of the 853, 854, and 855 MARC tags. -... Enter the first level of enumeration in the field labeled _Level 1_. A common first level enumeration is volume, or "v.". If there are additional levels of enumeration, click *Add Level*. -... A second field labeled _Level 2_ will appear. Enter the second level of enumeration in the field. A common second level enumeration is number, or "no.". -.... Select if the second level of enumeration is a set _Number_, _Varies_, or is _Undetermined_.
-.... If _Number_ is selected (commonly used): -..... Enter the number of bibliographic units per next higher level (e.g. 12 no. per v.). This conforms to $u in the 853, 854, and 855 MARC tags. -..... Select the radio button for the enumeration scheme: _Restarts at unit completion_ or _Increments continuously_. This conforms to $v in the 853, 854, and 855 MARC tags. -.... You can add up to six levels of enumeration. -... Check the box adjacent to _Add alternative enumeration_ if the publication uses an alternative enumeration. -... Check the box adjacent to _First level enumeration changes during subscription year_ to configure calendar changes if needed. A common calendar change is for the first level of enumeration to increment every January. -.... Select when the change occurs from the drop down menu: _Start of the month_, _Specific date_, or _Start of season_. -.... From the corresponding drop down menu, select the specific point in time at which the first level of enumeration should change. -.... Click *Add more* to add additional calendar changes if needed. -... When you have completed the enumerations, click *Next* in the upper right-hand corner. - - -image::media/serials_wizard1.PNG[] - - -.. Chronology Display -... To use chronological captions for the subscription, check the box adjacent to _Use Chronology Captions?_ -... Choose a chronological unit for the first level. If you want to display the term for the unit selected, such as "Year" and "Month", next to the chronology caption in the catalog, then select the checkbox for _Display level descriptor?_ (not commonly used). -... To add additional levels of chronology for display, click *Add level*. -.... Note: Each level that you add must be a smaller chronological unit than the previous level (e.g. Level 1 = Year, Level 2 = Month). -... Check the box adjacent to _Use Alternative Chronology Captions?_ if the publication uses alternative chronology. -...
After you have completed the chronology caption, click *Next* in the upper-right hand corner. - - -image::media/serials_wizard2.PNG[] - - -.. MFHD Indicators -... *Compression Display Options*: Select the appropriate option for compressing or expanding your captions in the catalog from the compressibility and expandability drop down menu. The entries in the drop down menu correspond to the indicator codes and the subfield $w in the 853 tag. Compressibility and expandability correspond to the first indicator in the 853 tag. -... *Caption Evaluation*: Choose the appropriate caption evaluation from the drop down menu. Caption Evaluation corresponds to the second indicator in the 853 tag. -... Click *Next* in the upper right hand corner. - - -image::media/serials_wizard3.PNG[] - - -.. Frequency and Regularity -... Indicate the frequency of the publication by selecting one of the following radio buttons: -.... *Pre-selected* and choose the frequency from the drop down menu. -.... *Use number of issues per year* and enter the total number of issues in the field. -... If the publication has combined, skipped, or special issues that should be accounted for in the publication pattern, check the box adjacent to _Use specific regularity information?_. -.... From the first drop down menu, select the appropriate publication information: _Combined_, _Omitted_, or _Published_ issues. -.... From the subsequent drop down menus, select the appropriate frequency and issue information. -.... Add additional regularity rows as needed. -.... For a Combined issue, enter the relevant combined issue code. E.g., for a monthly combined issue, enter 02/03 to specify that February and March are combined. -... After you have completed frequency and regularity information, click *Next* in the upper-right hand corner. - - -image::media/serials_wizard4.PNG[] - - -.. Review -... Review the Pattern Summary to verify that the pattern is correct.
You can also click on the expand arrow icon to view the _Raw Pattern Code_. -... If you want to share this pattern, assign it a name and select if it will be shared with your library, the system, or across the consortium. -... Click *Save*. - - -image::media/serials_wizard5.PNG[] - - -. Back in the Prediction Pattern Template dialog box, select the Owning Library, which will default to the workstation library. -. If you want to share the template, set the Share Depth to indicate how far out into your consortium the template will be shared. - - -image::media/serials_wizard6.PNG[] - - -. The Prediction Pattern will now appear in the list of templates and can be used to create predictions for subscriptions. - -NOTE: Prediction Patterns can be edited after creation as long as all predicted issues have the status of "Expected". Once an issue is moved into a different status, the Prediction Pattern cannot be changed. diff --git a/docs-antora/modules/serials/pages/C-serials_workflow.adoc b/docs-antora/modules/serials/pages/C-serials_workflow.adoc deleted file mode 100644 index 17da0d33de..0000000000 --- a/docs-antora/modules/serials/pages/C-serials_workflow.adoc +++ /dev/null @@ -1,136 +0,0 @@ -= Serials Module = -:toc: - -The Serials Module can be used to create subscriptions, distributions, streams, and prediction patterns, as well as to generate predictions and receive issues as they come into the library. - - -To access the Serials Module, go to a serials record in the catalog, and click on *Serials->Manage Subscriptions*. This will open the serials interface for that particular record. In this interface you can: - -. Create and manage subscriptions -. Create and manage predictions -. Create and manage issues -. Create and manage MFHDs - - -image::media/serials_sub0.PNG[] - - -== Create a Subscription == - -. From a bibliographic record, go to *Serials->Manage Subscriptions* and view the _Manage Subscriptions_ tab. -.
Within the _Manage Subscriptions_ tab, create a new subscription by clicking *New Subscription*. The subscription editor will appear: -.. Select the _Owning Library_ for the subscription. The owning library indicates the organizational unit(s) whose staff can use this subscription. The rule of parental inheritance applies to this list. For example, if a system is made the owner of a subscription, then users, with appropriate permissions, at the branches within the system could also use this subscription. This field is required. -.. Enter the date that the subscription begins in the _Start Date_ field. This field is required. -.. An _End Date_ for the subscription may also be entered, but it is not required. -.. Optionally, enter an _Expected Offset_. This is the difference between the nominal publishing date of an issue and the date that you expect to receive your copy. For example, if an issue is published the first day of each month, but you receive the copy two days prior to the publication date, then enter "-2 days" into this field. -.. Next, create a Distribution for the subscription by selecting the Library for the distribution. Distributions identify the branches that will receive copies of a serial. -... Note: If the Owning Library of the subscription was set at the branch level, the Library will be the same as the Owning Library. If the Owning Library of the subscription was set at the system level, the Library will be set to the holdings library. -.. Enter a Label for the distribution. It may be useful to identify the branch to which you are distributing these issues in this field. This field is not publicly visible and only appears when an item is received. There are no limits on the number of characters that can be entered in this field. -.. Select the preferred _OPAC Display_ for holdings: Chronological or Enumeration. -.. Select the _Receiving Template_ that will be applied to items as they are received.
The receiving templates are configured in Administration->Serials Administration->Serial Copy Templates. -.. Next, create a Stream by assigning a label to the stream in the _Send to_ field. The stream indicates the number of copies that should be sent to the distribution library. You can click *Add copy stream* if the library will receive multiple copies of the serial. -. After the subscription, distribution, and copy information is configured, click *Save* and go to the _Manage Predictions_ tab to create the prediction pattern that will be used to generate predictions for this title. - -NOTE: After creating a subscription, you can use the Actions menu to take a variety of actions with the subscription, such as adding Subscription or Distribution Notes, linking it to an MFHD record, or creating routing lists. - - -image::media/serials_sub1.PNG[] - - -== Create and Manage Predictions == - -From the _Manage Predictions_ tab you can create a new prediction pattern from scratch, use an existing pattern template, or use an existing pattern template as the basis for a new prediction pattern. - -=== Predict Issues Using a New Prediction Pattern === -. Within the _Manage Predictions_ tab, _Select [a] subscription_ to work on from the drop down menu. -. To create a new prediction pattern, click *Add New*. -.. The box next to *Active* will be checked by default. -.. Select the _Type of pattern_ from the drop down menu and click *Create Pattern*. The Pattern Wizard will appear. -.. Follow the steps in the section _Creating a Pattern Template_ in this documentation to create a new pattern using the wizard. - - -image::media/serials_sub2.PNG[] - - -. After creating the pattern in the wizard, click *Create*. The new prediction pattern will now appear under Existing Prediction Patterns. -. To create predictions, click *Predict New Issues*. - -NOTE: You can also predict new issues from the _Manage Issues_ tab. - - -image::media/serials_sub3.PNG[] - - -. 
A dialog box called _Predict New Issues: Initial Values_ will appear. -.. Select the _Publication date_ for the subscription. This will be the publication date of the first issue you expect to receive. -.. The _Type_ will correspond to the type of prediction pattern selected. -.. Enter any _Enumeration labels_ for the first expected issue. -.. Enter any _Chronology labels_ for the first expected issue. -.. Enter the _Prediction count_. This is the number of issues that you want to predict. -. Click *Save*. -. Evergreen will generate the predictions and bring you to the _Manage Issues_ tab to review the predicted issues. - - -image::media/serials_sub4.PNG[] - - -=== Predict Issues Using a Prediction Pattern Template === -. Within the _Manage Predictions_ tab, *Select [a] subscription* to work on from the drop down menu. -. _Select a template_ from the drop down menu that appears under the Add New button and click *Create from Template*. The pattern information will appear below the drop down menu. - - -image::media/serials_sub5.PNG[] - - -. If you want to use the pattern "as is", click *Create*. -.. If you want to review or modify the pattern, click *Edit Pattern*. The Pattern Wizard will appear. -.. The Pattern Wizard will be pre-populated with the pattern template selected. Follow the steps in the section _Creating a Pattern Template_ in this documentation to modify the template or click *Next* on each tab to review the template. -.. After modifying or reviewing the pattern in the wizard, click *Create*. The prediction pattern will now appear under Existing Prediction Patterns. -. To create predictions, click *Predict New Issues*. -.. Note: you can also predict new issues from the _Manage Issues_ tab. -. A dialog box called _Predict New Issues: Initial Values_ will appear. -.. Select the _Publication date_ for the subscription. This will be the publication date of the first issue you expect to receive. -..
The _Type_ will correspond to the type of prediction pattern selected.
-.. Enter any _Enumeration labels_ for the first expected issue.
-.. Enter any _Chronology labels_ for the first expected issue.
-.. Enter the _Prediction count_. This is the number of issues that you want to predict.
-. Click *Save*.
-. Evergreen will generate the predictions and bring you to the _Manage Issues_ tab to review the predicted issues.
-
-
-=== Predict Issues Using a Prediction Pattern from a Bibliographic and/or MFHD Record ===
-Evergreen can also generate a prediction pattern from existing MFHD records attached to a serials record and from MFHD patterns embedded directly in the bibliographic record.
-
-. Within the _Manage Predictions_ tab, *Select [a] subscription* to work on from the drop down menu.
-. Click *Import from Bibliographic and/or MFHD Records*.
-
-
-image::media/serials_sub6.PNG[]
-
-
-. A dialog box will appear that presents the available MFHD records and the prediction pattern that will be imported.
-. Check the box adjacent to the MFHD record that you would like to import and click *Import*. The new prediction pattern will now appear under _Existing Prediction Patterns_.
-
-
-image::media/serials_sub7.PNG[]
-
-
-. If you want to review or modify the pattern, click *Edit Pattern*. The Pattern Wizard will appear.
-.. The Pattern Wizard will be pre-populated with the pattern from the MFHD selected. Follow the steps in the section _Creating a Pattern Template_ in this documentation to modify the template or click *Next* on each tab to review the template.
-. To create predictions, click *Predict New Issues*.
-.. Note: you can also predict new issues from the _Manage Issues_ tab.
-. A dialog box called _Predict New Issues: Initial Values_ will appear.
-.. Select the _Publication date_ for the subscription. This will be the publication date of the first issue you expect to receive.
-.. The _Type_ will correspond to the type of prediction pattern selected.
-.. 
Enter any _Enumeration labels_ for the first expected issue.
-.. Enter any _Chronology labels_ for the first expected issue.
-.. Enter the _Prediction count_. This is the number of issues that you want to predict.
-. Click *Save*.
-. Evergreen will generate the predictions and bring you to the _Manage Issues_ tab to review the predicted issues.
-
-
-=== Manage Issues ===
-After generating predictions in the _Manage Predictions_ tab, you will see a list of the predicted issues in the _Manage Issues_ tab. A variety of actions can be taken in this tab, including receiving issues, predicting new issues, and adding special issues.
-
-
-image::media/serials_sub8.PNG[]
diff --git a/docs-antora/modules/serials/pages/D-Receiving.adoc b/docs-antora/modules/serials/pages/D-Receiving.adoc
deleted file mode 100644
index 8391d1371c..0000000000
--- a/docs-antora/modules/serials/pages/D-Receiving.adoc
+++ /dev/null
@@ -1,81 +0,0 @@
-= Receiving =
-:toc:
-Issues can be received through the _Manage Issues_ tab or through the _Quick Receive_ option located in the bibliographic record display. While receiving, staff can select whether issues should be barcoded during receipt.
-
-
-== Quick Receive ==
-. From a serials record in the catalog, go to *Serials->Quick Receive*.
-. A dialog box will appear. Select the _Library_ and _Subscription_ for which you are receiving issues from the drop down menu and click *OK/Continue*.
-. A _Receive items_ dialog box will appear with the next expected issue.
-.. To receive the item(s) and barcode them:
-... The _Copy Location_ and _Circulation Modifier_ will be pre-populated from the Receive Template associated with the Distribution. Changes can be made to the pre-populated information.
-.... Note: Copy location, call number, and circulation modifier can be applied to multiple copies in batch using batch modify.
-... *Call Number*: Enter a call number. Any item with a barcode must also have a call number.
-... 
*Barcode*: Scan in the barcode that will be affixed to the issue. -... The box adjacent to _Receive the issue_ will be checked by default. -... Check the box adjacent to _Routing List_ to print an existing routing list. -... Click *Save* to receive the issue. The Status of the issue will update to "Received" and a Date Received will be recorded. The barcoded copy will now appear in the holdings area of the catalog and the Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issue. -.. To receive the item(s) without barcoding them: -... Uncheck the box adjacent to _Barcode Items_ and click *Save*. The Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issue. - - -image::media/serials_sub9.PNG[] - - -== Receiving from the Manage Issues tab == -The Manage Issues tab can be used to receive the next expected issue and to receive multiple expected issues. This tab can be accessed by retrieving the serial record, going to *Serials->Manage Subscriptions*, and selecting the _Manage Issues_ tab. - - -=== Receive Next Issue and Barcode === - -. Within the _Manage Issues_ tab, *Select [a] subscription* to work on from the drop down menu. The list of predicted issues for the subscription will appear. -. Check the box adjacent to _Barcode on receive_. -. Click *Receive Next*. -. A _Receive items_ dialog box will appear with the next expected issue and item(s). -. The _Copy Location_ and _Circulation Modifier_ will be pre-populated from the Receive Template associated with the Distribution. Changes can be made to the pre-populated information. -. *Call Number*: Enter a call number. Any item with a barcode must also have a call number. -. *Barcode*: Scan in the barcode that will be affixed to the item(s). -. The box to _Receive the item(s)_ will be checked by default. -. Check the box adjacent to _Routing List_ to print an existing routing list. -. Click *Save* to receive the item(s). 
The Status of the issue will update to "Received" and a Date Received will be recorded. The barcoded item(s) will now appear in the holdings area of the catalog and the Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issue. - - -=== Receive Next Issue (no barcode) === - -. In the _Manage Issues_ tab, make sure the box adjacent to _Barcode on receive_ is unchecked and click *Receive Next*. -. A _Receive items_ dialog box will appear with the message "Will receive # item(s) without barcoding." -. Click *OK/Continue* to receive the issue. The Status of the issue will update to "Received" and a Date Received will be recorded. The Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issue. - - -image::media/serials_sub10.PNG[] - - -== Batch Receiving == -Multiple issues can be received at the same time using the _Manage Issues_ tab. - - -=== Batch Receive and Barcode === - -. Within the _Manage Issues tab_, *Select [a] subscription* to work on from the drop down menu. The list of predicted issues for the subscription will appear. -. Check the box adjacent to _Barcode on receive_. -. Check the boxes adjacent to the expected issues you want to receive. -. Go to *Actions->Receive selected* or _right-click_ on the rows and select *Receive selected* from the drop down menu. -. A _Receive items_ dialog box will appear with the selected issues and items. -. The _Copy Location_ and _Circulation Modifier_ will be pre-populated from the Receive Template associated with the Distribution. Changes can be made to the pre-populated information. -. *Call Number*: Enter a call number. Any item with a barcode must also have a call number. -. *Barcode*: Scan in the barcodes that will be affixed to the items. -. The box to _Receive_ the items will be checked by default. -. Check the box adjacent to _Routing List_ to print an existing routing list. -. Click *Save* to receive the items. 
The Status of the items will update to "Received" and a Date Received will be recorded. The barcoded items will now appear in the holdings area of the catalog and the Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issues.
-
-
-image::media/serials_sub11.PNG[]
-
-
-=== Receive multiple issues (no barcode) ===
-
-. Within the _Manage Issues_ tab, *Select [a] subscription* to work on from the drop down menu. The list of predicted issues for the subscription will appear.
-. Make sure the box next to _Barcode on receive_ is unchecked, check the boxes adjacent to the expected issues you want to receive, and go to *Actions->Receive selected* (or _right-click_ on the rows and select *Receive selected*).
-. A _Receive items_ dialog box will appear with the message "Will receive # item(s) without barcoding."
-. Click *OK/Continue* to receive the issues. The Status of the issues will update to "Received" and a Date Received will be recorded. The Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issues.
-
diff --git a/docs-antora/modules/serials/pages/E-routing_lists.adoc b/docs-antora/modules/serials/pages/E-routing_lists.adoc
deleted file mode 100644
index 31d18a2666..0000000000
--- a/docs-antora/modules/serials/pages/E-routing_lists.adoc
+++ /dev/null
@@ -1,19 +0,0 @@
-= Routing Lists =
-:toc:
-
-Routing lists enable you to designate specific users and/or departments to which serial items need to be routed upon receipt.
-
-*Create a Routing List*
-
-. To create a routing list for a subscription, go to the _Manage Subscriptions_ tab for a serials record, select the subscription from the list and go to *Actions->Additional Routing*, or _right-click_ and select *Additional Routing*. A dialog box will appear where you can create the routing list.
-.. Scan or type in the barcode of the user to whom the items should be routed in the _Reader (barcode)_ field and click *Add Route*. Continue adding barcodes until the list is complete.
-.. 
To route items to a location, click the radio button next to _Department_, type in the routing location, and click *Add Route*. -.. A _Note_ may be added along with each addition to the list. -.. The names and departments on the list will appear at the top of the dialog box and can be reordered by clicking the arrows or removed by clicking the x next to each name or department. -. When the list is complete, click *Update*. - - -image::media/serials_routing1.PNG[] - - -Routing lists can be printed as items are received (see the documentation on Receiving for more information). They can also be printed directly from the _Manage Issues_ tab in a subscription by selecting the item(s) and going to *Actions->Print routing lists* or _right-clicking_ on the item(s) and selecting *Print routing lists* from the menu. diff --git a/docs-antora/modules/serials/pages/F-Special_issue.adoc b/docs-antora/modules/serials/pages/F-Special_issue.adoc deleted file mode 100644 index 93059f3dda..0000000000 --- a/docs-antora/modules/serials/pages/F-Special_issue.adoc +++ /dev/null @@ -1,34 +0,0 @@ -= Special Issues = -:toc: - -== Adding Extra Copies == -If the library receives an extra copy of an expected issue, the extra copy can be added to the list of predicted issues so it can be received through the serials module. - -*To add an extra copy of an expected issue*: - -. In the _Manage Issues_ tab, select the issuance that precedes the issuance that you received an extra copy of and go to *Actions->Add following issue* or _right-click_ on the issuance and select *Add following issue* from the menu. -. A dialog box will appear. Verify that the _Publication date_, _Type_, and _Chronology_ labels are correct. The _Enumeration_ labels will be filled in automatically when the issue is created. -. Click *Save* to create the extra copy of the following issue. -. The extra copy will appear in the list of issues and can be received using your typical workflow. 
- - -image::media/serials_extra1.PNG[] - - -== Adding Special Issues == -If the library receives an unexpected issue of a subscription, such as Summer Issue or Holiday Issue, it can be added to the list of predicted issues as a Special Issue so it can be received through the serials module. - -*To add a special issue*: - -. In the _Manage Issues_ tab, click *Add Special Issue*. A dialog box will appear. -. Enter the _Publication date_ of the special issue. -. Select the _Type_ (typically Basic). -. Add an _Issuance Label_ to identify the special issue, such as "Holiday Issue 2017". -. Click *Save*. -. The special issue will appear in the list of issues and can be received using your typical workflow. - - -image::media/serials_extra2.PNG[] - - -NOTE: A special issue may also be added as an ad hoc issue by following the instructions for Adding Extra Copies. Enter the Publication date and Type and check the box adjacent to Ad hoc issue? The form will update to allow you to enter an Issuance Label. diff --git a/docs-antora/modules/serials/pages/G-binding.adoc b/docs-antora/modules/serials/pages/G-binding.adoc deleted file mode 100644 index 553f5227b0..0000000000 --- a/docs-antora/modules/serials/pages/G-binding.adoc +++ /dev/null @@ -1,19 +0,0 @@ -= Binding Issues = -:toc: - -*Apply a binding template:* - -To bind issues, first a binding template needs to be applied to the associated Distribution. - -. Go to the _Manage Subscriptions_ tab and from the grid, select the Distribution(s) with issues you’d like to bind. -. _Right-click_ on the Distribution(s) or go to *Actions* and select *Apply Binding Template*. -. In the dialog box that appears, select the Serial Copy Template you’d like to use from the dropdown and click *Update*. - - -*To bind received issues together:* - -. Go to the _Manage Issues_ tab and select the issues you want to bind together. -. _Right-click_ on the issues or go to *Actions* and select *Bind*. -. 
The _Bind Items_ interface will appear and all items will be represented on the screen. The first item's fields will be editable. _Modify the Call Number_ if needed. Replace the *Barcode* and click *Save*.
-
-NOTE: The barcode must be replaced with a new barcode. The binding will fail if you attempt to reuse an existing barcode from one of the items being bound. Evergreen views it as a duplicate barcode.
diff --git a/docs-antora/modules/serials/pages/Group_Serials_Issues_in_the_OPAC_2.2.adoc b/docs-antora/modules/serials/pages/Group_Serials_Issues_in_the_OPAC_2.2.adoc
deleted file mode 100644
index 69f42c9d53..0000000000
--- a/docs-antora/modules/serials/pages/Group_Serials_Issues_in_the_OPAC_2.2.adoc
+++ /dev/null
@@ -1,49 +0,0 @@
-= Group Serials Issues in the Template Toolkit OPAC =
-:toc:
-
-In previous versions of Evergreen, issues of serials displayed in a list ordered by publication date. The list could be lengthy if the library had extensive holdings of a serial.
-Using the Template Toolkit OPAC that is available in version 2.2, you can group issues of serials in the OPAC by chronology or enumeration. For example, you might group issues by date published or by volume. Users can expand these hyperlinked groups to view holdings of specific issues. The result is a clean, easy-to-navigate interface for viewing holdings of serials with a large quantity of issues.
-
-NOTE: This feature is only available in the Template Toolkit OPAC.
-
-== Administration ==
-
-Enable the following organizational unit settings to use this feature:
-
-. Click *Administration* -> *Local Administration* -> *Library Settings Editor*.
-. Search or scroll to find *Serials: Default display grouping for serials distributions presented in the OPAC*.
-. Click *Edit*.
-. Enter *enum* to display issues by enumeration, or enter *chron* to display issues in chronological order. This value will become your default setting for displaying issues in the OPAC.
-. Click *Update Setting*.
-. 
Search or scroll to find *OPAC: Use fully compressed serials holdings*.
-. Select the value, *True*, to view a compressed holdings statement.
-. Click *Update Setting*.
-
-== Displaying Issues in the OPAC ==
-
-Your library system has a subscription to the periodical _Bon Appetit_. The serials librarian has determined that the issues at the Forest Falls branch should display in the OPAC by month and year. The issues at the McKinley branch should display by volume and number. The serials librarian will create two distributions for the serial that will include these groupings.
-
-. Retrieve the bibliographic record for the serial, and click *Actions for this Record* -> *Alternate Serial Control*.
-. Create a *New Subscription* or click on the hyperlinked ID of an existing subscription.
-. Click *New Distribution*.
-. Create a label to identify the distribution.
-. From the drop down menu, select the holding library that will own physical copies of the issues.
-. Select a display grouping. For this distribution, select *chronology* from the drop down menu.
-. Select a template from the drop down menu to receive copies.
-. Click *Save*.
-+
-image::media/Group_Serials_Issues_in_the_OPAC2.jpg[Group_Serials_Issues_in_the_OPAC2]
-+
-. Click *New Distribution* and repeat the process to send issues to the McKinley Branch. Choose *enumeration* in the *Display Grouping* field to display issues by volume and number.
-. Complete the creation of your subscription.
-. Retrieve the record from the catalog.
-. Scroll down to the *Issues Held* link and click it. The issues label for each branch appears.
-. Click the hyperlinked issues label. 
-
-The issues owned by the Forest Falls branch are grouped by chronology:
-
-image::media/Group_Serials_Issues_in_the_OPAC5.jpg[Group_Serials_Issues_in_the_OPAC5]
-
-The issues owned by the McKinley branch are grouped by enumeration:
-
-image::media/Group_Serials_Issues_in_the_OPAC7.jpg[Group_Serials_Issues_in_the_OPAC7]
diff --git a/docs-antora/modules/serials/pages/H-holdings_statements.adoc b/docs-antora/modules/serials/pages/H-holdings_statements.adoc
deleted file mode 100644
index 1656063dbc..0000000000
--- a/docs-antora/modules/serials/pages/H-holdings_statements.adoc
+++ /dev/null
@@ -1,46 +0,0 @@
-= Holdings =
-:toc:
-
-== System Generated Holdings Statement ==
-As issues are received, Evergreen creates a holdings statement in the OPAC based on what is set up in the Caption and Patterns of the subscription. The system-generated holdings can only be edited by changing caption and pattern information; there is no ability to edit the statement as free text.
-
-== MARC Format for Holdings Display (MFHD) ==
-Evergreen users can create, edit, and delete their own MFHD records.
-
-=== Create an MFHD record ===
-
-*To create an MFHD record:*
-
-. From a serials record in the catalog, go to *Serials->Manage MFHDs*. This will bring you to the _Manage MFHD_ tab within the serials module.
-. Click *Create MFHD*.
-
-
-image::media/serials_mfhd1.PNG[]
-
-
-. A _Create new MFHD_ dialog box will appear. _Select the library_ for which you are creating the MFHD record and click *Create*.
-. The MFHD record will appear in the list. Go to *Actions->Edit MFHD* or _right-click_ on the row and select *Edit MFHD* from the drop down menu.
-
-
-image::media/serials_mfhd3.PNG[]
-
-
-. The MARC Editor will appear. _Modify the MFHD record_ as needed and click *Save*.
-. The Textual Holdings statement will appear in the _Issues Held_ tab in the catalog.
-
-
-image::media/serials_mfhd6.PNG[]
-
-
-=== Edit an MFHD record ===
-
-. 
Open a serial record, go to *Serials* -> *MFHD Record* -> *Manage MFHDs* and select the appropriate MFHD.
-. Go to *Actions* or right-click on the MFHD and select *Edit MFHD*.
-. The MARC Editor will appear. _Modify the MFHD record_ as needed and click *Save*.
-
-
-=== Delete an MFHD Record ===
-
-. Open a serial record, go to *Serials* -> *MFHD Record* -> *Manage MFHDs* and select the appropriate MFHD.
-. Go to *Actions* or right-click on the MFHD and select *Delete Selected MFHDs*.
-. Click *OK/Continue* to delete the record.
diff --git a/docs-antora/modules/serials/pages/_attributes.adoc b/docs-antora/modules/serials/pages/_attributes.adoc
deleted file mode 100644
index fb982443d7..0000000000
--- a/docs-antora/modules/serials/pages/_attributes.adoc
+++ /dev/null
@@ -1,2 +0,0 @@
-:moduledir: ..
-include::{moduledir}/_attributes.adoc[]
diff --git a/docs-antora/modules/shared/_attributes.adoc b/docs-antora/modules/shared/_attributes.adoc
deleted file mode 100644
index dec438a296..0000000000
--- a/docs-antora/modules/shared/_attributes.adoc
+++ /dev/null
@@ -1,4 +0,0 @@
-:attachmentsdir: {moduledir}/assets/attachments
-:examplesdir: {moduledir}/examples
-:imagesdir: {moduledir}/assets/images
-:partialsdir: {moduledir}/pages/_partials
diff --git a/docs-antora/modules/shared/assets/images/media/ccbysa.png b/docs-antora/modules/shared/assets/images/media/ccbysa.png
deleted file mode 100644
index f0a944e0b8..0000000000
Binary files a/docs-antora/modules/shared/assets/images/media/ccbysa.png and /dev/null differ
diff --git a/docs-antora/modules/shared/pages/_attributes.adoc b/docs-antora/modules/shared/pages/_attributes.adoc
deleted file mode 100644
index fb982443d7..0000000000
--- a/docs-antora/modules/shared/pages/_attributes.adoc
+++ /dev/null
@@ -1,2 +0,0 @@
-:moduledir: .. 
-include::{moduledir}/_attributes.adoc[]
diff --git a/docs-antora/modules/shared/pages/about_evergreen.adoc b/docs-antora/modules/shared/pages/about_evergreen.adoc
deleted file mode 100644
index 582319ac7a..0000000000
--- a/docs-antora/modules/shared/pages/about_evergreen.adoc
+++ /dev/null
@@ -1,25 +0,0 @@
-= About Evergreen =
-
-Evergreen is open source library automation software designed to meet the
-needs of the very smallest to the very largest libraries and consortia. Through
-its staff interface, it facilitates the management, cataloging, and circulation
-of library materials, and through its online public access interface it helps
-patrons find those materials.
-
-The Evergreen software is freely licensed under the GNU General Public License,
-meaning that it is free to download, use, view, modify, and share. It has an
-active development and user community, as well as several companies offering
-migration, support, hosting, and development services.
-
-The community's development requirements state that Evergreen must be:
-
-* Stable, even under extreme load.
-* Robust, and capable of handling a high volume of transactions and simultaneous users.
-* Flexible, to accommodate the varied needs of libraries.
-* Secure, to protect our patrons’ privacy and data.
-* User-friendly, to facilitate patron and staff use of the system.
-
-Evergreen, which first launched in 2006, now powers over 544 libraries of every
-type – public, academic, special, school, and even tribal and home libraries –
-in over a dozen countries worldwide. 
-
diff --git a/docs-antora/modules/shared/pages/about_this_documentation.adoc b/docs-antora/modules/shared/pages/about_this_documentation.adoc
deleted file mode 100644
index 43fd403c85..0000000000
--- a/docs-antora/modules/shared/pages/about_this_documentation.adoc
+++ /dev/null
@@ -1,13 +0,0 @@
-= About This Documentation =
-
-This guide was produced by the Evergreen Documentation Interest Group (DIG),
-consisting of numerous volunteers from many different organizations. The DIG
-has drawn together, edited, and supplemented pre-existing documentation
-contributed by libraries and consortia running Evergreen that were kind enough
-to release their documentation into the Creative Commons. Please see the
-xref:shared:attributions.adoc#attributions[Attributions] section for a full list of authors and
-contributing organizations. Just like the software it describes, this guide is
-a work in progress, continually revised to meet the needs of its users, so if
-you find errors or omissions, please let us know by contacting the DIG
-facilitators at docs@evergreen-ils.org. 
- diff --git a/docs-antora/modules/shared/pages/attributions.adoc b/docs-antora/modules/shared/pages/attributions.adoc deleted file mode 100644 index 770d6a8d0d..0000000000 --- a/docs-antora/modules/shared/pages/attributions.adoc +++ /dev/null @@ -1,57 +0,0 @@ -[[attributions]] -[#appendix] -= Attributions = - -Copyright © 2009-2018 Evergreen DIG - -Copyright © 2007-2018 Equinox - -Copyright © 2007-2018 Dan Scott - -Copyright © 2009-2018 BC Libraries Cooperative (SITKA) - -Copyright © 2008-2018 King County Library System - -Copyright © 2009-2018 Pioneer Library System - -Copyright © 2009-2018 PALS - -Copyright © 2009-2018 Georgia Public Library Service - -Copyright © 2008-2018 Project Conifer - -Copyright © 2009-2018 Bibliomation - -Copyright © 2008-2018 Evergreen Indiana - -Copyright © 2008-2018 SC LENDS - -Copyright © 2012-2018 CW MARS - -Copyright © 2014-2020 MOBIUS - - -*DIG Contributors* - -* Hilary Caws-Elwitt, Susquehanna County Library -* Karen Collier, Kent County Public Library -* George Duimovich, NRCan Library -* Lynn Floyd, Anderson County Library -* Sally Fortin, Equinox Software -* Wolf Halton, Lyrasis -* Jennifer Pringle, SITKA -* June Rayner, eiNetwork -* Steve Sheppard -* Ben Shum, Bibliomation -* Roni Shwaish, eiNetwork -* Robert Soulliere, Mohawk College -* Remington Steed, Calvin College -* Jeanette Lundgren, CW MARS -* Tim Spindler, CW MARS -* Jane Sandberg, Linn-Benton Community College -* Lindsay Stratton, Pioneer Library System -* Yamil Suarez, Berklee College of Music -* Jenny Turner, PALS -* Debbie Luchenbill, MOBIUS -* Blake Graham-Henderson, MOBIUS -* Ted Peterson, MOBIUS diff --git a/docs-antora/modules/shared/pages/end_matter.adoc b/docs-antora/modules/shared/pages/end_matter.adoc deleted file mode 100644 index 8600828ae0..0000000000 --- a/docs-antora/modules/shared/pages/end_matter.adoc +++ /dev/null @@ -1,14 +0,0 @@ -[[licensing]] -[#appendix] -= Licensing = - 
-image::media/ccbysa.png["CC-BY-SA",link="http://creativecommons.org/licenses/by-sa/3.0/"] - -This work is licensed under a -link:http://creativecommons.org/licenses/by-sa/3.0/[Creative -Commons Attribution-ShareAlike 3.0 Unported License]. - - -[#index] -== Index == - diff --git a/docs-antora/modules/shared/pages/index.adoc b/docs-antora/modules/shared/pages/index.adoc deleted file mode 100644 index fa9fe8c110..0000000000 --- a/docs-antora/modules/shared/pages/index.adoc +++ /dev/null @@ -1,3 +0,0 @@ -[#index] -= Index = - diff --git a/docs-antora/modules/shared/pages/licensing.adoc b/docs-antora/modules/shared/pages/licensing.adoc deleted file mode 100644 index ce4967538b..0000000000 --- a/docs-antora/modules/shared/pages/licensing.adoc +++ /dev/null @@ -1,11 +0,0 @@ -[[licensing]] -[#appendix] -= Licensing = - -image::media/ccbysa.png["CC-BY-SA",link="http://creativecommons.org/licenses/by-sa/3.0/"] - -This work is licensed under a -link:http://creativecommons.org/licenses/by-sa/3.0/[Creative -Commons Attribution-ShareAlike 3.0 Unported License]. - - diff --git a/docs-antora/modules/shared/pages/workstation_settings.adoc b/docs-antora/modules/shared/pages/workstation_settings.adoc deleted file mode 100644 index 56ec62ca4c..0000000000 --- a/docs-antora/modules/shared/pages/workstation_settings.adoc +++ /dev/null @@ -1,30 +0,0 @@ -= Configuring Evergreen for your workstation = - -== Setting search defaults == - -* Go to Administration -> Workstation. -* Use the dropdown menu to select an appropriate -_Default Search Library_. The default search library -setting determines what library is searched from the -advanced search screen and portal page by default. -You can override this setting when you are actually -searching by selecting a different library. -One recommendation is to set the search library to the -highest point you would normally want to search. -* Use the dropdown menu to select an appropriate -_Preferred Library_. 
The preferred library is used to -show copies and electronic resource URIs regardless -of the library searched. One recommendation is to set -this to your home library so that local copies show up -first in search results. -* Use the dropdown menu to select an appropriate -_Advanced Search Default Pane_. Advanced search has -secondary panes for Numeric and MARC Expert searching. -You can change which one is loaded by default when -opening a new catalog window here. - -== Turning off sounds == - -* Go to Administration -> Workstation. -* Click the checkbox labeled _Disable Sounds?_ - diff --git a/docs-antora/modules/sys_admin/_attributes.adoc b/docs-antora/modules/sys_admin/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/sys_admin/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/sys_admin/nav.adoc b/docs-antora/modules/sys_admin/nav.adoc deleted file mode 100644 index ddc3c2c568..0000000000 --- a/docs-antora/modules/sys_admin/nav.adoc +++ /dev/null @@ -1,24 +0,0 @@ -* xref:sys_admin:introduction.adoc[System Administration From the Staff Client] -** xref:admin:acquisitions_admin.adoc[Acquisitions Administration] -** xref:admin:age_hold_protection.adoc[Age hold protection] -** xref:admin:authorities.adoc[Authorities] -** xref:admin:Best_Hold_Selection_Sort_Order.adoc[Best-Hold Selection Sort Order] -** xref:admin:booking-admin.adoc[Booking Module Administration] -** xref:admin:cn_prefixes_and_suffixes.adoc[Call Number Prefixes and Suffixes] -** xref:admin:desk_payments.adoc[Cash Reports] -** xref:admin:circulation_limit_groups.adoc[Circulation Limit Sets] -** xref:admin:copy_statuses.adoc[Item Status] -** xref:admin:floating_groups.adoc[Floating Groups] -** xref:admin:MARC_Import_Remove_Fields.adoc[MARC Import Remove Fields] 
-** xref:admin:copy_tags_admin.adoc[Item Tags (Digital Bookplates)] -** xref:admin:MARC_RAD_MVF_CRA.adoc[MARC Record Attributes] -*** xref:admin:multilingual_search.adoc[Multilingual Search in Evergreen] -*** xref:admin:infrastructure_auth_browse.adoc[Infrastructure Changes to Authority Browse] -*** xref:admin:virtual_index_defs.adoc[Virtual Index Definitions] -** xref:admin:Org_Unit_Proximity_Adjustments.adoc[Org Unit Proximity Adjustments] -** xref:admin:physical_char_wizard_db.adoc[Administering the Physical Characteristics Wizard] -** xref:admin:copy_locations.adoc[Administering shelving locations] -** xref:admin:permissions.adoc[User and Group Permissions] -** xref:admin:SMS_messaging.adoc[SMS Text Messaging] -** xref:admin:user_activity_type.adoc[User Activity Types] -** xref:admin:restrict_Z39.50_sources_by_perm_group.adoc[Z39.50 Servers] diff --git a/docs-antora/modules/sys_admin/pages/_attributes.adoc b/docs-antora/modules/sys_admin/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/sys_admin/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/sys_admin/pages/introduction.adoc b/docs-antora/modules/sys_admin/pages/introduction.adoc deleted file mode 100644 index 1863a675cc..0000000000 --- a/docs-antora/modules/sys_admin/pages/introduction.adoc +++ /dev/null @@ -1,3 +0,0 @@ -= Introduction = -This part deals with the options in the Server Administration menu found in the -staff client. 
diff --git a/docs-antora/modules/using_staff_client/_attributes.adoc b/docs-antora/modules/using_staff_client/_attributes.adoc deleted file mode 100644 index dec438a296..0000000000 --- a/docs-antora/modules/using_staff_client/_attributes.adoc +++ /dev/null @@ -1,4 +0,0 @@ -:attachmentsdir: {moduledir}/assets/attachments -:examplesdir: {moduledir}/examples -:imagesdir: {moduledir}/assets/images -:partialsdir: {moduledir}/pages/_partials diff --git a/docs-antora/modules/using_staff_client/nav.adoc b/docs-antora/modules/using_staff_client/nav.adoc deleted file mode 100644 index b47b91c63a..0000000000 --- a/docs-antora/modules/using_staff_client/nav.adoc +++ /dev/null @@ -1,7 +0,0 @@ -* xref:using_staff_client:introduction.adoc[Using the Browser Staff Client] -** xref:admin:web_client-login.adoc[Logging into Evergreen] -** xref:admin:web-client-browser-best-practices.adoc[Best Practices for Using the Browser] -** xref:admin:staff_client-column_picker.adoc[Column Picker] -** xref:admin:staff_client-recent_searches.adoc[Recent Staff Searches] -** xref:admin:workstation_admin.adoc[Workstation Administration] -*** xref:admin:receipt_template_editor.adoc[Receipt Template Editor] diff --git a/docs-antora/modules/using_staff_client/pages/_attributes.adoc b/docs-antora/modules/using_staff_client/pages/_attributes.adoc deleted file mode 100644 index fb982443d7..0000000000 --- a/docs-antora/modules/using_staff_client/pages/_attributes.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:moduledir: .. -include::{moduledir}/_attributes.adoc[] diff --git a/docs-antora/modules/using_staff_client/pages/introduction.adoc b/docs-antora/modules/using_staff_client/pages/introduction.adoc deleted file mode 100644 index c6e1d5a298..0000000000 --- a/docs-antora/modules/using_staff_client/pages/introduction.adoc +++ /dev/null @@ -1,8 +0,0 @@ -= Introduction = - -This part of the documentation deals with general Browser Client usage including -logging in, navigation and shortcuts. 
-
-For information about the XUL client, consult the
-http://docs.evergreen-ils.org/2.11/[Evergreen 2.11 documentation].
-
diff --git a/docs-antora/setup_lunr.yml b/docs-antora/setup_lunr.yml
deleted file mode 100644
index 583fe3678d..0000000000
--- a/docs-antora/setup_lunr.yml
+++ /dev/null
@@ -1,24 +0,0 @@
----
-- hosts: 'localhost'
-  connection: local
-  remote_user: user
-  become_method: sudo
-  tasks:
-  - name: Insert const generateIndex
-    lineinfile:
-      path: node_modules/@antora/site-generator-default/lib/generate-site.js
-      insertafter: 'use strict'
-      line: "const generateIndex = require('antora-lunr')"
-
-  - name: Insert const index
-    lineinfile:
-      path: node_modules/@antora/site-generator-default/lib/generate-site.js
-      insertafter: 'const siteFiles = mapSit'
-      line: " const index = generateIndex(playbook, pages)"
-
-  - name: Insert line siteFiles.push(generateIndex.createIndexFile(index))
-    lineinfile:
-      path: node_modules/@antora/site-generator-default/lib/generate-site.js
-      insertafter: 'const index = generateIn'
-      line: " siteFiles.push(generateIndex.createIndexFile(index))"
-...
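The three `lineinfile` tasks in the deleted `setup_lunr.yml` above splice an antora-lunr indexing step into Antora's `generate-site.js`. As a rough, self-contained sketch of the edit they perform — the stand-in source below is illustrative, not the real generator file, and `lineinfile`'s regex matching is simplified here to a first-substring match:

```javascript
// Sketch of the playbook's edit: insert a line into a file's text
// immediately after the first line matching a pattern (a simplification
// of Ansible lineinfile's insertafter behavior).
const insertAfter = (text, pattern, line) => {
  const lines = text.split('\n')
  const i = lines.findIndex((l) => l.includes(pattern))
  if (i === -1) return text // lineinfile would append at EOF; kept simple here
  lines.splice(i + 1, 0, line)
  return lines.join('\n')
}

// Stand-in for the relevant region of generate-site.js (hypothetical, not the real file):
let src = [
  "'use strict'",
  'async function generateSite (args, env) {',
  '  const siteFiles = mapSite(playbook, pages)',
  '}',
].join('\n')

// Apply the three insertions with the same patterns and lines as the playbook:
src = insertAfter(src, "'use strict'", "const generateIndex = require('antora-lunr')")
src = insertAfter(src, 'const siteFiles = mapSit', '  const index = generateIndex(playbook, pages)')
src = insertAfter(src, 'const index = generateIn', '  siteFiles.push(generateIndex.createIndexFile(index))')

console.log(src)
```

Note that the third task's `insertafter` pattern matches the line added by the second task, which is why the playbook's task order matters.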
\ No newline at end of file
diff --git a/docs-antora/site.yml b/docs-antora/site.yml
deleted file mode 100644
index fb26eda3ed..0000000000
--- a/docs-antora/site.yml
+++ /dev/null
@@ -1,17 +0,0 @@
-site:
-  title: Evergreen Documentation
-  start_page: docs:shared:about_this_documentation
-  url: http://localhost/prod
-content:
-  sources:
-  - url: ../
-#  - url: git@git.evergreen-ils.org:working/Evergreen.git
-    branches: LP1848524_antora_ize_docs
-    start_path: docs-antora
-ui:
-  bundle:
-    url: ./../../eg-antora/build/ui-bundle.zip
-  supplemental_files: ./ui/ui-lunr
-
-output:
-  dir: /var/www/html/prod
diff --git a/docs-antora/topics/acquisitions/antora.yml b/docs-antora/topics/acquisitions/antora.yml
deleted file mode 100644
index 2c13efd3f4..0000000000
--- a/docs-antora/topics/acquisitions/antora.yml
+++ /dev/null
@@ -1,5 +0,0 @@
-name: acq
-title: Evergreen Acquisitions Manual
-version: 'latest'
-nav:
-- modules/ROOT/nav.adoc
diff --git a/docs-antora/topics/acquisitions/modules/ROOT/_attributes.adoc b/docs-antora/topics/acquisitions/modules/ROOT/_attributes.adoc
deleted file mode 100644
index dec438a296..0000000000
--- a/docs-antora/topics/acquisitions/modules/ROOT/_attributes.adoc
+++ /dev/null
@@ -1,4 +0,0 @@
-:attachmentsdir: {moduledir}/assets/attachments
-:examplesdir: {moduledir}/examples
-:imagesdir: {moduledir}/assets/images
-:partialsdir: {moduledir}/pages/_partials
diff --git a/docs-antora/topics/acquisitions/modules/ROOT/nav.adoc b/docs-antora/topics/acquisitions/modules/ROOT/nav.adoc
deleted file mode 100644
index df52d41beb..0000000000
--- a/docs-antora/topics/acquisitions/modules/ROOT/nav.adoc
+++ /dev/null
@@ -1,18 +0,0 @@
-* xref:ROOT:index.adoc[Introduction]
-** xref:docs:shared:about_evergreen.adoc[About Evergreen]
-* xref:docs:admin:web_client-login.adoc[Logging into Evergreen]
-* xref:docs:admin:web-client-browser-best-practices.adoc[Best Practices for Using the Browser]
-* xref:docs:shared:workstation_settings.adoc[Configuring Evergreen for Your Workstation]
-* xref:docs:acquisitions:introduction.adoc[Acquisitions]
-* xref:docs:acquisitions:selection_lists_po.adoc[Selection Lists and Purchase Orders]
-* xref:docs:acquisitions:vandelay_acquisitions_integration.adoc[Load MARC Order Records]
-* xref:docs:acquisitions:invoices.adoc[Invoices]
-* xref:docs:acquisitions:receive_items_from_invoice.adoc[]
-* xref:docs:acquisitions:purchase_requests_management.adoc[Managing Patron Purchase Requests]
-* xref:docs:acquisitions:purchase_requests_patron_view.adoc[]
-* xref:docs:opac:using_the_public_access_catalog.adoc[Using the Public Access Catalog]
-* xref:docs:shared:attributions.adoc[Attributions]
-* xref:docs:shared:attributions.adoc[Appendix A. Attributions]
-* xref:docs:shared:licensing.adoc[Appendix B. Licensing]
-* xref:docs:appendix:glossary.adoc[Glossary]
-* xref:docs:shared:index.adoc[Index]
diff --git a/docs-antora/topics/acquisitions/modules/ROOT/pages/index.adoc b/docs-antora/topics/acquisitions/modules/ROOT/pages/index.adoc
deleted file mode 100644
index 86bf5a31c7..0000000000
--- a/docs-antora/topics/acquisitions/modules/ROOT/pages/index.adoc
+++ /dev/null
@@ -1,5 +0,0 @@
-= Evergreen Acquisitions Manual =
-
-This guide to Evergreen is intended to meet the needs of library workers who use
-Evergreen's Acquisitions module. It is organized into Parts, Chapters, and
-Sections addressing key aspects of the software.
diff --git a/docs-antora/ui/.editorconfig b/docs-antora/ui/.editorconfig deleted file mode 100644 index c6c8b36219..0000000000 --- a/docs-antora/ui/.editorconfig +++ /dev/null @@ -1,9 +0,0 @@ -root = true - -[*] -indent_style = space -indent_size = 2 -end_of_line = lf -charset = utf-8 -trim_trailing_whitespace = true -insert_final_newline = true diff --git a/docs-antora/ui/.eslintrc b/docs-antora/ui/.eslintrc deleted file mode 100644 index f8fb261492..0000000000 --- a/docs-antora/ui/.eslintrc +++ /dev/null @@ -1,9 +0,0 @@ -{ - "extends": "standard", - "rules": { - "arrow-parens": ["error", "always"], - "comma-dangle": ["error", "always-multiline"], - "max-len": [1, 120, 2], - "spaced-comment": "off" - } -} diff --git a/docs-antora/ui/.gitignore b/docs-antora/ui/.gitignore deleted file mode 100644 index 57834a1291..0000000000 --- a/docs-antora/ui/.gitignore +++ /dev/null @@ -1,3 +0,0 @@ -/build/ -/node_modules/ -/public/ diff --git a/docs-antora/ui/.gitlab-ci.yml b/docs-antora/ui/.gitlab-ci.yml deleted file mode 100644 index b183e33c59..0000000000 --- a/docs-antora/ui/.gitlab-ci.yml +++ /dev/null @@ -1,55 +0,0 @@ -image: node:10.14.2-stretch -stages: [setup, verify, deploy] -install: - stage: setup - cache: - paths: - - .cache/npm - script: - - &npm_install - npm install --quiet --no-progress --cache=.cache/npm -lint: - stage: verify - cache: &pull_cache - policy: pull - paths: - - .cache/npm - script: - - *npm_install - - node_modules/.bin/gulp lint -bundle-stable: - stage: deploy - only: - - master@antora/antora-ui-default - cache: *pull_cache - script: - - *npm_install - - node_modules/.bin/gulp bundle - artifacts: - paths: - - build/ui-bundle.zip -bundle-dev: - stage: deploy - except: - - master - cache: *pull_cache - script: - - *npm_install - - node_modules/.bin/gulp bundle - artifacts: - expire_in: 1 day # unless marked as keep from job page - paths: - - build/ui-bundle.zip -pages: - stage: deploy - only: - - master@antora/antora-ui-default - cache: 
*pull_cache - script: - - *npm_install - - node_modules/.bin/gulp preview:build - # FIXME figure out a way to avoid copying these files to preview site - - rm -rf public/_/{helpers,layouts,partials} - artifacts: - paths: - - public diff --git a/docs-antora/ui/.gulp.json b/docs-antora/ui/.gulp.json deleted file mode 100644 index 2da9b16c1e..0000000000 --- a/docs-antora/ui/.gulp.json +++ /dev/null @@ -1,4 +0,0 @@ -{ - "description": "Build tasks for the Antora default UI project", - "flags.tasksDepth": 1 -} diff --git a/docs-antora/ui/.nvmrc b/docs-antora/ui/.nvmrc deleted file mode 100644 index f599e28b8a..0000000000 --- a/docs-antora/ui/.nvmrc +++ /dev/null @@ -1 +0,0 @@ -10 diff --git a/docs-antora/ui/.stylelintrc b/docs-antora/ui/.stylelintrc deleted file mode 100644 index 344318f3c5..0000000000 --- a/docs-antora/ui/.stylelintrc +++ /dev/null @@ -1,7 +0,0 @@ -{ - "extends": "stylelint-config-standard", - "rules": { - "comment-empty-line-before": null, - "no-descending-specificity": null, - } -} diff --git a/docs-antora/ui/ui-lunr/css/search.css b/docs-antora/ui/ui-lunr/css/search.css deleted file mode 100644 index d9af4ac3ba..0000000000 --- a/docs-antora/ui/ui-lunr/css/search.css +++ /dev/null @@ -1,115 +0,0 @@ -.navbar-brand .navbar-item + .navbar-item { - flex-grow: 1; - justify-content: flex-end; -} - -@media screen and (min-width: 1024px) { - .navbar-brand { - flex-grow: 1; - } - - .navbar-menu { - flex-grow: 0; - } -} - -#search-input { - color: #333; - font-family: inherit; - font-size: 0.95rem; - width: 150px; - border: 1px solid #dbdbdb; - border-radius: 0.1em; - line-height: 1.5; - padding: 0 0.25em; -} - -@media screen and (min-width: 769px) { - #search-input { - width: 200px; - } -} - -.search-result-dropdown-menu { - position: absolute; - z-index: 100; - display: block; - right: 0; - left: inherit; - top: 100%; - border-radius: 4px; - margin: 6px 0 0; - padding: 0; - text-align: left; - height: auto; - background: transparent; - border: none; - 
max-width: 600px; - min-width: 500px; - box-shadow: 0 1px 0 0 rgba(0, 0, 0, 0.2), 0 2px 3px 0 rgba(0, 0, 0, 0.1); -} - -@media screen and (max-width: 768px) { - .navbar-brand .navbar-item + .navbar-item { - padding-left: 0; - padding-right: 0; - } - - .search-result-dropdown-menu { - min-width: calc(100vw - 3.75rem); - } -} - -.search-result-dataset { - position: relative; - border: 1px solid #d9d9d9; - background: #fff; - border-radius: 4px; - overflow: auto; - padding: 0 8px 8px; - max-height: calc(100vh - 5.25rem); - color: #333; -} - -.search-result-highlight { - color: #174d8c; - background: rgba(143, 187, 237, 0.1); - padding: .1em .05em; -} - -.search-result-item { - display: flex; - font-size: 1rem; - margin-bottom: 0.5rem; - margin-top: 0.5rem; -} - -.search-result-document-title { - width: 33%; - border-right: 1px solid #ddd; - color: #a4a7ae; - font-size: 0.8rem; - padding: 0.25rem 0.5rem 0.25rem 0; - text-align: right; - position: relative; - word-wrap: break-word; -} - -.search-result-document-hit { - flex: 1; - font-size: 0.75em; - color: #02060c; - font-weight: 700; -} - -.search-result-document-hit > a { - color: inherit; - display: block; - padding: 0.5rem 0 0.5rem 1rem; - margin-bottom: 0.25rem; -} - -.search-result-document-hit > a:hover { - background-color: rgba(69, 142, 225, 0.05); -} - diff --git a/docs-antora/ui/ui-lunr/js/vendor/lunr.js b/docs-antora/ui/ui-lunr/js/vendor/lunr.js deleted file mode 100644 index c3537658a6..0000000000 --- a/docs-antora/ui/ui-lunr/js/vendor/lunr.js +++ /dev/null @@ -1,3475 +0,0 @@ -/** - * lunr - http://lunrjs.com - A bit like Solr, but much smaller and not as bright - 2.3.8 - * Copyright (C) 2019 Oliver Nightingale - * @license MIT - */ - -;(function(){ - -/** - * A convenience function for configuring and constructing - * a new lunr Index. - * - * A lunr.Builder instance is created and the pipeline setup - * with a trimmer, stop word filter and stemmer. 
- * - * This builder object is yielded to the configuration function - * that is passed as a parameter, allowing the list of fields - * and other builder parameters to be customised. - * - * All documents _must_ be added within the passed config function. - * - * @example - * var idx = lunr(function () { - * this.field('title') - * this.field('body') - * this.ref('id') - * - * documents.forEach(function (doc) { - * this.add(doc) - * }, this) - * }) - * - * @see {@link lunr.Builder} - * @see {@link lunr.Pipeline} - * @see {@link lunr.trimmer} - * @see {@link lunr.stopWordFilter} - * @see {@link lunr.stemmer} - * @namespace {function} lunr - */ -var lunr = function (config) { - var builder = new lunr.Builder - - builder.pipeline.add( - lunr.trimmer, - lunr.stopWordFilter, - lunr.stemmer - ) - - builder.searchPipeline.add( - lunr.stemmer - ) - - config.call(builder, builder) - return builder.build() -} - -lunr.version = "2.3.8" -/*! - * lunr.utils - * Copyright (C) 2019 Oliver Nightingale - */ - -/** - * A namespace containing utils for the rest of the lunr library - * @namespace lunr.utils - */ -lunr.utils = {} - -/** - * Print a warning message to the console. - * - * @param {String} message The message to be printed. - * @memberOf lunr.utils - * @function - */ -lunr.utils.warn = (function (global) { - /* eslint-disable no-console */ - return function (message) { - if (global.console && console.warn) { - console.warn(message) - } - } - /* eslint-enable no-console */ -})(this) - -/** - * Convert an object to a string. - * - * In the case of `null` and `undefined` the function returns - * the empty string, in all other cases the result of calling - * `toString` on the passed object is returned. - * - * @param {Any} obj The object to convert to a string. - * @return {String} string representation of the passed object. 
- * @memberOf lunr.utils - */ -lunr.utils.asString = function (obj) { - if (obj === void 0 || obj === null) { - return "" - } else { - return obj.toString() - } -} - -/** - * Clones an object. - * - * Will create a copy of an existing object such that any mutations - * on the copy cannot affect the original. - * - * Only shallow objects are supported, passing a nested object to this - * function will cause a TypeError. - * - * Objects with primitives, and arrays of primitives are supported. - * - * @param {Object} obj The object to clone. - * @return {Object} a clone of the passed object. - * @throws {TypeError} when a nested object is passed. - * @memberOf Utils - */ -lunr.utils.clone = function (obj) { - if (obj === null || obj === undefined) { - return obj - } - - var clone = Object.create(null), - keys = Object.keys(obj) - - for (var i = 0; i < keys.length; i++) { - var key = keys[i], - val = obj[key] - - if (Array.isArray(val)) { - clone[key] = val.slice() - continue - } - - if (typeof val === 'string' || - typeof val === 'number' || - typeof val === 'boolean') { - clone[key] = val - continue - } - - throw new TypeError("clone is not deep and does not support nested objects") - } - - return clone -} -lunr.FieldRef = function (docRef, fieldName, stringValue) { - this.docRef = docRef - this.fieldName = fieldName - this._stringValue = stringValue -} - -lunr.FieldRef.joiner = "/" - -lunr.FieldRef.fromString = function (s) { - var n = s.indexOf(lunr.FieldRef.joiner) - - if (n === -1) { - throw "malformed field ref string" - } - - var fieldRef = s.slice(0, n), - docRef = s.slice(n + 1) - - return new lunr.FieldRef (docRef, fieldRef, s) -} - -lunr.FieldRef.prototype.toString = function () { - if (this._stringValue == undefined) { - this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef - } - - return this._stringValue -} -/*! - * lunr.Set - * Copyright (C) 2019 Oliver Nightingale - */ - -/** - * A lunr set. 
- * - * @constructor - */ -lunr.Set = function (elements) { - this.elements = Object.create(null) - - if (elements) { - this.length = elements.length - - for (var i = 0; i < this.length; i++) { - this.elements[elements[i]] = true - } - } else { - this.length = 0 - } -} - -/** - * A complete set that contains all elements. - * - * @static - * @readonly - * @type {lunr.Set} - */ -lunr.Set.complete = { - intersect: function (other) { - return other - }, - - union: function (other) { - return other - }, - - contains: function () { - return true - } -} - -/** - * An empty set that contains no elements. - * - * @static - * @readonly - * @type {lunr.Set} - */ -lunr.Set.empty = { - intersect: function () { - return this - }, - - union: function (other) { - return other - }, - - contains: function () { - return false - } -} - -/** - * Returns true if this set contains the specified object. - * - * @param {object} object - Object whose presence in this set is to be tested. - * @returns {boolean} - True if this set contains the specified object. - */ -lunr.Set.prototype.contains = function (object) { - return !!this.elements[object] -} - -/** - * Returns a new set containing only the elements that are present in both - * this set and the specified set. - * - * @param {lunr.Set} other - set to intersect with this set. - * @returns {lunr.Set} a new set that is the intersection of this and the specified set. 
- */ - -lunr.Set.prototype.intersect = function (other) { - var a, b, elements, intersection = [] - - if (other === lunr.Set.complete) { - return this - } - - if (other === lunr.Set.empty) { - return other - } - - if (this.length < other.length) { - a = this - b = other - } else { - a = other - b = this - } - - elements = Object.keys(a.elements) - - for (var i = 0; i < elements.length; i++) { - var element = elements[i] - if (element in b.elements) { - intersection.push(element) - } - } - - return new lunr.Set (intersection) -} - -/** - * Returns a new set combining the elements of this and the specified set. - * - * @param {lunr.Set} other - set to union with this set. - * @return {lunr.Set} a new set that is the union of this and the specified set. - */ - -lunr.Set.prototype.union = function (other) { - if (other === lunr.Set.complete) { - return lunr.Set.complete - } - - if (other === lunr.Set.empty) { - return this - } - - return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements))) -} -/** - * A function to calculate the inverse document frequency for - * a posting. This is shared between the builder and the index - * - * @private - * @param {object} posting - The posting for a given term - * @param {number} documentCount - The total number of documents. - */ -lunr.idf = function (posting, documentCount) { - var documentsWithTerm = 0 - - for (var fieldName in posting) { - if (fieldName == '_index') continue // Ignore the term index, its not a field - documentsWithTerm += Object.keys(posting[fieldName]).length - } - - var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5) - - return Math.log(1 + Math.abs(x)) -} - -/** - * A token wraps a string representation of a token - * as it is passed through the text processing pipeline. - * - * @constructor - * @param {string} [str=''] - The string token being wrapped. - * @param {object} [metadata={}] - Metadata associated with this token. 
- */ -lunr.Token = function (str, metadata) { - this.str = str || "" - this.metadata = metadata || {} -} - -/** - * Returns the token string that is being wrapped by this object. - * - * @returns {string} - */ -lunr.Token.prototype.toString = function () { - return this.str -} - -/** - * A token update function is used when updating or optionally - * when cloning a token. - * - * @callback lunr.Token~updateFunction - * @param {string} str - The string representation of the token. - * @param {Object} metadata - All metadata associated with this token. - */ - -/** - * Applies the given function to the wrapped string token. - * - * @example - * token.update(function (str, metadata) { - * return str.toUpperCase() - * }) - * - * @param {lunr.Token~updateFunction} fn - A function to apply to the token string. - * @returns {lunr.Token} - */ -lunr.Token.prototype.update = function (fn) { - this.str = fn(this.str, this.metadata) - return this -} - -/** - * Creates a clone of this token. Optionally a function can be - * applied to the cloned token. - * - * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token. - * @returns {lunr.Token} - */ -lunr.Token.prototype.clone = function (fn) { - fn = fn || function (s) { return s } - return new lunr.Token (fn(this.str, this.metadata), this.metadata) -} -/*! - * lunr.tokenizer - * Copyright (C) 2019 Oliver Nightingale - */ - -/** - * A function for splitting a string into tokens ready to be inserted into - * the search index. Uses `lunr.tokenizer.separator` to split strings, change - * the value of this property to change how strings are split into tokens. - * - * This tokenizer will convert its parameter to a string by calling `toString` and - * then will split this string on the character in `lunr.tokenizer.separator`. - * Arrays will have their elements converted to strings and wrapped in a lunr.Token. 
- * - * Optional metadata can be passed to the tokenizer, this metadata will be cloned and - * added as metadata to every token that is created from the object to be tokenized. - * - * @static - * @param {?(string|object|object[])} obj - The object to convert into tokens - * @param {?object} metadata - Optional metadata to associate with every token - * @returns {lunr.Token[]} - * @see {@link lunr.Pipeline} - */ -lunr.tokenizer = function (obj, metadata) { - if (obj == null || obj == undefined) { - return [] - } - - if (Array.isArray(obj)) { - return obj.map(function (t) { - return new lunr.Token( - lunr.utils.asString(t).toLowerCase(), - lunr.utils.clone(metadata) - ) - }) - } - - var str = obj.toString().toLowerCase(), - len = str.length, - tokens = [] - - for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) { - var char = str.charAt(sliceEnd), - sliceLength = sliceEnd - sliceStart - - if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) { - - if (sliceLength > 0) { - var tokenMetadata = lunr.utils.clone(metadata) || {} - tokenMetadata["position"] = [sliceStart, sliceLength] - tokenMetadata["index"] = tokens.length - - tokens.push( - new lunr.Token ( - str.slice(sliceStart, sliceEnd), - tokenMetadata - ) - ) - } - - sliceStart = sliceEnd + 1 - } - - } - - return tokens -} - -/** - * The separator used to split a string into tokens. Override this property to change the behaviour of - * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens. - * - * @static - * @see lunr.tokenizer - */ -lunr.tokenizer.separator = /[\s\-]+/ -/*! - * lunr.Pipeline - * Copyright (C) 2019 Oliver Nightingale - */ - -/** - * lunr.Pipelines maintain an ordered list of functions to be applied to all - * tokens in documents entering the search index and queries being ran against - * the index. 
- * - * An instance of lunr.Index created with the lunr shortcut will contain a - * pipeline with a stop word filter and an English language stemmer. Extra - * functions can be added before or after either of these functions or these - * default functions can be removed. - * - * When run the pipeline will call each function in turn, passing a token, the - * index of that token in the original list of all tokens and finally a list of - * all the original tokens. - * - * The output of functions in the pipeline will be passed to the next function - * in the pipeline. To exclude a token from entering the index the function - * should return undefined, the rest of the pipeline will not be called with - * this token. - * - * For serialisation of pipelines to work, all functions used in an instance of - * a pipeline should be registered with lunr.Pipeline. Registered functions can - * then be loaded. If trying to load a serialised pipeline that uses functions - * that are not registered an error will be thrown. - * - * If not planning on serialising the pipeline then registering pipeline functions - * is not necessary. - * - * @constructor - */ -lunr.Pipeline = function () { - this._stack = [] -} - -lunr.Pipeline.registeredFunctions = Object.create(null) - -/** - * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token - * string as well as all known metadata. A pipeline function can mutate the token string - * or mutate (or add) metadata for a given token. - * - * A pipeline function can indicate that the passed token should be discarded by returning - * null, undefined or an empty string. This token will not be passed to any downstream pipeline - * functions and will not be added to the index. - * - * Multiple tokens can be returned by returning an array of tokens. Each token will be passed - * to any downstream pipeline functions and all will returned tokens will be added to the index. 
- * - * Any number of pipeline functions may be chained together using a lunr.Pipeline. - * - * @interface lunr.PipelineFunction - * @param {lunr.Token} token - A token from the document being processed. - * @param {number} i - The index of this token in the complete list of tokens for this document/field. - * @param {lunr.Token[]} tokens - All tokens for this document/field. - * @returns {(?lunr.Token|lunr.Token[])} - */ - -/** - * Register a function with the pipeline. - * - * Functions that are used in the pipeline should be registered if the pipeline - * needs to be serialised, or a serialised pipeline needs to be loaded. - * - * Registering a function does not add it to a pipeline, functions must still be - * added to instances of the pipeline for them to be used when running a pipeline. - * - * @param {lunr.PipelineFunction} fn - The function to check for. - * @param {String} label - The label to register this function with - */ -lunr.Pipeline.registerFunction = function (fn, label) { - if (label in this.registeredFunctions) { - lunr.utils.warn('Overwriting existing registered function: ' + label) - } - - fn.label = label - lunr.Pipeline.registeredFunctions[fn.label] = fn -} - -/** - * Warns if the function is not registered as a Pipeline function. - * - * @param {lunr.PipelineFunction} fn - The function to check for. - * @private - */ -lunr.Pipeline.warnIfFunctionNotRegistered = function (fn) { - var isRegistered = fn.label && (fn.label in this.registeredFunctions) - - if (!isRegistered) { - lunr.utils.warn('Function is not registered with pipeline. This may cause problems when serialising the index.\n', fn) - } -} - -/** - * Loads a previously serialised pipeline. - * - * All functions to be loaded must already be registered with lunr.Pipeline. - * If any function from the serialised data has not been registered then an - * error will be thrown. - * - * @param {Object} serialised - The serialised pipeline to load. 
- * @returns {lunr.Pipeline} - */ -lunr.Pipeline.load = function (serialised) { - var pipeline = new lunr.Pipeline - - serialised.forEach(function (fnName) { - var fn = lunr.Pipeline.registeredFunctions[fnName] - - if (fn) { - pipeline.add(fn) - } else { - throw new Error('Cannot load unregistered function: ' + fnName) - } - }) - - return pipeline -} - -/** - * Adds new functions to the end of the pipeline. - * - * Logs a warning if the function has not been registered. - * - * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline. - */ -lunr.Pipeline.prototype.add = function () { - var fns = Array.prototype.slice.call(arguments) - - fns.forEach(function (fn) { - lunr.Pipeline.warnIfFunctionNotRegistered(fn) - this._stack.push(fn) - }, this) -} - -/** - * Adds a single function after a function that already exists in the - * pipeline. - * - * Logs a warning if the function has not been registered. - * - * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. - * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. - */ -lunr.Pipeline.prototype.after = function (existingFn, newFn) { - lunr.Pipeline.warnIfFunctionNotRegistered(newFn) - - var pos = this._stack.indexOf(existingFn) - if (pos == -1) { - throw new Error('Cannot find existingFn') - } - - pos = pos + 1 - this._stack.splice(pos, 0, newFn) -} - -/** - * Adds a single function before a function that already exists in the - * pipeline. - * - * Logs a warning if the function has not been registered. - * - * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. - * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. 
- */ -lunr.Pipeline.prototype.before = function (existingFn, newFn) { - lunr.Pipeline.warnIfFunctionNotRegistered(newFn) - - var pos = this._stack.indexOf(existingFn) - if (pos == -1) { - throw new Error('Cannot find existingFn') - } - - this._stack.splice(pos, 0, newFn) -} - -/** - * Removes a function from the pipeline. - * - * @param {lunr.PipelineFunction} fn The function to remove from the pipeline. - */ -lunr.Pipeline.prototype.remove = function (fn) { - var pos = this._stack.indexOf(fn) - if (pos == -1) { - return - } - - this._stack.splice(pos, 1) -} - -/** - * Runs the current list of functions that make up the pipeline against the - * passed tokens. - * - * @param {Array} tokens The tokens to run through the pipeline. - * @returns {Array} - */ -lunr.Pipeline.prototype.run = function (tokens) { - var stackLength = this._stack.length - - for (var i = 0; i < stackLength; i++) { - var fn = this._stack[i] - var memo = [] - - for (var j = 0; j < tokens.length; j++) { - var result = fn(tokens[j], j, tokens) - - if (result === null || result === void 0 || result === '') continue - - if (Array.isArray(result)) { - for (var k = 0; k < result.length; k++) { - memo.push(result[k]) - } - } else { - memo.push(result) - } - } - - tokens = memo - } - - return tokens -} - -/** - * Convenience method for passing a string through a pipeline and getting - * strings out. This method takes care of wrapping the passed string in a - * token and mapping the resulting tokens back to strings. - * - * @param {string} str - The string to pass through the pipeline. - * @param {?object} metadata - Optional metadata to associate with the token - * passed to the pipeline. - * @returns {string[]} - */ -lunr.Pipeline.prototype.runString = function (str, metadata) { - var token = new lunr.Token (str, metadata) - - return this.run([token]).map(function (t) { - return t.toString() - }) -} - -/** - * Resets the pipeline by removing any existing processors. 
 *
 */
lunr.Pipeline.prototype.reset = function () {
  this._stack = []
}

/**
 * Returns a representation of the pipeline ready for serialisation.
 *
 * Logs a warning if the function has not been registered.
 *
 * @returns {Array}
 */
lunr.Pipeline.prototype.toJSON = function () {
  return this._stack.map(function (fn) {
    lunr.Pipeline.warnIfFunctionNotRegistered(fn)

    return fn.label
  })
}
/*!
 * lunr.Vector
 * Copyright (C) 2019 Oliver Nightingale
 */

/**
 * A vector is used to construct the vector space of documents and queries. These
 * vectors support operations to determine the similarity between two documents or
 * a document and a query.
 *
 * Normally no parameters are required for initializing a vector, but in the case of
 * loading a previously dumped vector the raw elements can be provided to the constructor.
 *
 * For performance reasons vectors are implemented with a flat array, where an element's
 * index is immediately followed by its value. E.g. [index, value, index, value]. This
 * allows the underlying array to be as sparse as possible and still offer decent
 * performance when being used for vector calculations.
 *
 * @constructor
 * @param {Number[]} [elements] - The flat list of element index and element value pairs.
 */
lunr.Vector = function (elements) {
  this._magnitude = 0
  this.elements = elements || []
}

/**
 * Calculates the position within the vector to insert a given index.
 *
 * This is used internally by insert and upsert. If there are duplicate indexes then
 * the position is returned as if the value for that index were to be updated, but it
 * is the caller's responsibility to check whether there is a duplicate at that index.
 *
 * @param {Number} index - The index at which the element should be inserted.
 * @returns {Number}
 */
lunr.Vector.prototype.positionForIndex = function (index) {
  // For an empty vector the tuple can be inserted at the beginning
  if (this.elements.length == 0) {
    return 0
  }

  var start = 0,
      end = this.elements.length / 2,
      sliceLength = end - start,
      pivotPoint = Math.floor(sliceLength / 2),
      pivotIndex = this.elements[pivotPoint * 2]

  while (sliceLength > 1) {
    if (pivotIndex < index) {
      start = pivotPoint
    }

    if (pivotIndex > index) {
      end = pivotPoint
    }

    if (pivotIndex == index) {
      break
    }

    sliceLength = end - start
    pivotPoint = start + Math.floor(sliceLength / 2)
    pivotIndex = this.elements[pivotPoint * 2]
  }

  if (pivotIndex == index) {
    return pivotPoint * 2
  }

  if (pivotIndex > index) {
    return pivotPoint * 2
  }

  if (pivotIndex < index) {
    return (pivotPoint + 1) * 2
  }
}

/**
 * Inserts an element at an index within the vector.
 *
 * Does not allow duplicates; will throw an error if there is already an entry
 * for this index.
 *
 * @param {Number} insertIdx - The index at which the element should be inserted.
 * @param {Number} val - The value to be inserted into the vector.
 */
lunr.Vector.prototype.insert = function (insertIdx, val) {
  this.upsert(insertIdx, val, function () {
    throw "duplicate index"
  })
}

/**
 * Inserts or updates an existing index within the vector.
 *
 * @param {Number} insertIdx - The index at which the element should be inserted.
 * @param {Number} val - The value to be inserted into the vector.
 * @param {function} fn - A function that is called for updates, the existing value and the
 * requested value are passed as arguments
 */
lunr.Vector.prototype.upsert = function (insertIdx, val, fn) {
  this._magnitude = 0
  var position = this.positionForIndex(insertIdx)

  if (this.elements[position] == insertIdx) {
    this.elements[position + 1] = fn(this.elements[position + 1], val)
  } else {
    this.elements.splice(position, 0, insertIdx, val)
  }
}

/**
 * Calculates the magnitude of this vector.
 *
 * @returns {Number}
 */
lunr.Vector.prototype.magnitude = function () {
  if (this._magnitude) return this._magnitude

  var sumOfSquares = 0,
      elementsLength = this.elements.length

  for (var i = 1; i < elementsLength; i += 2) {
    var val = this.elements[i]
    sumOfSquares += val * val
  }

  return this._magnitude = Math.sqrt(sumOfSquares)
}

/**
 * Calculates the dot product of this vector and another vector.
 *
 * @param {lunr.Vector} otherVector - The vector to compute the dot product with.
 * @returns {Number}
 */
lunr.Vector.prototype.dot = function (otherVector) {
  var dotProduct = 0,
      a = this.elements, b = otherVector.elements,
      aLen = a.length, bLen = b.length,
      aVal = 0, bVal = 0,
      i = 0, j = 0

  while (i < aLen && j < bLen) {
    aVal = a[i], bVal = b[j]
    if (aVal < bVal) {
      i += 2
    } else if (aVal > bVal) {
      j += 2
    } else if (aVal == bVal) {
      dotProduct += a[i + 1] * b[j + 1]
      i += 2
      j += 2
    }
  }

  return dotProduct
}

/**
 * Calculates the similarity between this vector and another vector.
 *
 * @param {lunr.Vector} otherVector - The other vector to calculate the
 * similarity with.
 * @returns {Number}
 */
lunr.Vector.prototype.similarity = function (otherVector) {
  return this.dot(otherVector) / this.magnitude() || 0
}

/**
 * Converts the vector to an array of the elements within the vector.
 *
 * @returns {Number[]}
 */
lunr.Vector.prototype.toArray = function () {
  var output = new Array (this.elements.length / 2)

  for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) {
    output[j] = this.elements[i]
  }

  return output
}

/**
 * A JSON serializable representation of the vector.
 *
 * @returns {Number[]}
 */
lunr.Vector.prototype.toJSON = function () {
  return this.elements
}
/* eslint-disable */
/*!
 * lunr.stemmer
 * Copyright (C) 2019 Oliver Nightingale
 * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt
 */

/**
 * lunr.stemmer is an english language stemmer, this is a JavaScript
 * implementation of the PorterStemmer taken from http://tartarus.org/~martin
 *
 * @static
 * @implements {lunr.PipelineFunction}
 * @param {lunr.Token} token - The string to stem
 * @returns {lunr.Token}
 * @see {@link lunr.Pipeline}
 * @function
 */
lunr.stemmer = (function(){
  var step2list = {
      "ational" : "ate",
      "tional" : "tion",
      "enci" : "ence",
      "anci" : "ance",
      "izer" : "ize",
      "bli" : "ble",
      "alli" : "al",
      "entli" : "ent",
      "eli" : "e",
      "ousli" : "ous",
      "ization" : "ize",
      "ation" : "ate",
      "ator" : "ate",
      "alism" : "al",
      "iveness" : "ive",
      "fulness" : "ful",
      "ousness" : "ous",
      "aliti" : "al",
      "iviti" : "ive",
      "biliti" : "ble",
      "logi" : "log"
    },

    step3list = {
      "icate" : "ic",
      "ative" : "",
      "alize" : "al",
      "iciti" : "ic",
      "ical" : "ic",
      "ful" : "",
      "ness" : ""
    },

    c = "[^aeiou]",       // consonant
    v = "[aeiouy]",       // vowel
    C = c + "[^aeiouy]*", // consonant sequence
    V = v + "[aeiou]*",   // vowel sequence

    mgr0 = "^(" + C + ")?" + V + C,                   // [C]VC... is m>0
    meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$", // [C]VC[V] is m=1
    mgr1 = "^(" + C + ")?" + V + C + V + C,           // [C]VCVC... is m>1
    s_v = "^(" + C + ")?" + v;                        // vowel in stem

  var re_mgr0 = new RegExp(mgr0);
  var re_mgr1 = new RegExp(mgr1);
  var re_meq1 = new RegExp(meq1);
  var re_s_v = new RegExp(s_v);

  var re_1a = /^(.+?)(ss|i)es$/;
  var re2_1a = /^(.+?)([^s])s$/;
  var re_1b = /^(.+?)eed$/;
  var re2_1b = /^(.+?)(ed|ing)$/;
  var re_1b_2 = /.$/;
  var re2_1b_2 = /(at|bl|iz)$/;
  var re3_1b_2 = new RegExp("([^aeiouylsz])\\1$");
  var re4_1b_2 = new RegExp("^" + C + v + "[^aeiouwxy]$");

  var re_1c = /^(.+?[^aeiou])y$/;
  var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;

  var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;

  var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
  var re2_4 = /^(.+?)(s|t)(ion)$/;

  var re_5 = /^(.+?)e$/;
  var re_5_1 = /ll$/;
  var re3_5 = new RegExp("^" + C + v + "[^aeiouwxy]$");

  var porterStemmer = function porterStemmer(w) {
    var stem,
      suffix,
      firstch,
      re,
      re2,
      re3,
      re4;

    if (w.length < 3) { return w; }

    firstch = w.substr(0,1);
    if (firstch == "y") {
      w = firstch.toUpperCase() + w.substr(1);
    }

    // Step 1a
    re = re_1a
    re2 = re2_1a;

    if (re.test(w)) { w = w.replace(re,"$1$2"); }
    else if (re2.test(w)) { w = w.replace(re2,"$1$2"); }

    // Step 1b
    re = re_1b;
    re2 = re2_1b;
    if (re.test(w)) {
      var fp = re.exec(w);
      re = re_mgr0;
      if (re.test(fp[1])) {
        re = re_1b_2;
        w = w.replace(re,"");
      }
    } else if (re2.test(w)) {
      var fp = re2.exec(w);
      stem = fp[1];
      re2 = re_s_v;
      if (re2.test(stem)) {
        w = stem;
        re2 = re2_1b_2;
        re3 = re3_1b_2;
        re4 = re4_1b_2;
        if (re2.test(w)) { w = w + "e"; }
        else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,""); }
        else if (re4.test(w)) { w = w + "e"; }
      }
    }

    // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say)
    re = re_1c;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      w = stem + "i";
    }

    // Step 2
    re = re_2;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      suffix = fp[2];
      re = re_mgr0;
      if (re.test(stem)) {
        w = stem + step2list[suffix];
      }
    }

    // Step 3
    re = re_3;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      suffix = fp[2];
      re = re_mgr0;
      if (re.test(stem)) {
        w = stem + step3list[suffix];
      }
    }

    // Step 4
    re = re_4;
    re2 = re2_4;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      re = re_mgr1;
      if (re.test(stem)) {
        w = stem;
      }
    } else if (re2.test(w)) {
      var fp = re2.exec(w);
      stem = fp[1] + fp[2];
      re2 = re_mgr1;
      if (re2.test(stem)) {
        w = stem;
      }
    }

    // Step 5
    re = re_5;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      re = re_mgr1;
      re2 = re_meq1;
      re3 = re3_5;
      if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) {
        w = stem;
      }
    }

    re = re_5_1;
    re2 = re_mgr1;
    if (re.test(w) && re2.test(w)) {
      re = re_1b_2;
      w = w.replace(re,"");
    }

    // and turn initial Y back to y

    if (firstch == "y") {
      w = firstch.toLowerCase() + w.substr(1);
    }

    return w;
  };

  return function (token) {
    return token.update(porterStemmer);
  }
})();

lunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer')
/*!
 * lunr.stopWordFilter
 * Copyright (C) 2019 Oliver Nightingale
 */

/**
 * lunr.generateStopWordFilter builds a stopWordFilter function from the provided
 * list of stop words.
 *
 * The built in lunr.stopWordFilter is built using this generator and can be used
 * to generate custom stopWordFilters for applications or non English languages.
 *
 * @function
 * @param {Array} stopWords - The list of stop words to filter out.
 * @returns {lunr.PipelineFunction}
 * @see lunr.Pipeline
 * @see lunr.stopWordFilter
 */
lunr.generateStopWordFilter = function (stopWords) {
  var words = stopWords.reduce(function (memo, stopWord) {
    memo[stopWord] = stopWord
    return memo
  }, {})

  return function (token) {
    if (token && words[token.toString()] !== token.toString()) return token
  }
}

/**
 * lunr.stopWordFilter is an English language stop word list filter, any words
 * contained in the list will not be passed through the filter.
 *
 * This is intended to be used in the Pipeline. If the token does not pass the
 * filter then undefined will be returned.
 *
 * @function
 * @implements {lunr.PipelineFunction}
 * @param {lunr.Token} token - A token to check for being a stop word.
 * @returns {lunr.Token}
 * @see {@link lunr.Pipeline}
 */
lunr.stopWordFilter = lunr.generateStopWordFilter([
  'a', 'able', 'about', 'across', 'after', 'all', 'almost', 'also', 'am',
  'among', 'an', 'and', 'any', 'are', 'as', 'at', 'be', 'because', 'been',
  'but', 'by', 'can', 'cannot', 'could', 'dear', 'did', 'do', 'does',
  'either', 'else', 'ever', 'every', 'for', 'from', 'get', 'got', 'had',
  'has', 'have', 'he', 'her', 'hers', 'him', 'his', 'how', 'however', 'i',
  'if', 'in', 'into', 'is', 'it', 'its', 'just', 'least', 'let', 'like',
  'likely', 'may', 'me', 'might', 'most', 'must', 'my', 'neither', 'no',
  'nor', 'not', 'of', 'off', 'often', 'on', 'only', 'or', 'other', 'our',
  'own', 'rather', 'said', 'say', 'says', 'she', 'should', 'since', 'so',
  'some', 'than', 'that', 'the', 'their', 'them', 'then', 'there', 'these',
  'they', 'this', 'tis', 'to', 'too', 'twas', 'us', 'wants', 'was', 'we',
  'were', 'what', 'when', 'where', 'which', 'while', 'who', 'whom', 'why',
  'will', 'with', 'would', 'yet', 'you', 'your'
])

lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter')
/*!
 * lunr.trimmer
 * Copyright (C) 2019 Oliver Nightingale
 */

/**
 * lunr.trimmer is a pipeline function for trimming non word
 * characters from the beginning and end of tokens before they
 * enter the index.
 *
 * This implementation may not work correctly for non latin
 * characters and should either be removed or adapted for use
 * with languages with non-latin characters.
 *
 * @static
 * @implements {lunr.PipelineFunction}
 * @param {lunr.Token} token The token to pass through the filter
 * @returns {lunr.Token}
 * @see lunr.Pipeline
 */
lunr.trimmer = function (token) {
  return token.update(function (s) {
    return s.replace(/^\W+/, '').replace(/\W+$/, '')
  })
}

lunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer')
/*!
 * lunr.TokenSet
 * Copyright (C) 2019 Oliver Nightingale
 */

/**
 * A token set is used to store the unique list of all tokens
 * within an index. Token sets are also used to represent an
 * incoming query to the index; this query token set and index
 * token set are then intersected to find which tokens to look
 * up in the inverted index.
 *
 * A token set can hold multiple tokens, as in the case of the
 * index token set, or it can hold a single token as in the
 * case of a simple query token set.
 *
 * Additionally token sets are used to perform wildcard matching.
 * Leading, contained and trailing wildcards are supported, and
 * from this edit distance matching can also be provided.
 *
 * Token sets are implemented as a minimal finite state automaton,
 * where both common prefixes and suffixes are shared between tokens.
 * This helps to reduce the space used for storing the token set.
 *
 * @constructor
 */
lunr.TokenSet = function () {
  this.final = false
  this.edges = {}
  this.id = lunr.TokenSet._nextId
  lunr.TokenSet._nextId += 1
}

/**
 * Keeps track of the next, auto increment, identifier to assign
 * to a new tokenSet.
 *
 * TokenSets require a unique identifier to be correctly minimised.
 *
 * @private
 */
lunr.TokenSet._nextId = 1

/**
 * Creates a TokenSet instance from the given sorted array of words.
 *
 * @param {String[]} arr - A sorted array of strings to create the set from.
 * @returns {lunr.TokenSet}
 * @throws Will throw an error if the input array is not sorted.
 */
lunr.TokenSet.fromArray = function (arr) {
  var builder = new lunr.TokenSet.Builder

  for (var i = 0, len = arr.length; i < len; i++) {
    builder.insert(arr[i])
  }

  builder.finish()
  return builder.root
}

/**
 * Creates a token set from a query clause.
 *
 * @private
 * @param {Object} clause - A single clause from lunr.Query.
 * @param {string} clause.term - The query clause term.
 * @param {number} [clause.editDistance] - The optional edit distance for the term.
 * @returns {lunr.TokenSet}
 */
lunr.TokenSet.fromClause = function (clause) {
  if ('editDistance' in clause) {
    return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance)
  } else {
    return lunr.TokenSet.fromString(clause.term)
  }
}

/**
 * Creates a token set representing a single string with a specified
 * edit distance.
 *
 * Insertions, deletions, substitutions and transpositions are each
 * treated as an edit distance of 1.
 *
 * Increasing the allowed edit distance will have a dramatic impact
 * on the performance of both creating and intersecting these TokenSets.
 * It is advised to keep the edit distance less than 3.
 *
 * @param {string} str - The string to create the token set from.
 * @param {number} editDistance - The allowed edit distance to match.
 * @returns {lunr.TokenSet}
 */
lunr.TokenSet.fromFuzzyString = function (str, editDistance) {
  var root = new lunr.TokenSet

  var stack = [{
    node: root,
    editsRemaining: editDistance,
    str: str
  }]

  while (stack.length) {
    var frame = stack.pop()

    // no edit
    if (frame.str.length > 0) {
      var char = frame.str.charAt(0),
          noEditNode

      if (char in frame.node.edges) {
        noEditNode = frame.node.edges[char]
      } else {
        noEditNode = new lunr.TokenSet
        frame.node.edges[char] = noEditNode
      }

      if (frame.str.length == 1) {
        noEditNode.final = true
      }

      stack.push({
        node: noEditNode,
        editsRemaining: frame.editsRemaining,
        str: frame.str.slice(1)
      })
    }

    if (frame.editsRemaining == 0) {
      continue
    }

    // insertion
    if ("*" in frame.node.edges) {
      var insertionNode = frame.node.edges["*"]
    } else {
      var insertionNode = new lunr.TokenSet
      frame.node.edges["*"] = insertionNode
    }

    if (frame.str.length == 0) {
      insertionNode.final = true
    }

    stack.push({
      node: insertionNode,
      editsRemaining: frame.editsRemaining - 1,
      str: frame.str
    })

    // deletion
    // can only do a deletion if we have enough edits remaining
    // and if there are characters left to delete in the string
    if (frame.str.length > 1) {
      stack.push({
        node: frame.node,
        editsRemaining: frame.editsRemaining - 1,
        str: frame.str.slice(1)
      })
    }

    // deletion
    // just removing the last character from the str
    if (frame.str.length == 1) {
      frame.node.final = true
    }

    // substitution
    // can only do a substitution if we have enough edits remaining
    // and if there are characters left to substitute
    if (frame.str.length >= 1) {
      if ("*" in frame.node.edges) {
        var substitutionNode = frame.node.edges["*"]
      } else {
        var substitutionNode = new lunr.TokenSet
        frame.node.edges["*"] = substitutionNode
      }

      if (frame.str.length == 1) {
        substitutionNode.final = true
      }

      stack.push({
        node: substitutionNode,
        editsRemaining: frame.editsRemaining - 1,
        str: frame.str.slice(1)
      })
    }

    // transposition
    // can only do a transposition if there are edits remaining
    // and there are enough characters to transpose
    if (frame.str.length > 1) {
      var charA = frame.str.charAt(0),
          charB = frame.str.charAt(1),
          transposeNode

      if (charB in frame.node.edges) {
        transposeNode = frame.node.edges[charB]
      } else {
        transposeNode = new lunr.TokenSet
        frame.node.edges[charB] = transposeNode
      }

      if (frame.str.length == 1) {
        transposeNode.final = true
      }

      stack.push({
        node: transposeNode,
        editsRemaining: frame.editsRemaining - 1,
        str: charA + frame.str.slice(2)
      })
    }
  }

  return root
}

/**
 * Creates a TokenSet from a string.
 *
 * The string may contain one or more wildcard characters (*)
 * that will allow wildcard matching when intersecting with
 * another TokenSet.
 *
 * @param {string} str - The string to create a TokenSet from.
 * @returns {lunr.TokenSet}
 */
lunr.TokenSet.fromString = function (str) {
  var node = new lunr.TokenSet,
      root = node

  /*
   * Iterates through all characters within the passed string
   * appending a node for each character.
   *
   * When a wildcard character is found then a self
   * referencing edge is introduced to continually match
   * any number of any characters.
   */
  for (var i = 0, len = str.length; i < len; i++) {
    var char = str[i],
        final = (i == len - 1)

    if (char == "*") {
      node.edges[char] = node
      node.final = final

    } else {
      var next = new lunr.TokenSet
      next.final = final

      node.edges[char] = next
      node = next
    }
  }

  return root
}

/**
 * Converts this TokenSet into an array of strings
 * contained within the TokenSet.
 *
 * This is not intended to be used on a TokenSet that
 * contains wildcards, in these cases the results are
 * undefined and are likely to cause an infinite loop.
 *
 * @returns {string[]}
 */
lunr.TokenSet.prototype.toArray = function () {
  var words = []

  var stack = [{
    prefix: "",
    node: this
  }]

  while (stack.length) {
    var frame = stack.pop(),
        edges = Object.keys(frame.node.edges),
        len = edges.length

    if (frame.node.final) {
      /* In Safari, at this point the prefix is sometimes corrupted, see:
       * https://github.com/olivernn/lunr.js/issues/279 Calling any
       * String.prototype method forces Safari to "cast" this string to what
       * it's supposed to be, fixing the bug. */
      frame.prefix.charAt(0)
      words.push(frame.prefix)
    }

    for (var i = 0; i < len; i++) {
      var edge = edges[i]

      stack.push({
        prefix: frame.prefix.concat(edge),
        node: frame.node.edges[edge]
      })
    }
  }

  return words
}

/**
 * Generates a string representation of a TokenSet.
 *
 * This is intended to allow TokenSets to be used as keys
 * in objects, largely to aid the construction and minimisation
 * of a TokenSet. As such it is not designed to be a human
 * friendly representation of the TokenSet.
 *
 * @returns {string}
 */
lunr.TokenSet.prototype.toString = function () {
  // NOTE: Using Object.keys here as this.edges is very likely
  // to enter 'hash-mode' with many keys being added
  //
  // avoiding a for-in loop here as it leads to the function
  // being de-optimised (at least in V8). From some simple
  // benchmarks the performance is comparable, but allowing
  // V8 to optimize may mean easy performance wins in the future.

  if (this._str) {
    return this._str
  }

  var str = this.final ? '1' : '0',
      labels = Object.keys(this.edges).sort(),
      len = labels.length

  for (var i = 0; i < len; i++) {
    var label = labels[i],
        node = this.edges[label]

    str = str + label + node.id
  }

  return str
}

/**
 * Returns a new TokenSet that is the intersection of
 * this TokenSet and the passed TokenSet.
 *
 * This intersection will take into account any wildcards
 * contained within the TokenSet.
 *
 * @param {lunr.TokenSet} b - Another TokenSet to intersect with.
 * @returns {lunr.TokenSet}
 */
lunr.TokenSet.prototype.intersect = function (b) {
  var output = new lunr.TokenSet,
      frame = undefined

  var stack = [{
    qNode: b,
    output: output,
    node: this
  }]

  while (stack.length) {
    frame = stack.pop()

    // NOTE: As with the #toString method, we are using
    // Object.keys and a for loop instead of a for-in loop
    // as both of these objects enter 'hash' mode, causing
    // the function to be de-optimised in V8
    var qEdges = Object.keys(frame.qNode.edges),
        qLen = qEdges.length,
        nEdges = Object.keys(frame.node.edges),
        nLen = nEdges.length

    for (var q = 0; q < qLen; q++) {
      var qEdge = qEdges[q]

      for (var n = 0; n < nLen; n++) {
        var nEdge = nEdges[n]

        if (nEdge == qEdge || qEdge == '*') {
          var node = frame.node.edges[nEdge],
              qNode = frame.qNode.edges[qEdge],
              final = node.final && qNode.final,
              next = undefined

          if (nEdge in frame.output.edges) {
            // an edge already exists for this character
            // no need to create a new node, just set the finality
            // bit unless this node is already final
            next = frame.output.edges[nEdge]
            next.final = next.final || final

          } else {
            // no edge exists yet, must create one
            // set the finality bit and insert it
            // into the output
            next = new lunr.TokenSet
            next.final = final
            frame.output.edges[nEdge] = next
          }

          stack.push({
            qNode: qNode,
            output: next,
            node: node
          })
        }
      }
    }
  }

  return output
}
lunr.TokenSet.Builder = function () {
  this.previousWord = ""
  this.root = new lunr.TokenSet
  this.uncheckedNodes = []
  this.minimizedNodes = {}
}

lunr.TokenSet.Builder.prototype.insert = function (word) {
  var node,
      commonPrefix = 0

  if (word < this.previousWord) {
    throw new Error ("Out of order word insertion")
  }

  for (var i = 0; i < word.length && i < this.previousWord.length; i++) {
    if (word[i] != this.previousWord[i]) break
    commonPrefix++
  }

  this.minimize(commonPrefix)

  if (this.uncheckedNodes.length == 0) {
    node = this.root
  } else {
    node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child
  }

  for (var i = commonPrefix; i < word.length; i++) {
    var nextNode = new lunr.TokenSet,
        char = word[i]

    node.edges[char] = nextNode

    this.uncheckedNodes.push({
      parent: node,
      char: char,
      child: nextNode
    })

    node = nextNode
  }

  node.final = true
  this.previousWord = word
}

lunr.TokenSet.Builder.prototype.finish = function () {
  this.minimize(0)
}

lunr.TokenSet.Builder.prototype.minimize = function (downTo) {
  for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) {
    var node = this.uncheckedNodes[i],
        childKey = node.child.toString()

    if (childKey in this.minimizedNodes) {
      node.parent.edges[node.char] = this.minimizedNodes[childKey]
    } else {
      // Cache the key for this node since
      // we know it can't change anymore
      node.child._str = childKey

      this.minimizedNodes[childKey] = node.child
    }

    this.uncheckedNodes.pop()
  }
}
/*!
 * lunr.Index
 * Copyright (C) 2019 Oliver Nightingale
 */

/**
 * An index contains the built index of all documents and provides a query interface
 * to the index.
 *
 * Usually instances of lunr.Index will not be created using this constructor, instead
 * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be
 * used to load previously built and serialized indexes.
 *
 * @constructor
 * @param {Object} attrs - The attributes of the built search index.
 * @param {Object} attrs.invertedIndex - An index of term/field to document reference.
 * @param {Object} attrs.fieldVectors - Field vectors
 * @param {lunr.TokenSet} attrs.tokenSet - A set of all corpus tokens.
 * @param {string[]} attrs.fields - The names of indexed document fields.
 * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms.
 */
lunr.Index = function (attrs) {
  this.invertedIndex = attrs.invertedIndex
  this.fieldVectors = attrs.fieldVectors
  this.tokenSet = attrs.tokenSet
  this.fields = attrs.fields
  this.pipeline = attrs.pipeline
}

/**
 * A result contains details of a document matching a search query.
 * @typedef {Object} lunr.Index~Result
 * @property {string} ref - The reference of the document this result represents.
 * @property {number} score - A number between 0 and 1 representing how similar this document is to the query.
 * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match.
 */

/**
 * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple
 * query language which itself is parsed into an instance of lunr.Query.
 *
 * For programmatically building queries it is advised to directly use lunr.Query; the query language
 * is best used for human entered text rather than program generated text.
 *
 * At its simplest queries can just be a single term, e.g. `hello`. Multiple terms are also supported
 * and will be combined with OR, e.g. `hello world` will match documents that contain either 'hello'
 * or 'world', though those that contain both will rank higher in the results.
 *
 * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can
 * be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding
 * wildcards will increase the number of documents that will be found but can also have a negative
 * impact on query performance, especially with wildcards at the beginning of a term.
 *
 * Terms can be restricted to specific fields, e.g. `title:hello`; only documents with the term
 * hello in the title field will match this query. Using a field not present in the index will lead
 * to an error being thrown.
 *
 * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term
 * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported
 * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2.
 * Avoid large values for edit distance to improve query performance.
 *
 * Each term also supports a presence modifier. By default a term's presence in a document is optional, however
 * this can be changed to either required or prohibited. For a term's presence to be required in a document the
 * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and
 * optionally contain 'bar'. Conversely a leading '-' sets the term's presence to prohibited, i.e. it must not
 * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'.
 *
 * To escape special characters the backslash character '\' can be used, this allows searches to include
 * characters that would normally be considered modifiers, e.g. `foo\~2` will search for a term "foo~2" instead
 * of attempting to apply a boost of 2 to the search term "foo".
 *
 * @typedef {string} lunr.Index~QueryString
 * @example Simple single term query
 * hello
 * @example Multiple term query
 * hello world
 * @example term scoped to a field
 * title:hello
 * @example term with a boost of 10
 * hello^10
 * @example term with an edit distance of 2
 * hello~2
 * @example terms with presence modifiers
 * -foo +bar baz
 */

/**
 * Performs a search against the index using lunr query syntax.
 *
 * Results will be returned sorted by their score, the most relevant results
 * will be returned first. For details on how the score is calculated, please see
 * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}.
 *
 * For more programmatic querying use lunr.Index#query.
 *
 * @param {lunr.Index~QueryString} queryString - A string containing a lunr query.
 * @throws {lunr.QueryParseError} If the passed query string cannot be parsed.
 * @returns {lunr.Index~Result[]}
 */
lunr.Index.prototype.search = function (queryString) {
  return this.query(function (query) {
    var parser = new lunr.QueryParser(queryString, query)
    parser.parse()
  })
}

/**
 * A query builder callback provides a query object to be used to express
 * the query to perform on the index.
 *
 * @callback lunr.Index~queryBuilder
 * @param {lunr.Query} query - The query object to build up.
 * @this lunr.Query
 */

/**
 * Performs a query against the index using the yielded lunr.Query object.
 *
 * If performing programmatic queries against the index, this method is preferred
 * over lunr.Index#search so as to avoid the additional query parsing overhead.
 *
 * A query object is yielded to the supplied function which should be used to
 * express the query to be run against the index.
 *
 * Note that although this function takes a callback parameter it is _not_ an
 * asynchronous operation, the callback is just yielded a query object to be
 * customized.
 *
 * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query.
 * @returns {lunr.Index~Result[]}
 */
lunr.Index.prototype.query = function (fn) {
  // for each query clause
  // * process terms
  // * expand terms from token set
  // * find matching documents and metadata
  // * get document vectors
  // * score documents

  var query = new lunr.Query(this.fields),
      matchingFields = Object.create(null),
      queryVectors = Object.create(null),
      termFieldCache = Object.create(null),
      requiredMatches = Object.create(null),
      prohibitedMatches = Object.create(null)

  /*
   * To support field level boosts a query vector is created per
   * field. An empty vector is eagerly created to support negated
   * queries.
   */
  for (var i = 0; i < this.fields.length; i++) {
    queryVectors[this.fields[i]] = new lunr.Vector
  }

  fn.call(query, query)

  for (var i = 0; i < query.clauses.length; i++) {
    /*
     * Unless the pipeline has been disabled for this term, which is
     * the case for terms with wildcards, we need to pass the clause
     * term through the search pipeline. A pipeline returns an array
     * of processed terms. Pipeline functions may expand the passed
     * term, which means we may end up performing multiple index lookups
     * for a single query term.
     */
    var clause = query.clauses[i],
        terms = null,
        clauseMatches = lunr.Set.complete

    if (clause.usePipeline) {
      terms = this.pipeline.runString(clause.term, {
        fields: clause.fields
      })
    } else {
      terms = [clause.term]
    }

    for (var m = 0; m < terms.length; m++) {
      var term = terms[m]

      /*
       * Each term returned from the pipeline needs to use the same query
       * clause object, e.g. the same boost and or edit distance. The
       * simplest way to do this is to re-use the clause object but mutate
       * its term property.
       */
      clause.term = term

      /*
       * From the term in the clause we create a token set which will then
       * be used to intersect the indexes token set to get a list of terms
       * to lookup in the inverted index
       */
      var termTokenSet = lunr.TokenSet.fromClause(clause),
          expandedTerms = this.tokenSet.intersect(termTokenSet).toArray()

      /*
       * If a term marked as required does not exist in the tokenSet it is
       * impossible for the search to return any matches. We set all the field
       * scoped required matches set to empty and stop examining any further
       * clauses.
       */
      if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) {
        for (var k = 0; k < clause.fields.length; k++) {
          var field = clause.fields[k]
          requiredMatches[field] = lunr.Set.empty
        }

        break
      }

      for (var j = 0; j < expandedTerms.length; j++) {
        /*
         * For each term get the posting and termIndex, this is required for
         * building the query vector.
         */
        var expandedTerm = expandedTerms[j],
            posting = this.invertedIndex[expandedTerm],
            termIndex = posting._index

        for (var k = 0; k < clause.fields.length; k++) {
          /*
           * For each field that this query term is scoped by (by default
           * all fields are in scope) we need to get all the document refs
           * that have this term in that field.
           *
           * The posting is the entry in the invertedIndex for the matching
           * term from above.
           */
          var field = clause.fields[k],
              fieldPosting = posting[field],
              matchingDocumentRefs = Object.keys(fieldPosting),
              termField = expandedTerm + "/" + field,
              matchingDocumentsSet = new lunr.Set(matchingDocumentRefs)

          /*
           * if the presence of this term is required ensure that the matching
           * documents are added to the set of required matches for this clause.
           *
           */
          if (clause.presence == lunr.Query.presence.REQUIRED) {
            clauseMatches = clauseMatches.union(matchingDocumentsSet)

            if (requiredMatches[field] === undefined) {
              requiredMatches[field] = lunr.Set.complete
            }
          }

          /*
           * if the presence of this term is prohibited ensure that the matching
           * documents are added to the set of prohibited matches for this field,
           * creating that set if it does not yet exist.
           */
          if (clause.presence == lunr.Query.presence.PROHIBITED) {
            if (prohibitedMatches[field] === undefined) {
              prohibitedMatches[field] = lunr.Set.empty
            }

            prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet)

            /*
             * Prohibited matches should not be part of the query vector used for
             * similarity scoring and no metadata should be extracted so we continue
             * to the next field
             */
            continue
          }

          /*
           * The query field vector is populated using the termIndex found for
           * the term and a unit value with the appropriate boost applied.
           * Using upsert because there could already be an entry in the vector
           * for the term we are working with. In that case we just add the scores
           * together.
           */
          queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b })

          /**
           * If we've already seen this term, field combo then we've already collected
           * the matching documents and metadata, no need to go through all that again
           */
          if (termFieldCache[termField]) {
            continue
          }

          for (var l = 0; l < matchingDocumentRefs.length; l++) {
            /*
             * All metadata for this term/field/document triple
             * are then extracted and collected into an instance
             * of lunr.MatchData ready to be returned in the query
             * results
             */
            var matchingDocumentRef = matchingDocumentRefs[l],
                matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field),
                metadata = fieldPosting[matchingDocumentRef],
                fieldMatch

            if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) {
              matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata)
            } else {
              fieldMatch.add(expandedTerm, field, metadata)
            }

          }

          termFieldCache[termField] = true
        }
      }
    }

    /**
     * If the presence was required we need to update the requiredMatches field sets.
     * We do this after all fields for the term have collected their matches because
     * the clause term's presence is required in _any_ of the fields not _all_ of the
     * fields.
     */
    if (clause.presence === lunr.Query.presence.REQUIRED) {
      for (var k = 0; k < clause.fields.length; k++) {
        var field = clause.fields[k]
        requiredMatches[field] = requiredMatches[field].intersect(clauseMatches)
      }
    }
  }

  /**
   * Need to combine the field scoped required and prohibited
   * matching documents into a global set of required and prohibited
   * matches
   */
  var allRequiredMatches = lunr.Set.complete,
      allProhibitedMatches = lunr.Set.empty

  for (var i = 0; i < this.fields.length; i++) {
    var field = this.fields[i]

    if (requiredMatches[field]) {
      allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field])
    }

    if (prohibitedMatches[field]) {
      allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field])
    }
  }

  var matchingFieldRefs = Object.keys(matchingFields),
      results = [],
      matches = Object.create(null)

  /*
   * If the query is negated (contains only prohibited terms)
   * we need to get _all_ fieldRefs currently existing in the
   * index. This is only done when we know that the query is
   * entirely prohibited terms to avoid any cost of getting all
   * fieldRefs unnecessarily.
   *
   * Additionally, blank MatchData must be created to correctly
   * populate the results.
   */
  if (query.isNegated()) {
    matchingFieldRefs = Object.keys(this.fieldVectors)

    for (var i = 0; i < matchingFieldRefs.length; i++) {
      var matchingFieldRef = matchingFieldRefs[i]
      var fieldRef = lunr.FieldRef.fromString(matchingFieldRef)
      matchingFields[matchingFieldRef] = new lunr.MatchData
    }
  }

  for (var i = 0; i < matchingFieldRefs.length; i++) {
    /*
     * Currently we have document fields that match the query, but we
     * need to return documents. The matchData and scores are combined
     * from multiple fields belonging to the same document.
     *
     * Scores are calculated by field, using the query vectors created
     * above, and combined into a final document score using addition.
- */ - var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]), - docRef = fieldRef.docRef - - if (!allRequiredMatches.contains(docRef)) { - continue - } - - if (allProhibitedMatches.contains(docRef)) { - continue - } - - var fieldVector = this.fieldVectors[fieldRef], - score = queryVectors[fieldRef.fieldName].similarity(fieldVector), - docMatch - - if ((docMatch = matches[docRef]) !== undefined) { - docMatch.score += score - docMatch.matchData.combine(matchingFields[fieldRef]) - } else { - var match = { - ref: docRef, - score: score, - matchData: matchingFields[fieldRef] - } - matches[docRef] = match - results.push(match) - } - } - - /* - * Sort the results objects by score, highest first. - */ - return results.sort(function (a, b) { - return b.score - a.score - }) -} - -/** - * Prepares the index for JSON serialization. - * - * The schema for this JSON blob will be described in a - * separate JSON schema file. - * - * @returns {Object} - */ -lunr.Index.prototype.toJSON = function () { - var invertedIndex = Object.keys(this.invertedIndex) - .sort() - .map(function (term) { - return [term, this.invertedIndex[term]] - }, this) - - var fieldVectors = Object.keys(this.fieldVectors) - .map(function (ref) { - return [ref, this.fieldVectors[ref].toJSON()] - }, this) - - return { - version: lunr.version, - fields: this.fields, - fieldVectors: fieldVectors, - invertedIndex: invertedIndex, - pipeline: this.pipeline.toJSON() - } -} - -/** - * Loads a previously serialized lunr.Index - * - * @param {Object} serializedIndex - A previously serialized lunr.Index - * @returns {lunr.Index} - */ -lunr.Index.load = function (serializedIndex) { - var attrs = {}, - fieldVectors = {}, - serializedVectors = serializedIndex.fieldVectors, - invertedIndex = Object.create(null), - serializedInvertedIndex = serializedIndex.invertedIndex, - tokenSetBuilder = new lunr.TokenSet.Builder, - pipeline = lunr.Pipeline.load(serializedIndex.pipeline) - - if (serializedIndex.version != 
lunr.version) { - lunr.utils.warn("Version mismatch when loading serialised index. Current version of lunr '" + lunr.version + "' does not match serialized index '" + serializedIndex.version + "'") - } - - for (var i = 0; i < serializedVectors.length; i++) { - var tuple = serializedVectors[i], - ref = tuple[0], - elements = tuple[1] - - fieldVectors[ref] = new lunr.Vector(elements) - } - - for (var i = 0; i < serializedInvertedIndex.length; i++) { - var tuple = serializedInvertedIndex[i], - term = tuple[0], - posting = tuple[1] - - tokenSetBuilder.insert(term) - invertedIndex[term] = posting - } - - tokenSetBuilder.finish() - - attrs.fields = serializedIndex.fields - - attrs.fieldVectors = fieldVectors - attrs.invertedIndex = invertedIndex - attrs.tokenSet = tokenSetBuilder.root - attrs.pipeline = pipeline - - return new lunr.Index(attrs) -} -/*! - * lunr.Builder - * Copyright (C) 2019 Oliver Nightingale - */ - -/** - * lunr.Builder performs indexing on a set of documents and - * returns instances of lunr.Index ready for querying. - * - * All configuration of the index is done via the builder, the - * fields to index, the document reference, the text processing - * pipeline and document scoring parameters are all set on the - * builder before indexing. - * - * @constructor - * @property {string} _ref - Internal reference to the document reference field. - * @property {string[]} _fields - Internal reference to the document fields to index. - * @property {object} invertedIndex - The inverted index maps terms to document fields. - * @property {object} documentTermFrequencies - Keeps track of document term frequencies. - * @property {object} documentLengths - Keeps track of the length of documents added to the index. - * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing. - * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing. 
- * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index.
- * @property {number} documentCount - Keeps track of the total number of documents indexed.
- * @property {number} _b - A parameter to control field length normalization, setting this to 0 disables normalization, 1 fully normalizes field lengths, the default value is 0.75.
- * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2.
- * @property {number} termIndex - A counter incremented for each unique term, used to identify a term's position in the vector space.
- * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index.
- */
-lunr.Builder = function () {
-  this._ref = "id"
-  this._fields = Object.create(null)
-  this._documents = Object.create(null)
-  this.invertedIndex = Object.create(null)
-  this.fieldTermFrequencies = {}
-  this.fieldLengths = {}
-  this.tokenizer = lunr.tokenizer
-  this.pipeline = new lunr.Pipeline
-  this.searchPipeline = new lunr.Pipeline
-  this.documentCount = 0
-  this._b = 0.75
-  this._k1 = 1.2
-  this.termIndex = 0
-  this.metadataWhitelist = []
-}
-
-/**
- * Sets the document field used as the document reference. Every document must have this field.
- * The type of this field in the document should be a string; if it is not a string it will be
- * coerced into a string by calling toString.
- *
- * The default ref is 'id'.
- *
- * The ref should _not_ be changed during indexing; it should be set before any documents are
- * added to the index. Changing it during indexing can lead to inconsistent results.
- *
- * @param {string} ref - The name of the reference field in the document.
- */
-lunr.Builder.prototype.ref = function (ref) {
-  this._ref = ref
-}
-
-/**
- * A function that is used to extract a field from a document.
- * - * Lunr expects a field to be at the top level of a document, if however the field - * is deeply nested within a document an extractor function can be used to extract - * the right field for indexing. - * - * @callback fieldExtractor - * @param {object} doc - The document being added to the index. - * @returns {?(string|object|object[])} obj - The object that will be indexed for this field. - * @example Extracting a nested field - * function (doc) { return doc.nested.field } - */ - -/** - * Adds a field to the list of document fields that will be indexed. Every document being - * indexed should have this field. Null values for this field in indexed documents will - * not cause errors but will limit the chance of that document being retrieved by searches. - * - * All fields should be added before adding documents to the index. Adding fields after - * a document has been indexed will have no effect on already indexed documents. - * - * Fields can be boosted at build time. This allows terms within that field to have more - * importance when ranking search results. Use a field boost to specify that matches within - * one field are more important than other fields. - * - * @param {string} fieldName - The name of a field to index in all documents. - * @param {object} attributes - Optional attributes associated with this field. - * @param {number} [attributes.boost=1] - Boost applied to all terms within this field. - * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document. - * @throws {RangeError} fieldName cannot contain unsupported characters '/' - */ -lunr.Builder.prototype.field = function (fieldName, attributes) { - if (/\//.test(fieldName)) { - throw new RangeError ("Field '" + fieldName + "' contains illegal character '/'") - } - - this._fields[fieldName] = attributes || {} -} - -/** - * A parameter to tune the amount of field length normalisation that is applied when - * calculating relevance scores. 
A value of 0 will completely disable any normalisation - * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b - * will be clamped to the range 0 - 1. - * - * @param {number} number - The value to set for this tuning parameter. - */ -lunr.Builder.prototype.b = function (number) { - if (number < 0) { - this._b = 0 - } else if (number > 1) { - this._b = 1 - } else { - this._b = number - } -} - -/** - * A parameter that controls the speed at which a rise in term frequency results in term - * frequency saturation. The default value is 1.2. Setting this to a higher value will give - * slower saturation levels, a lower value will result in quicker saturation. - * - * @param {number} number - The value to set for this tuning parameter. - */ -lunr.Builder.prototype.k1 = function (number) { - this._k1 = number -} - -/** - * Adds a document to the index. - * - * Before adding fields to the index the index should have been fully setup, with the document - * ref and all fields to index already having been specified. - * - * The document must have a field name as specified by the ref (by default this is 'id') and - * it should have all fields defined for indexing, though null or undefined values will not - * cause errors. - * - * Entire documents can be boosted at build time. Applying a boost to a document indicates that - * this document should rank higher in search results than other documents. - * - * @param {object} doc - The document to add to the index. - * @param {object} attributes - Optional attributes associated with this document. - * @param {number} [attributes.boost=1] - Boost applied to all terms within this document. 
- */ -lunr.Builder.prototype.add = function (doc, attributes) { - var docRef = doc[this._ref], - fields = Object.keys(this._fields) - - this._documents[docRef] = attributes || {} - this.documentCount += 1 - - for (var i = 0; i < fields.length; i++) { - var fieldName = fields[i], - extractor = this._fields[fieldName].extractor, - field = extractor ? extractor(doc) : doc[fieldName], - tokens = this.tokenizer(field, { - fields: [fieldName] - }), - terms = this.pipeline.run(tokens), - fieldRef = new lunr.FieldRef (docRef, fieldName), - fieldTerms = Object.create(null) - - this.fieldTermFrequencies[fieldRef] = fieldTerms - this.fieldLengths[fieldRef] = 0 - - // store the length of this field for this document - this.fieldLengths[fieldRef] += terms.length - - // calculate term frequencies for this field - for (var j = 0; j < terms.length; j++) { - var term = terms[j] - - if (fieldTerms[term] == undefined) { - fieldTerms[term] = 0 - } - - fieldTerms[term] += 1 - - // add to inverted index - // create an initial posting if one doesn't exist - if (this.invertedIndex[term] == undefined) { - var posting = Object.create(null) - posting["_index"] = this.termIndex - this.termIndex += 1 - - for (var k = 0; k < fields.length; k++) { - posting[fields[k]] = Object.create(null) - } - - this.invertedIndex[term] = posting - } - - // add an entry for this term/fieldName/docRef to the invertedIndex - if (this.invertedIndex[term][fieldName][docRef] == undefined) { - this.invertedIndex[term][fieldName][docRef] = Object.create(null) - } - - // store all whitelisted metadata about this token in the - // inverted index - for (var l = 0; l < this.metadataWhitelist.length; l++) { - var metadataKey = this.metadataWhitelist[l], - metadata = term.metadata[metadataKey] - - if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) { - this.invertedIndex[term][fieldName][docRef][metadataKey] = [] - } - - this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata) - } - } 
- - } -} - -/** - * Calculates the average document length for this index - * - * @private - */ -lunr.Builder.prototype.calculateAverageFieldLengths = function () { - - var fieldRefs = Object.keys(this.fieldLengths), - numberOfFields = fieldRefs.length, - accumulator = {}, - documentsWithField = {} - - for (var i = 0; i < numberOfFields; i++) { - var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), - field = fieldRef.fieldName - - documentsWithField[field] || (documentsWithField[field] = 0) - documentsWithField[field] += 1 - - accumulator[field] || (accumulator[field] = 0) - accumulator[field] += this.fieldLengths[fieldRef] - } - - var fields = Object.keys(this._fields) - - for (var i = 0; i < fields.length; i++) { - var fieldName = fields[i] - accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName] - } - - this.averageFieldLength = accumulator -} - -/** - * Builds a vector space model of every document using lunr.Vector - * - * @private - */ -lunr.Builder.prototype.createFieldVectors = function () { - var fieldVectors = {}, - fieldRefs = Object.keys(this.fieldTermFrequencies), - fieldRefsLength = fieldRefs.length, - termIdfCache = Object.create(null) - - for (var i = 0; i < fieldRefsLength; i++) { - var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), - fieldName = fieldRef.fieldName, - fieldLength = this.fieldLengths[fieldRef], - fieldVector = new lunr.Vector, - termFrequencies = this.fieldTermFrequencies[fieldRef], - terms = Object.keys(termFrequencies), - termsLength = terms.length - - - var fieldBoost = this._fields[fieldName].boost || 1, - docBoost = this._documents[fieldRef.docRef].boost || 1 - - for (var j = 0; j < termsLength; j++) { - var term = terms[j], - tf = termFrequencies[term], - termIndex = this.invertedIndex[term]._index, - idf, score, scoreWithPrecision - - if (termIdfCache[term] === undefined) { - idf = lunr.idf(this.invertedIndex[term], this.documentCount) - termIdfCache[term] = idf - } else { - idf = 
termIdfCache[term] - } - - score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf) - score *= fieldBoost - score *= docBoost - scoreWithPrecision = Math.round(score * 1000) / 1000 - // Converts 1.23456789 to 1.234. - // Reducing the precision so that the vectors take up less - // space when serialised. Doing it now so that they behave - // the same before and after serialisation. Also, this is - // the fastest approach to reducing a number's precision in - // JavaScript. - - fieldVector.insert(termIndex, scoreWithPrecision) - } - - fieldVectors[fieldRef] = fieldVector - } - - this.fieldVectors = fieldVectors -} - -/** - * Creates a token set of all tokens in the index using lunr.TokenSet - * - * @private - */ -lunr.Builder.prototype.createTokenSet = function () { - this.tokenSet = lunr.TokenSet.fromArray( - Object.keys(this.invertedIndex).sort() - ) -} - -/** - * Builds the index, creating an instance of lunr.Index. - * - * This completes the indexing process and should only be called - * once all documents have been added to the index. - * - * @returns {lunr.Index} - */ -lunr.Builder.prototype.build = function () { - this.calculateAverageFieldLengths() - this.createFieldVectors() - this.createTokenSet() - - return new lunr.Index({ - invertedIndex: this.invertedIndex, - fieldVectors: this.fieldVectors, - tokenSet: this.tokenSet, - fields: Object.keys(this._fields), - pipeline: this.searchPipeline - }) -} - -/** - * Applies a plugin to the index builder. - * - * A plugin is a function that is called with the index builder as its context. - * Plugins can be used to customise or extend the behaviour of the index - * in some way. A plugin is just a function, that encapsulated the custom - * behaviour that should be applied when building the index. - * - * The plugin function will be called with the index builder as its argument, additional - * arguments can also be passed when calling use. 
The function will be called
- * with the index builder as its context.
- *
- * @param {Function} plugin The plugin to apply.
- */
-lunr.Builder.prototype.use = function (fn) {
-  var args = Array.prototype.slice.call(arguments, 1)
-  args.unshift(this)
-  fn.apply(this, args)
-}
-/**
- * Contains and collects metadata about a matching document.
- * A single instance of lunr.MatchData is returned as part of every
- * lunr.Index~Result.
- *
- * @constructor
- * @param {string} term - The term this match data is associated with
- * @param {string} field - The field in which the term was found
- * @param {object} metadata - The metadata recorded about this term in this field
- * @property {object} metadata - A cloned collection of metadata associated with this document.
- * @see {@link lunr.Index~Result}
- */
-lunr.MatchData = function (term, field, metadata) {
-  var clonedMetadata = Object.create(null),
-      metadataKeys = Object.keys(metadata || {})
-
-  // Cloning the metadata to prevent the original
-  // being mutated during match data combination.
-  // Metadata is kept in an array within the inverted
-  // index so cloning the data can be done with
-  // Array#slice
-  for (var i = 0; i < metadataKeys.length; i++) {
-    var key = metadataKeys[i]
-    clonedMetadata[key] = metadata[key].slice()
-  }
-
-  this.metadata = Object.create(null)
-
-  if (term !== undefined) {
-    this.metadata[term] = Object.create(null)
-    this.metadata[term][field] = clonedMetadata
-  }
-}
-
-/**
- * An instance of lunr.MatchData will be created for every term that matches a
- * document. However only one instance is required in a lunr.Index~Result. This
- * method combines metadata from another instance of lunr.MatchData with this
- * object's metadata.
- *
- * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one.
- * @see {@link lunr.Index~Result} - */ -lunr.MatchData.prototype.combine = function (otherMatchData) { - var terms = Object.keys(otherMatchData.metadata) - - for (var i = 0; i < terms.length; i++) { - var term = terms[i], - fields = Object.keys(otherMatchData.metadata[term]) - - if (this.metadata[term] == undefined) { - this.metadata[term] = Object.create(null) - } - - for (var j = 0; j < fields.length; j++) { - var field = fields[j], - keys = Object.keys(otherMatchData.metadata[term][field]) - - if (this.metadata[term][field] == undefined) { - this.metadata[term][field] = Object.create(null) - } - - for (var k = 0; k < keys.length; k++) { - var key = keys[k] - - if (this.metadata[term][field][key] == undefined) { - this.metadata[term][field][key] = otherMatchData.metadata[term][field][key] - } else { - this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key]) - } - - } - } - } -} - -/** - * Add metadata for a term/field pair to this instance of match data. 
- * - * @param {string} term - The term this match data is associated with - * @param {string} field - The field in which the term was found - * @param {object} metadata - The metadata recorded about this term in this field - */ -lunr.MatchData.prototype.add = function (term, field, metadata) { - if (!(term in this.metadata)) { - this.metadata[term] = Object.create(null) - this.metadata[term][field] = metadata - return - } - - if (!(field in this.metadata[term])) { - this.metadata[term][field] = metadata - return - } - - var metadataKeys = Object.keys(metadata) - - for (var i = 0; i < metadataKeys.length; i++) { - var key = metadataKeys[i] - - if (key in this.metadata[term][field]) { - this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key]) - } else { - this.metadata[term][field][key] = metadata[key] - } - } -} -/** - * A lunr.Query provides a programmatic way of defining queries to be performed - * against a {@link lunr.Index}. - * - * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method - * so the query object is pre-initialized with the right index fields. - * - * @constructor - * @property {lunr.Query~Clause[]} clauses - An array of query clauses. - * @property {string[]} allFields - An array of all available fields in a lunr.Index. - */ -lunr.Query = function (allFields) { - this.clauses = [] - this.allFields = allFields -} - -/** - * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause. - * - * This allows wildcards to be added to the beginning and end of a term without having to manually do any string - * concatenation. - * - * The wildcard constants can be bitwise combined to select both leading and trailing wildcards. 
- * - * @constant - * @default - * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour - * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists - * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists - * @see lunr.Query~Clause - * @see lunr.Query#clause - * @see lunr.Query#term - * @example query term with trailing wildcard - * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING }) - * @example query term with leading and trailing wildcard - * query.term('foo', { - * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING - * }) - */ - -lunr.Query.wildcard = new String ("*") -lunr.Query.wildcard.NONE = 0 -lunr.Query.wildcard.LEADING = 1 -lunr.Query.wildcard.TRAILING = 2 - -/** - * Constants for indicating what kind of presence a term must have in matching documents. - * - * @constant - * @enum {number} - * @see lunr.Query~Clause - * @see lunr.Query#clause - * @see lunr.Query#term - * @example query term with required presence - * query.term('foo', { presence: lunr.Query.presence.REQUIRED }) - */ -lunr.Query.presence = { - /** - * Term's presence in a document is optional, this is the default value. - */ - OPTIONAL: 1, - - /** - * Term's presence in a document is required, documents that do not contain - * this term will not be returned. - */ - REQUIRED: 2, - - /** - * Term's presence in a document is prohibited, documents that do contain - * this term will not be returned. - */ - PROHIBITED: 3 -} - -/** - * A single clause in a {@link lunr.Query} contains a term and details on how to - * match that term against a {@link lunr.Index}. - * - * @typedef {Object} lunr.Query~Clause - * @property {string[]} fields - The fields in an index this clause should be matched against. - * @property {number} [boost=1] - Any boost that should be applied when matching this clause. 
- * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be.
- * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline.
- * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended.
- * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The term's presence in any matching documents.
- */
-
-/**
- * Adds a {@link lunr.Query~Clause} to this query.
- *
- * Unless the clause contains the fields to be matched, all fields will be matched. In addition
- * a default boost of 1 is applied to the clause.
- *
- * @param {lunr.Query~Clause} clause - The clause to add to this query.
- * @see lunr.Query~Clause
- * @returns {lunr.Query}
- */
-lunr.Query.prototype.clause = function (clause) {
-  if (!('fields' in clause)) {
-    clause.fields = this.allFields
-  }
-
-  if (!('boost' in clause)) {
-    clause.boost = 1
-  }
-
-  if (!('usePipeline' in clause)) {
-    clause.usePipeline = true
-  }
-
-  if (!('wildcard' in clause)) {
-    clause.wildcard = lunr.Query.wildcard.NONE
-  }
-
-  if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) {
-    clause.term = "*" + clause.term
-  }
-
-  if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) {
-    clause.term = "" + clause.term + "*"
-  }
-
-  if (!('presence' in clause)) {
-    clause.presence = lunr.Query.presence.OPTIONAL
-  }
-
-  this.clauses.push(clause)
-
-  return this
-}
-
-/**
- * A negated query is one in which every clause has a presence of
- * prohibited. These queries require some special processing to return
- * the expected results.
- * - * @returns boolean - */ -lunr.Query.prototype.isNegated = function () { - for (var i = 0; i < this.clauses.length; i++) { - if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) { - return false - } - } - - return true -} - -/** - * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause} - * to the list of clauses that make up this query. - * - * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion - * to a token or token-like string should be done before calling this method. - * - * The term will be converted to a string by calling `toString`. Multiple terms can be passed as an - * array, each term in the array will share the same options. - * - * @param {object|object[]} term - The term(s) to add to the query. - * @param {object} [options] - Any additional properties to add to the query clause. - * @returns {lunr.Query} - * @see lunr.Query#clause - * @see lunr.Query~Clause - * @example adding a single term to a query - * query.term("foo") - * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard - * query.term("foo", { - * fields: ["title"], - * boost: 10, - * wildcard: lunr.Query.wildcard.TRAILING - * }) - * @example using lunr.tokenizer to convert a string to tokens before using them as terms - * query.term(lunr.tokenizer("foo bar")) - */ -lunr.Query.prototype.term = function (term, options) { - if (Array.isArray(term)) { - term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this) - return this - } - - var clause = options || {} - clause.term = term.toString() - - this.clause(clause) - - return this -} -lunr.QueryParseError = function (message, start, end) { - this.name = "QueryParseError" - this.message = message - this.start = start - this.end = end -} - -lunr.QueryParseError.prototype = new Error -lunr.QueryLexer = function (str) { - this.lexemes = [] - this.str = str - this.length 
= str.length - this.pos = 0 - this.start = 0 - this.escapeCharPositions = [] -} - -lunr.QueryLexer.prototype.run = function () { - var state = lunr.QueryLexer.lexText - - while (state) { - state = state(this) - } -} - -lunr.QueryLexer.prototype.sliceString = function () { - var subSlices = [], - sliceStart = this.start, - sliceEnd = this.pos - - for (var i = 0; i < this.escapeCharPositions.length; i++) { - sliceEnd = this.escapeCharPositions[i] - subSlices.push(this.str.slice(sliceStart, sliceEnd)) - sliceStart = sliceEnd + 1 - } - - subSlices.push(this.str.slice(sliceStart, this.pos)) - this.escapeCharPositions.length = 0 - - return subSlices.join('') -} - -lunr.QueryLexer.prototype.emit = function (type) { - this.lexemes.push({ - type: type, - str: this.sliceString(), - start: this.start, - end: this.pos - }) - - this.start = this.pos -} - -lunr.QueryLexer.prototype.escapeCharacter = function () { - this.escapeCharPositions.push(this.pos - 1) - this.pos += 1 -} - -lunr.QueryLexer.prototype.next = function () { - if (this.pos >= this.length) { - return lunr.QueryLexer.EOS - } - - var char = this.str.charAt(this.pos) - this.pos += 1 - return char -} - -lunr.QueryLexer.prototype.width = function () { - return this.pos - this.start -} - -lunr.QueryLexer.prototype.ignore = function () { - if (this.start == this.pos) { - this.pos += 1 - } - - this.start = this.pos -} - -lunr.QueryLexer.prototype.backup = function () { - this.pos -= 1 -} - -lunr.QueryLexer.prototype.acceptDigitRun = function () { - var char, charCode - - do { - char = this.next() - charCode = char.charCodeAt(0) - } while (charCode > 47 && charCode < 58) - - if (char != lunr.QueryLexer.EOS) { - this.backup() - } -} - -lunr.QueryLexer.prototype.more = function () { - return this.pos < this.length -} - -lunr.QueryLexer.EOS = 'EOS' -lunr.QueryLexer.FIELD = 'FIELD' -lunr.QueryLexer.TERM = 'TERM' -lunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE' -lunr.QueryLexer.BOOST = 'BOOST' -lunr.QueryLexer.PRESENCE = 
'PRESENCE' - -lunr.QueryLexer.lexField = function (lexer) { - lexer.backup() - lexer.emit(lunr.QueryLexer.FIELD) - lexer.ignore() - return lunr.QueryLexer.lexText -} - -lunr.QueryLexer.lexTerm = function (lexer) { - if (lexer.width() > 1) { - lexer.backup() - lexer.emit(lunr.QueryLexer.TERM) - } - - lexer.ignore() - - if (lexer.more()) { - return lunr.QueryLexer.lexText - } -} - -lunr.QueryLexer.lexEditDistance = function (lexer) { - lexer.ignore() - lexer.acceptDigitRun() - lexer.emit(lunr.QueryLexer.EDIT_DISTANCE) - return lunr.QueryLexer.lexText -} - -lunr.QueryLexer.lexBoost = function (lexer) { - lexer.ignore() - lexer.acceptDigitRun() - lexer.emit(lunr.QueryLexer.BOOST) - return lunr.QueryLexer.lexText -} - -lunr.QueryLexer.lexEOS = function (lexer) { - if (lexer.width() > 0) { - lexer.emit(lunr.QueryLexer.TERM) - } -} - -// This matches the separator used when tokenising fields -// within a document. These should match otherwise it is -// not possible to search for some tokens within a document. -// -// It is possible for the user to change the separator on the -// tokenizer so it _might_ clash with any other of the special -// characters already used within the search string, e.g. :. -// -// This means that it is possible to change the separator in -// such a way that makes some words unsearchable using a search -// string. 
-lunr.QueryLexer.termSeparator = lunr.tokenizer.separator - -lunr.QueryLexer.lexText = function (lexer) { - while (true) { - var char = lexer.next() - - if (char == lunr.QueryLexer.EOS) { - return lunr.QueryLexer.lexEOS - } - - // Escape character is '\' - if (char.charCodeAt(0) == 92) { - lexer.escapeCharacter() - continue - } - - if (char == ":") { - return lunr.QueryLexer.lexField - } - - if (char == "~") { - lexer.backup() - if (lexer.width() > 0) { - lexer.emit(lunr.QueryLexer.TERM) - } - return lunr.QueryLexer.lexEditDistance - } - - if (char == "^") { - lexer.backup() - if (lexer.width() > 0) { - lexer.emit(lunr.QueryLexer.TERM) - } - return lunr.QueryLexer.lexBoost - } - - // "+" indicates term presence is required - // checking for length to ensure that only - // leading "+" are considered - if (char == "+" && lexer.width() === 1) { - lexer.emit(lunr.QueryLexer.PRESENCE) - return lunr.QueryLexer.lexText - } - - // "-" indicates term presence is prohibited - // checking for length to ensure that only - // leading "-" are considered - if (char == "-" && lexer.width() === 1) { - lexer.emit(lunr.QueryLexer.PRESENCE) - return lunr.QueryLexer.lexText - } - - if (char.match(lunr.QueryLexer.termSeparator)) { - return lunr.QueryLexer.lexTerm - } - } -} - -lunr.QueryParser = function (str, query) { - this.lexer = new lunr.QueryLexer (str) - this.query = query - this.currentClause = {} - this.lexemeIdx = 0 -} - -lunr.QueryParser.prototype.parse = function () { - this.lexer.run() - this.lexemes = this.lexer.lexemes - - var state = lunr.QueryParser.parseClause - - while (state) { - state = state(this) - } - - return this.query -} - -lunr.QueryParser.prototype.peekLexeme = function () { - return this.lexemes[this.lexemeIdx] -} - -lunr.QueryParser.prototype.consumeLexeme = function () { - var lexeme = this.peekLexeme() - this.lexemeIdx += 1 - return lexeme -} - -lunr.QueryParser.prototype.nextClause = function () { - var completedClause = this.currentClause - 
this.query.clause(completedClause) - this.currentClause = {} -} - -lunr.QueryParser.parseClause = function (parser) { - var lexeme = parser.peekLexeme() - - if (lexeme == undefined) { - return - } - - switch (lexeme.type) { - case lunr.QueryLexer.PRESENCE: - return lunr.QueryParser.parsePresence - case lunr.QueryLexer.FIELD: - return lunr.QueryParser.parseField - case lunr.QueryLexer.TERM: - return lunr.QueryParser.parseTerm - default: - var errorMessage = "expected either a field or a term, found " + lexeme.type - - if (lexeme.str.length >= 1) { - errorMessage += " with value '" + lexeme.str + "'" - } - - throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) - } -} - -lunr.QueryParser.parsePresence = function (parser) { - var lexeme = parser.consumeLexeme() - - if (lexeme == undefined) { - return - } - - switch (lexeme.str) { - case "-": - parser.currentClause.presence = lunr.Query.presence.PROHIBITED - break - case "+": - parser.currentClause.presence = lunr.Query.presence.REQUIRED - break - default: - var errorMessage = "unrecognised presence operator'" + lexeme.str + "'" - throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) - } - - var nextLexeme = parser.peekLexeme() - - if (nextLexeme == undefined) { - var errorMessage = "expecting term or field, found nothing" - throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) - } - - switch (nextLexeme.type) { - case lunr.QueryLexer.FIELD: - return lunr.QueryParser.parseField - case lunr.QueryLexer.TERM: - return lunr.QueryParser.parseTerm - default: - var errorMessage = "expecting term or field, found '" + nextLexeme.type + "'" - throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) - } -} - -lunr.QueryParser.parseField = function (parser) { - var lexeme = parser.consumeLexeme() - - if (lexeme == undefined) { - return - } - - if (parser.query.allFields.indexOf(lexeme.str) == -1) { - var possibleFields = 
parser.query.allFields.map(function (f) { return "'" + f + "'" }).join(', '), - errorMessage = "unrecognised field '" + lexeme.str + "', possible fields: " + possibleFields - - throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) - } - - parser.currentClause.fields = [lexeme.str] - - var nextLexeme = parser.peekLexeme() - - if (nextLexeme == undefined) { - var errorMessage = "expecting term, found nothing" - throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) - } - - switch (nextLexeme.type) { - case lunr.QueryLexer.TERM: - return lunr.QueryParser.parseTerm - default: - var errorMessage = "expecting term, found '" + nextLexeme.type + "'" - throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) - } -} - -lunr.QueryParser.parseTerm = function (parser) { - var lexeme = parser.consumeLexeme() - - if (lexeme == undefined) { - return - } - - parser.currentClause.term = lexeme.str.toLowerCase() - - if (lexeme.str.indexOf("*") != -1) { - parser.currentClause.usePipeline = false - } - - var nextLexeme = parser.peekLexeme() - - if (nextLexeme == undefined) { - parser.nextClause() - return - } - - switch (nextLexeme.type) { - case lunr.QueryLexer.TERM: - parser.nextClause() - return lunr.QueryParser.parseTerm - case lunr.QueryLexer.FIELD: - parser.nextClause() - return lunr.QueryParser.parseField - case lunr.QueryLexer.EDIT_DISTANCE: - return lunr.QueryParser.parseEditDistance - case lunr.QueryLexer.BOOST: - return lunr.QueryParser.parseBoost - case lunr.QueryLexer.PRESENCE: - parser.nextClause() - return lunr.QueryParser.parsePresence - default: - var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" - throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) - } -} - -lunr.QueryParser.parseEditDistance = function (parser) { - var lexeme = parser.consumeLexeme() - - if (lexeme == undefined) { - return - } - - var editDistance = parseInt(lexeme.str, 10) - - if 
(isNaN(editDistance)) { - var errorMessage = "edit distance must be numeric" - throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) - } - - parser.currentClause.editDistance = editDistance - - var nextLexeme = parser.peekLexeme() - - if (nextLexeme == undefined) { - parser.nextClause() - return - } - - switch (nextLexeme.type) { - case lunr.QueryLexer.TERM: - parser.nextClause() - return lunr.QueryParser.parseTerm - case lunr.QueryLexer.FIELD: - parser.nextClause() - return lunr.QueryParser.parseField - case lunr.QueryLexer.EDIT_DISTANCE: - return lunr.QueryParser.parseEditDistance - case lunr.QueryLexer.BOOST: - return lunr.QueryParser.parseBoost - case lunr.QueryLexer.PRESENCE: - parser.nextClause() - return lunr.QueryParser.parsePresence - default: - var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" - throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) - } -} - -lunr.QueryParser.parseBoost = function (parser) { - var lexeme = parser.consumeLexeme() - - if (lexeme == undefined) { - return - } - - var boost = parseInt(lexeme.str, 10) - - if (isNaN(boost)) { - var errorMessage = "boost must be numeric" - throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) - } - - parser.currentClause.boost = boost - - var nextLexeme = parser.peekLexeme() - - if (nextLexeme == undefined) { - parser.nextClause() - return - } - - switch (nextLexeme.type) { - case lunr.QueryLexer.TERM: - parser.nextClause() - return lunr.QueryParser.parseTerm - case lunr.QueryLexer.FIELD: - parser.nextClause() - return lunr.QueryParser.parseField - case lunr.QueryLexer.EDIT_DISTANCE: - return lunr.QueryParser.parseEditDistance - case lunr.QueryLexer.BOOST: - return lunr.QueryParser.parseBoost - case lunr.QueryLexer.PRESENCE: - parser.nextClause() - return lunr.QueryParser.parsePresence - default: - var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" - throw new lunr.QueryParseError (errorMessage, 
nextLexeme.start, nextLexeme.end) - } -} - - /** - * export the module via AMD, CommonJS or as a browser global - * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js - */ - ;(function (root, factory) { - if (typeof define === 'function' && define.amd) { - // AMD. Register as an anonymous module. - define(factory) - } else if (typeof exports === 'object') { - /** - * Node. Does not work with strict CommonJS, but - * only CommonJS-like enviroments that support module.exports, - * like Node. - */ - module.exports = factory() - } else { - // Browser globals (root is window) - root.lunr = factory() - } - }(this, function () { - /** - * Just return a value to define the module export. - * This example returns an object, but the module - * can return a function as the exported value. - */ - return lunr - })) -})(); diff --git a/docs-antora/ui/ui-lunr/js/vendor/search.js b/docs-antora/ui/ui-lunr/js/vendor/search.js deleted file mode 100644 index fcf4046150..0000000000 --- a/docs-antora/ui/ui-lunr/js/vendor/search.js +++ /dev/null @@ -1,212 +0,0 @@ -/* eslint-env browser */ -window.antoraLunr = (function (lunr) { - var searchInput = document.getElementById('search-input') - var searchResult = document.createElement('div') - searchResult.classList.add('search-result-dropdown-menu') - searchInput.parentNode.appendChild(searchResult) - - function highlightText (doc, position) { - var hits = [] - var start = position[0] - var length = position[1] - - var text = doc.text - var highlightSpan = document.createElement('span') - highlightSpan.classList.add('search-result-highlight') - highlightSpan.innerText = text.substr(start, length) - - var end = start + length - var textEnd = text.length - 1 - var contextOffset = 15 - var contextAfter = end + contextOffset > textEnd ? textEnd : end + contextOffset - var contextBefore = start - contextOffset < 0 ? 
0 : start - contextOffset - if (start === 0 && end === textEnd) { - hits.push(highlightSpan) - } else if (start === 0) { - hits.push(highlightSpan) - hits.push(document.createTextNode(text.substr(end, contextAfter))) - } else if (end === textEnd) { - hits.push(document.createTextNode(text.substr(0, start))) - hits.push(highlightSpan) - } else { - hits.push(document.createTextNode('...' + text.substr(contextBefore, start - contextBefore))) - hits.push(highlightSpan) - hits.push(document.createTextNode(text.substr(end, contextAfter - end) + '...')) - } - return hits - } - - function highlightTitle (hash, doc, position) { - var hits = [] - var start = position[0] - var length = position[1] - - var highlightSpan = document.createElement('span') - highlightSpan.classList.add('search-result-highlight') - var title - if (hash) { - title = doc.titles.filter(function (item) { - return item.id === hash - })[0].text - } else { - title = doc.title - } - highlightSpan.innerText = title.substr(start, length) - - var end = start + length - var titleEnd = title.length - 1 - if (start === 0 && end === titleEnd) { - hits.push(highlightSpan) - } else if (start === 0) { - hits.push(highlightSpan) - hits.push(document.createTextNode(title.substr(length, titleEnd))) - } else if (end === titleEnd) { - hits.push(document.createTextNode(title.substr(0, start))) - hits.push(highlightSpan) - } else { - hits.push(document.createTextNode(title.substr(0, start))) - hits.push(highlightSpan) - hits.push(document.createTextNode(title.substr(end, titleEnd))) - } - return hits - } - - function highlightHit (metadata, hash, doc) { - var hits = [] - for (var token in metadata) { - var fields = metadata[token] - for (var field in fields) { - var positions = fields[field] - if (positions.position) { - var position = positions.position[0] // only higlight the first match - if (field === 'title') { - hits = highlightTitle(hash, doc, position) - } else if (field === 'text') { - hits = highlightText(doc, 
position) - } - } - } - } - return hits - } - - function createSearchResult(result, store, searchResultDataset) { - result.forEach(function (item) { - var url = item.ref - var hash - if (url.includes('#')) { - hash = url.substring(url.indexOf('#') + 1) - url = url.replace('#' + hash, '') - } - var doc = store[url] - var metadata = item.matchData.metadata - var hits = highlightHit(metadata, hash, doc) - searchResultDataset.appendChild(createSearchResultItem(doc, item, hits)) - }) - } - - function createSearchResultItem (doc, item, hits) { - var documentTitle = document.createElement('div') - documentTitle.classList.add('search-result-document-title') - documentTitle.innerText = doc.title - var documentHit = document.createElement('div') - documentHit.classList.add('search-result-document-hit') - var documentHitLink = document.createElement('a') - var rootPath = window.antora.basePath - documentHitLink.href = rootPath + item.ref - documentHit.appendChild(documentHitLink) - hits.forEach(function (hit) { - documentHitLink.appendChild(hit) - }) - var searchResultItem = document.createElement('div') - searchResultItem.classList.add('search-result-item') - searchResultItem.appendChild(documentTitle) - searchResultItem.appendChild(documentHit) - searchResultItem.addEventListener('mousedown', function (e) { - e.preventDefault() - }) - return searchResultItem - } - - function createNoResult (text) { - var searchResultItem = document.createElement('div') - searchResultItem.classList.add('search-result-item') - var documentHit = document.createElement('div') - documentHit.classList.add('search-result-document-hit') - var message = document.createElement('strong') - message.innerText = 'No results found for query "' + text + '"' - documentHit.appendChild(message) - searchResultItem.appendChild(documentHit) - return searchResultItem - } - - function search (index, text) { - // execute an exact match search - var result = index.search(text) - if (result.length > 0) { - return 
result - } - // no result, use a begins with search - result = index.search(text + '*') - if (result.length > 0) { - return result - } - // no result, use a contains search - result = index.search('*' + text + '*') - return result - } - - function searchIndex (index, store, text) { - // reset search result - while (searchResult.firstChild) { - searchResult.removeChild(searchResult.firstChild) - } - if (text.trim() === '') { - return - } - var result = search(index, text) - var searchResultDataset = document.createElement('div') - searchResultDataset.classList.add('search-result-dataset') - searchResult.appendChild(searchResultDataset) - if (result.length > 0) { - createSearchResult(result, store, searchResultDataset) - } else { - searchResultDataset.appendChild(createNoResult(text)) - } - } - - function debounce (func, wait, immediate) { - var timeout - return function () { - var context = this - var args = arguments - var later = function () { - timeout = null - if (!immediate) func.apply(context, args) - } - var callNow = immediate && !timeout - clearTimeout(timeout) - timeout = setTimeout(later, wait) - if (callNow) func.apply(context, args) - } - } - - function init (data) { - var index = Object.assign({index: lunr.Index.load(data.index), store: data.store}) - var search = debounce(function () { - searchIndex(index.index, index.store, searchInput.value) - }, 100) - searchInput.addEventListener('keydown', search) - - // this is prevented in case of mousedown attached to SearchResultItem - searchInput.addEventListener('blur', function (e) { - while (searchResult.firstChild) { - searchResult.removeChild(searchResult.firstChild) - } - }) - } - - return { - init: init, - } -})(window.lunr) diff --git a/docs-antora/ui/ui-lunr/partials/footer-scripts.hbs b/docs-antora/ui/ui-lunr/partials/footer-scripts.hbs deleted file mode 100644 index 7d4519ea08..0000000000 --- a/docs-antora/ui/ui-lunr/partials/footer-scripts.hbs +++ /dev/null @@ -1,12 +0,0 @@ - - -{{#if (eq 
env.DOCSEARCH_ENGINE 'lunr')}}
-
-
-
-{{/if}}
-
diff --git a/docs-antora/ui/ui-lunr/partials/head-meta.hbs b/docs-antora/ui/ui-lunr/partials/head-meta.hbs
deleted file mode 100644
index c9883c2baf..0000000000
--- a/docs-antora/ui/ui-lunr/partials/head-meta.hbs
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/docs/.gitignore b/docs/.gitignore
new file mode 100644
index 0000000000..504afef81f
--- /dev/null
+++ b/docs/.gitignore
@@ -0,0 +1,2 @@
+node_modules/
+package-lock.json
diff --git a/docs/README.adoc b/docs/README.adoc
new file mode 100644
index 0000000000..eac6cfb4c3
--- /dev/null
+++ b/docs/README.adoc
@@ -0,0 +1,188 @@
+= Antora Docs build procedure
+
+:idseparator: -
+
+== Using generate_docs.pl
+
+This tool performs all of the steps in "Doing it all manually" automatically. It requires some command-line arguments and a few prerequisites.
+
+=== Installing Node
+
+Be sure to have Node installed.
+
+See https://github.com/nvm-sh/nvm#installation-and-update[Installing Node]:
+
+[source,bash]
+----
+wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
+----
+
+=== Antora pre-reqs
+
+Once Node is installed, follow the Antora prerequisites.
+
+Summarized from https://docs.antora.org/antora/2.3/install/linux-requirements/[Antora pre-reqs]:
+
+[source,bash]
+----
+$ nvm install --lts
+----
+
+=== Install Ansible
+
+One piece of the puzzle currently uses Ansible, so install that as well.
+
+[source,bash]
+----
+$ sudo apt-get install ansible
+----
+
+=== Run generate_docs.pl
+
+This tool does the rest of the work.
You will be required to supply these things:
+
+[cols=2*]
+|===
+
+|base-url
+|[eg: http://examplesite.org]
+
+|tmp-space
+|[Writable path where we stage the Antora UI repo and the Antora HTML, eg: ../../tmp]
+
+|html-output
+|[Path where you want the generated HTML files to go, eg: /var/www/html]
+
+|antora-ui-repo
+|[git link to a repo, could be the community repo: https://gitlab.com/antora/antora-ui-default.git]
+
+|antora-version
+|[target version of Antora, eg: 2.1]
+
+|===
+
+Example:
+
+[source,bash]
+----
+$ cd Evergreen/docs-antora
+$ ./generate_docs.pl \
+--base-url http://examplesite.org/prod \
+--tmp-space ../../tmp \
+--html-output /var/www/html/prod \
+--antora-ui-repo https://git.evergreen-ils.org/eg-antora.git \
+--antora-version 2.3
+
+----
+
+NOTE: This tool will create two folders within the temp space folder path: "staging" and "antora-ui". These folders will be erased and re-created with each execution.
+
+
+
+== Doing it all manually
+
+[source,bash]
+----
+$ git clone git://git.evergreen-ils.org/working/Evergreen.git
+$ git clone git://git.evergreen-ils.org/eg-antora.git
+$ cd Evergreen
+$ git checkout collab/blake/LP1848524_antora_ize_docs
----
+
+First we have to install Antora.
+Summarized from
+https://docs.antora.org/antora/2.1/install/install-antora/
+
+[source,bash]
+----
+$ cd docs-antora
+# (we want to install into this directory as opposed to globally)
+$ npm i @antora/cli@2.1 @antora/site-generator-default@2.1
+----
+
+
+Now, install the UI prerequisites and build the UI bundle.
+Lifted from:
+https://docs.antora.org/antora-ui-default/set-up-project/
+
+[source,bash]
+----
+$ cd ../../eg-antora
+$ npm install
+$ npx gulp bundle
+----
+
+At this point you should find a file at:
+
+NOTE: build/ui-bundle.zip
+
+Now you can build the website.
But you may want to edit the file:
+
+NOTE: docs-antora/site.yml
+
+because the output folder for the website defaults to:
+
+NOTE: /var/www/html/prod
+
+and the default web URL is:
+
+NOTE: http://localhost/prod
+
+Build:
+
+[source,bash]
+----
+$ cd ../Evergreen/docs-antora
+$ antora site.yml
+----
+
+If all went well, you will have the site built in the output folder that was configured in site.yml!
+
+Interesting reading related to Antora, AsciiDoc, and Asciidoctor:
+
+NOTE: https://asciidoctor.org/docs/asciidoc-asciidoctor-diffs/
+
+NOTE: https://blog.anoff.io/2019-02-15-antora-first-steps/
+
+NOTE: https://owncloud.org/news/owncloud-docs-migrating-antora-pt-1-2/
+
+
+== Search stuff
+
+First you need to have Ansible installed.
+
+NOTE: If you want to manually edit the file, you don't need to install Ansible.
+
+[source,bash]
+----
+$ sudo apt-get -y install ansible
+----
+
+Now, let's run through the antora-lunr procedure:
+
+NOTE: Lifted from the base install notes in the https://github.com/Mogztter/antora-lunr[git repo]
+
+
+[source,bash]
+----
+$ ansible-playbook setup_lunr.yml
+
+----
+
+This should have edited the file node_modules/@antora/site-generator-default/lib/generate-site.js,
+as outlined in the git repo notes.
+
+Now, install the lunr bits (from the docs-antora folder):
+
+[source,bash]
+----
+$ npm i antora-lunr
+----
+
+And now you can re-generate the site, this time with the search box:
+
+[source,bash]
+----
+$ DOCSEARCH_ENABLED=true DOCSEARCH_ENGINE=lunr antora site.yml
+----
+
diff --git a/docs/antora.yml b/docs/antora.yml
new file mode 100644
index 0000000000..d1febd680f
--- /dev/null
+++ b/docs/antora.yml
@@ -0,0 +1,19 @@
+name: docs
+title: Evergreen docs
+version: 'latest'
+nav:
+- modules/ROOT/nav.adoc
+- modules/installation/nav.adoc
+- modules/admin_initial_setup/nav.adoc
+- modules/using_staff_client/nav.adoc
+- modules/sys_admin/nav.adoc
+- modules/local_admin/nav.adoc
+- modules/acquisitions/nav.adoc
+-
modules/cataloging/nav.adoc
+- modules/serials/nav.adoc
+- modules/circulation/nav.adoc
+- modules/reports/nav.adoc
+- modules/opac/nav.adoc
+- modules/development/nav.adoc
+- modules/api/nav.adoc
+- modules/appendix/nav.adoc
diff --git a/docs/check_docs_meta_title.sh b/docs/check_docs_meta_title.sh
new file mode 100644
index 0000000000..1ba0a3f8fd
--- /dev/null
+++ b/docs/check_docs_meta_title.sh
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+# This script will crawl a website and gather up the <title> of each page.
+# The results will land in out.csv.
+# This is a nice aid to help us find pages that do not have the "right" headings.
+
+wget --spider -r -l inf -w .25 -nc -nd $1 -R bmp,css,gif,ico,jpg,jpeg,js,mp3,mp4,pdf,png,PNG,JPG,swf,txt,xml,xls,zip 2>&1 | tee wglog
+
+rm out.csv
+cat wglog | grep '^--' | awk '{print $3}' | sort | uniq | while read url; do {
+
+printf "%s* Retrieving title for: %s$url%s " "$bldgreen" "$txtrst$txtbld" "$txtrst"
+printf ""${url}","`curl -# ${url} | sed -n -E 's!.*<title>(.*)</title>.*!\1!p'`" , " >> out.csv
+printf " "
+}; done
+
diff --git a/docs/generate_docs.pl b/docs/generate_docs.pl
new file mode 100644
index 0000000000..0ca827272e
--- /dev/null
+++ b/docs/generate_docs.pl
@@ -0,0 +1,233 @@
+#!/usr/bin/perl
+# ---------------------------------------------------------------
+# Copyright © 2020 MOBIUS
+# Blake Graham-Henderson
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+# ---------------------------------------------------------------
+
+
+use Getopt::Long;
+use Cwd;
+use File::Path;
+use Data::Dumper;
+
+my $base_url;
+my $tmp_space;
+my $html_output;
+my $antoraui_git;
+my $antora_version;
+my $help;
+
+
+
+GetOptions (
+"base-url=s" => \$base_url,
+"tmp-space=s" => \$tmp_space,
+"html-output=s" => \$html_output,
+"antora-ui-repo=s" => \$antoraui_git,
+"antora-version=s" => \$antora_version,
+"help" => \$help
+);
+
+sub help
+{
+    print < '.$file) or $ret=0;
+    binmode(OUTPUT, ":utf8");
+    print OUTPUT "$contents\n";
+    close(OUTPUT);
+    return $ret;
+}
+
+sub replace_yml
+{
+    my $replacement = shift;
+    my $yml_path = shift;
+    my $file = shift;
+    my @path = split(/\//,$yml_path);
+    my @lines = @{read_file($file)};
+    my $depth = 0;
+    my $ret = '';
+    while(@lines[0])
+    {
+        my $line = shift @lines;
+        if(@path[0])
+        {
+            my $preceed_space = $depth * 2;
+            my $exp = '\s{'.$preceed_space.'}';
+            $exp = '[^\s#]' if $preceed_space == 0;
+            # print "testing $exp\n";
+            if($line =~ m/^$exp.*/)
+            {
+                if($line =~ m/^\s*@path[0].*/)
+                {
+                    $depth++;
+                    if(!@path[1])
+                    {
+                        # print "replacing '$line'\n";
+                        my $t = @path[0];
+                        $line =~ s/^(.*?$t[^\s]*).*$/\1 $replacement/g;
+                        # print "now: '$line'\n";
+                    }
+                    shift @path;
+                }
+            }
+        }
+        $line =~ s/[\n\t]*$//g;
+        $ret .= "$line\n";
+    }
+
+    return $ret;
+}
+
+sub exec_system_cmd
+{
+    my $cmd = shift;
+    print "executing $cmd\n";
+    system($cmd) == 0
+      or die "system $cmd failed: $?";
+}
+
+sub read_file
+{
+    my $file = shift;
+    my $trys=0;
+    my $failed=0;
+    my @lines;
+    #print "Attempting open\n";
+    if(-e $file)
+    {
+        my $worked = open (inputfile, '< '. $file);
+        if(!$worked)
+        {
+            print "******************Failed to read file*************\n";
+        }
+        binmode(inputfile, ":utf8");
+        while (!(open (inputfile, '< '. $file)) && $trys<100)
+        {
+            print "Trying again attempt $trys\n";
+            $trys++;
+            sleep(1);
+        }
+        if($trys<100)
+        {
+            #print "Finally worked...
now reading\n";
+            @lines = <inputfile>;
+            close(inputfile);
+        }
+        else
+        {
+            print "Attempted $trys times. COULD NOT READ FILE: $file\n";
+        }
+        close(inputfile);
+    }
+    else
+    {
+        print "File does not exist: $file\n";
+    }
+    return \@lines;
+}
+
+exit;
\ No newline at end of file
diff --git a/docs/modules/ROOT/_attributes.adoc b/docs/modules/ROOT/_attributes.adoc
new file mode 100644
index 0000000000..dec438a296
--- /dev/null
+++ b/docs/modules/ROOT/_attributes.adoc
@@ -0,0 +1,4 @@
+:attachmentsdir: {moduledir}/assets/attachments
+:examplesdir: {moduledir}/examples
+:imagesdir: {moduledir}/assets/images
+:partialsdir: {moduledir}/pages/_partials
diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
new file mode 100644
index 0000000000..48447d9e3b
--- /dev/null
+++ b/docs/modules/ROOT/nav.adoc
@@ -0,0 +1,2 @@
+* xref:shared:about_this_documentation.adoc[Introduction]
+** xref:shared:about_evergreen.adoc[About Evergreen]
diff --git a/docs/modules/ROOT/pages/_attributes.adoc b/docs/modules/ROOT/pages/_attributes.adoc
new file mode 100644
index 0000000000..fb982443d7
--- /dev/null
+++ b/docs/modules/ROOT/pages/_attributes.adoc
@@ -0,0 +1,2 @@
+:moduledir: ..
+include::{moduledir}/_attributes.adoc[]
diff --git a/docs/modules/ROOT/pages/index.adoc b/docs/modules/ROOT/pages/index.adoc
new file mode 100644
index 0000000000..6105a13baf
--- /dev/null
+++ b/docs/modules/ROOT/pages/index.adoc
@@ -0,0 +1,16 @@
+= Evergreen Documentation
+ifndef::env-site,env-github[]
+include::_attributes.adoc[]
+endif::[]
+// Settings
+:idprefix:
+:idseparator: -
+
+
+== Topic Manuals ==
+
+Browse all the documentation topics using the left sidebar.
Or, try one of +these smaller topic manuals: + +xref:acq:ROOT:index.adoc[Acquisitions Manual] + diff --git a/docs/modules/acquisitions/_attributes.adoc b/docs/modules/acquisitions/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/acquisitions/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/acquisitions/assets/images/media/2_10_Lineitem_Paid.png b/docs/modules/acquisitions/assets/images/media/2_10_Lineitem_Paid.png new file mode 100644 index 0000000000..b5e9303017 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/2_10_Lineitem_Paid.png differ diff --git a/docs/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled2.jpg b/docs/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled2.jpg new file mode 100644 index 0000000000..f363c00988 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled2.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled4.jpg b/docs/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled4.jpg new file mode 100644 index 0000000000..3b5d647b01 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/2_7_Enhancements_to_Canceled4.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Electronic_invoicing1.jpg b/docs/modules/acquisitions/assets/images/media/Electronic_invoicing1.jpg new file mode 100644 index 0000000000..91923bd225 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Electronic_invoicing1.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice1.jpg b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice1.jpg new file mode 100644 
index 0000000000..120894cf1b Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice1.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice2.jpg b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice2.jpg new file mode 100644 index 0000000000..266610b8c2 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice2.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice4.jpg b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice4.jpg new file mode 100644 index 0000000000..60f7eb1043 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice4.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice5.jpg b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice5.jpg new file mode 100644 index 0000000000..bd9a45b7e5 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice5.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice7.jpg b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice7.jpg new file mode 100644 index 0000000000..77785d506b Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Receive_Items_From_an_Invoice7.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Return_to_line_item1.jpg b/docs/modules/acquisitions/assets/images/media/Return_to_line_item1.jpg new file mode 100644 index 0000000000..ff0209be72 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Return_to_line_item1.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice1.jpg 
b/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice1.jpg new file mode 100644 index 0000000000..3fe5aa3b93 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice1.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice2.jpg b/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice2.jpg new file mode 100644 index 0000000000..e4832f6817 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice2.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice3.jpg b/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice3.jpg new file mode 100644 index 0000000000..1ab71c6e59 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice3.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice5.jpg b/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice5.jpg new file mode 100644 index 0000000000..181c27bfce Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Search_for_line_items_from_an_invoice5.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions1.jpg b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions1.jpg new file mode 100644 index 0000000000..5081a23337 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions1.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions2.jpg b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions2.jpg new file mode 100644 index 0000000000..afc195b847 Binary files 
/dev/null and b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions2.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions3.jpg b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions3.jpg new file mode 100644 index 0000000000..1190fb1dd5 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions3.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions4.jpg b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions4.jpg new file mode 100644 index 0000000000..1b8d00fd3d Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions4.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions5.jpg b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions5.jpg new file mode 100644 index 0000000000..fdeb99c15c Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions5.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions6.jpg b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions6.jpg new file mode 100644 index 0000000000..90dde3c635 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Vandelay_Integration_into_Acquisitions6.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/Zero_Copies1.jpg b/docs/modules/acquisitions/assets/images/media/Zero_Copies1.jpg new file mode 100644 index 0000000000..b95610d131 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/Zero_Copies1.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_brief_record-2.png 
b/docs/modules/acquisitions/assets/images/media/acq_brief_record-2.png new file mode 100644 index 0000000000..264213ce31 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_brief_record-2.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_brief_record.png b/docs/modules/acquisitions/assets/images/media/acq_brief_record.png new file mode 100644 index 0000000000..25fdefe7ba Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_brief_record.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_invoice_blanket.png b/docs/modules/acquisitions/assets/images/media/acq_invoice_blanket.png new file mode 100644 index 0000000000..227727ee2b Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_invoice_blanket.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_invoice_link.png b/docs/modules/acquisitions/assets/images/media/acq_invoice_link.png new file mode 100644 index 0000000000..6bc7c1b3dd Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_invoice_link.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_invoice_view-2.png b/docs/modules/acquisitions/assets/images/media/acq_invoice_view-2.png new file mode 100644 index 0000000000..abac114908 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_invoice_view-2.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_invoice_view.png b/docs/modules/acquisitions/assets/images/media/acq_invoice_view.png new file mode 100644 index 0000000000..616380cee3 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_invoice_view.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_marc_search-2.png b/docs/modules/acquisitions/assets/images/media/acq_marc_search-2.png new file mode 100644 index 0000000000..f991a6d423 Binary files /dev/null and 
b/docs/modules/acquisitions/assets/images/media/acq_marc_search-2.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_marc_search.png b/docs/modules/acquisitions/assets/images/media/acq_marc_search.png new file mode 100644 index 0000000000..391ae435a2 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_marc_search.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_selection_clone.png b/docs/modules/acquisitions/assets/images/media/acq_selection_clone.png new file mode 100644 index 0000000000..d4183b5cb1 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_selection_clone.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_selection_create.png b/docs/modules/acquisitions/assets/images/media/acq_selection_create.png new file mode 100644 index 0000000000..f248d0ae9e Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_selection_create.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_selection_mark_ready.png b/docs/modules/acquisitions/assets/images/media/acq_selection_mark_ready.png new file mode 100644 index 0000000000..3eb558659e Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_selection_mark_ready.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_selection_merge.png b/docs/modules/acquisitions/assets/images/media/acq_selection_merge.png new file mode 100644 index 0000000000..e1f740c246 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_selection_merge.png differ diff --git a/docs/modules/acquisitions/assets/images/media/acq_upload_library_settings.png b/docs/modules/acquisitions/assets/images/media/acq_upload_library_settings.png new file mode 100644 index 0000000000..f33d348843 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_upload_library_settings.png differ diff --git 
a/docs/modules/acquisitions/assets/images/media/acq_workflow.jpg b/docs/modules/acquisitions/assets/images/media/acq_workflow.jpg new file mode 100644 index 0000000000..1143cfa017 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/acq_workflow.jpg differ diff --git a/docs/modules/acquisitions/assets/images/media/display_copy_count_1.JPG b/docs/modules/acquisitions/assets/images/media/display_copy_count_1.JPG new file mode 100644 index 0000000000..ab4ac66ddf Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/display_copy_count_1.JPG differ diff --git a/docs/modules/acquisitions/assets/images/media/display_copy_count_2.JPG b/docs/modules/acquisitions/assets/images/media/display_copy_count_2.JPG new file mode 100644 index 0000000000..368f36f09c Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/display_copy_count_2.JPG differ diff --git a/docs/modules/acquisitions/assets/images/media/load_marc_order_records.png b/docs/modules/acquisitions/assets/images/media/load_marc_order_records.png new file mode 100644 index 0000000000..599777658d Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/load_marc_order_records.png differ diff --git a/docs/modules/acquisitions/assets/images/media/patronrequests_requestform.PNG b/docs/modules/acquisitions/assets/images/media/patronrequests_requestform.PNG new file mode 100644 index 0000000000..fbf7d03cdb Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/patronrequests_requestform.PNG differ diff --git a/docs/modules/acquisitions/assets/images/media/patronrequests_requestgrid.PNG b/docs/modules/acquisitions/assets/images/media/patronrequests_requestgrid.PNG new file mode 100644 index 0000000000..5a7afdbafb Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/patronrequests_requestgrid.PNG differ diff --git a/docs/modules/acquisitions/assets/images/media/po_name_detection_1.JPG 
b/docs/modules/acquisitions/assets/images/media/po_name_detection_1.JPG new file mode 100644 index 0000000000..93b34b4cfe Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/po_name_detection_1.JPG differ diff --git a/docs/modules/acquisitions/assets/images/media/po_name_detection_2.JPG b/docs/modules/acquisitions/assets/images/media/po_name_detection_2.JPG new file mode 100644 index 0000000000..e3913d6c61 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/po_name_detection_2.JPG differ diff --git a/docs/modules/acquisitions/assets/images/media/po_name_detection_3.JPG b/docs/modules/acquisitions/assets/images/media/po_name_detection_3.JPG new file mode 100644 index 0000000000..f0b046fd36 Binary files /dev/null and b/docs/modules/acquisitions/assets/images/media/po_name_detection_3.JPG differ diff --git a/docs/modules/acquisitions/nav.adoc b/docs/modules/acquisitions/nav.adoc new file mode 100644 index 0000000000..54d2a5cda4 --- /dev/null +++ b/docs/modules/acquisitions/nav.adoc @@ -0,0 +1,8 @@ +* xref:acquisitions:introduction.adoc[Acquisitions] +** xref:acquisitions:selection_lists_po.adoc[Selection Lists and Purchase Orders] +** xref:acquisitions:vandelay_acquisitions_integration.adoc[Load MARC Order Records] +** xref:acquisitions:invoices.adoc[Invoices] +** xref:acquisitions:purchase_requests_management.adoc[Managing patron purchase requests] +** xref:acquisitions:purchase_requests_patron_view.adoc[Placing purchase requests from a patron record] +** xref:acquisitions:blanket.adoc["Blanket" Orders] + diff --git a/docs/modules/acquisitions/pages/_attributes.adoc b/docs/modules/acquisitions/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/acquisitions/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. 
+include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/acquisitions/pages/blanket.adoc b/docs/modules/acquisitions/pages/blanket.adoc new file mode 100644 index 0000000000..bfc6db9c60 --- /dev/null +++ b/docs/modules/acquisitions/pages/blanket.adoc @@ -0,0 +1,39 @@ += "Blanket" Orders = +:toc: + +"Blanket" orders allow staff to invoice an encumbered amount multiple times, paying off the charge over a period of time. The workflow supported by this development assumes staff does not need to track the individual contents of the order, only the amounts encumbered and invoiced in bulk. + +== Example == + +. Staff creates a PO with a Direct Charge of "Popular Fiction 2015" and a charge type of "Blanket Order". + +. The amount entered for the charge equals the total amount expected to be charged over the duration of the order. + +. When a shipment of "Popular Fiction" items arrives, staff creates an invoice from the "Popular Fiction 2015" PO page and enters the amount billed/paid for the received shipment under the "Popular Fiction 2015" charge in the invoice. + +. When the final shipment arrives, staff selects the _Final invoice for Blanket Order_ option on the invoice screen to mark the PO as _received_ and drop any remaining encumbrances to $0. + + .. Alternatively, if the PO needs to be finalized without creating a final invoice, staff can use the new _Finalize Blanket Order_ option on the PO page. + +== More details about blanket orders == + +* Any direct charge using a _blanket_ item type will create a long-lived charge that can be invoiced multiple times. + +* Such a charge is considered open until its purchase order is "finalized" (received). + +* "Finalizing" a PO changes the PO's state to _received_ (assuming there are no pending lineitems on the PO) and fully disencumbers all _blanket_ charges on the PO by setting the fund_debit amount to $0 on the original fund_debit for the charge. 
+ +* Invoicing a _blanket_ charge does the following under the covers: + + .. Create an invoice_item to track the payment + + .. Create a new fund_debit to implement the payment whose amount matches the invoiced amount. + + .. Subtract the invoiced amount from the fund_debit linked to the original _blanket_ po_item, thus reducing the amount encumbered on the charge as a whole by the invoiced amount. + +* A PO can have multiple blanket charges. E.g. you could have a blanket order for "Popular Fiction 2015" and a second charge for "Pop Fiction 2015 Taxes" to track / pay taxes over time on a blanket charge. + +* A PO can have a mix of lineitems, non-blanket charges, and blanket charges. + +* A _blanket_ Invoice Item Type cannot also be a _prorate_ type, since the combination is nonsensical. Blanket items are encumbered, whereas prorated items are only paid at invoice time and never encumbered. + diff --git a/docs/modules/acquisitions/pages/introduction.adoc b/docs/modules/acquisitions/pages/introduction.adoc new file mode 100644 index 0000000000..54397cfd4a --- /dev/null +++ b/docs/modules/acquisitions/pages/introduction.adoc @@ -0,0 +1,26 @@ += Acquisitions = +:toc: + +== Initial Configuration == + +Before beginning to use Acquisitions, the following must be configured by an administrator: + +* Cancel/Suspend Reasons (optional) +* Claiming (optional) +* Currency Types (defaults exist) +* Distribution Formulas (optional) +* EDI Accounts (optional) +* Exchange Rates (defaults exist) +* Funds and Fund Sources +* Invoice Types (defaults exist) and Invoice Payment Methods +* Line Item Features (optional) +* Merge Overlay Profiles and Record Match Sets +* Providers + +More details can be found in the Staff Client System Administration manual. + +== Acquisitions Workflow == + +The following diagram shows how the workflow functions in Evergreen. 
One difference to note in this process is that when creating a selection list on the vendor's site, libraries download and import the vendor's bibliographic and item records. + +image::media/acq_workflow.jpg[workflow diagram] diff --git a/docs/modules/acquisitions/pages/invoices.adoc b/docs/modules/acquisitions/pages/invoices.adoc new file mode 100644 index 0000000000..96d222bd53 --- /dev/null +++ b/docs/modules/acquisitions/pages/invoices.adoc @@ -0,0 +1,332 @@ += Invoices = +:toc: + +== Introduction == + +indexterm:[acquisitions,invoices] + +You can create invoices for purchase orders, individual line items, and blanket purchases. You can also link existing invoices to purchase orders. + +You can invoice items before you receive them, if desired. You can also +reopen closed invoices, and you can print all invoices. + +== Creating invoices and adding line items == +You can add specific line items to an invoice from the PO or acquisitions +search results screen. You can also search for relevant line items from within +the invoice interface. In addition, you can add all line items from an entire +purchase order to an invoice, or you can create a blanket invoice for items that are not +attached to a purchase order. + +=== Creating a blanket invoice === + +You can create a blanket invoice for purchases that are not attached to a purchase order. + +. Click _Acquisitions_ -> _Create invoice_. +. Enter the invoice information in the top half of the screen. +. To add charges for materials not attached to a purchase order, click _Add +Charge..._ This functionality may also be used to add shipping, tax, and other fees. +. Select a charge type from the drop-down menu. ++ +[NOTE] +New charge types can be added via _Administration_ -> _Acquisitions +Administration_ -> _Invoice Item Types_. ++ +. Select a fund from the drop-down menu. +. Enter a _Title/Description_ of the resource. +. Enter the amount that you were billed. +. Enter the amount that you paid. 
+. Save the invoice. + +image::media/acq_invoice_blanket.png[Blanket invoice] + +=== Adding line items from a Purchase Order or search results screen to an invoice === + +You can create an invoice or add line items to an invoice directly from a +Purchase Order or an acquisitions search results screen. + +. Place a checkmark in the box for selected line items from the Purchase Order or acquisitions search results page. +. If you are creating a new invoice, click _Actions_ -> _Create Invoice From +Selected Line Items_. Enter the invoice information in the top half of the +screen. +. If you are adding the line items to an existing invoice, click _Actions_ -> +_Link Selected Line Items to Invoice_. Enter the Invoice # and Provider and +then click the _Link_ button. +. Evergreen automatically enters the number of items that were ordered in +the # Invoiced and # Paid fields. Adjust these quantities as needed. +. Enter the amount that the organization was billed. This entry will +automatically propagate to the Paid field. +. You have the option to add charge types if applicable. Charge types are +additional charges that can be selected from the drop-down menu. Common charge +types include taxes and handling fees. +. You have four options for saving an invoice. + +- Click _Save_ to save the changes you have made while staying in the current +invoice. +- Click _Save & Clear_ to save the changes you have made and to replace the +current invoice with a new invoice so that you can continue invoicing items. +- Click _Prorate_ to save the invoice and prorate any additional charges, such +as taxes, across funds, if multiple funds have been used to pay the invoice. + ++ +[NOTE] +Prorating will only be applied to charge types that have the _Prorate?_ flag set +to true. This setting can be adjusted via _Administration_ -> +_Acquisitions Administration_ -> _Invoice Item Types_. ++ + +- Click _Close_. Choose this option when you have completed the invoice. 
This +option will also save any changes that have been made. Funds will be disencumbered when the invoice is closed. + +. You can re-open a closed invoice by clicking the link, _Re-open invoice_. This +link appears at the bottom of a closed invoice. + +=== Search for line items from an invoice === + +indexterm:[acquisitions,lineitems,searching for] +indexterm:[acquisitions,invoices,searching for lineitems] + +You can open an invoice, search for line items from +the invoice, and add your search results to a new or existing invoice. This +feature is especially useful when you want to populate an invoice with line +items from multiple purchase orders. + +In this example, we'll add line items to a new invoice: + +indexterm:[acquisitions,lineitems,adding] + +. Click _Acquisitions_ -> _Create Invoice_. +. An invoice summary appears at the top of the invoice and includes the number +of line items on the invoice and the expected cost of the items. This number +will change as we add line items to the invoice. +. Enter the invoice details (optional). If you do not enter the invoice +details, then Evergreen will populate the _Provider_ and _Receiver_ fields with +information from the line items. ++ +NOTE: If you do not want to display the details, click _Hide Details_. ++ +image::media/Search_for_line_items_from_an_invoice1.jpg[Search_for_line_items_from_an_invoice1] ++ +. Click the _Search_ tab to add line items to an invoice. +. Select your search criteria from the drop-down menu. +. On the right side of the screen, _Limit to Invoiceable Items_ is checked by +default. Invoiceable items are those that are on order, have not been +cancelled, and have not yet been invoiced. Evergreen also filters out items +that have already been added to an invoice. Finally, if this box is checked, +and if you entered the invoice details at the top of the screen, then Evergreen +will filter your search for items that have the same provider as the one that +you entered. 
If you have not entered the invoice details, then Evergreen +removes this limit. +. Sort by title (optional). By default, results are listed by line item +number. Check this box to sort by ascending title. +. Build the results list progressively (optional). By default, new search +results will replace previous results on the screen. Check this box for the +search results list to build with each subsequent search. This option is useful +for libraries that might search for line items by scanning an ISBN. Several +ISBNs can be scanned and then the entire result set can be selected and moved +to the invoice in a batch. +. Click _Search_. ++ +image::media/Search_for_line_items_from_an_invoice2.jpg[Search_for_line_items_from_an_invoice2] ++ +. Use the _Next_ button to page through results, or select one or more line items, and +click _Add Selected Items to Invoice_. +. The rows that you selected are highlighted, and the invoice summary at the +top of the screen updates. ++ +image::media/Search_for_line_items_from_an_invoice3.jpg[Search_for_line_items_from_an_invoice3] ++ +. Click the _Invoice_ tab to see the updated invoice. +. Evergreen automatically enters the number of items that were ordered in the +# Invoiced and # Paid fields. Adjust these quantities as needed. +. Enter the amount that the organization was billed. This entry will +automatically propagate to the Paid field. The _Per Copy_ field calculates the +cost of each copy by dividing the amount that was billed by the number of +copies for which the library paid. + +image::media/Search_for_line_items_from_an_invoice5.jpg[Search_for_line_items_from_an_invoice5] + +=== Create an invoice for a purchase order === + +You can create an invoice for all of the line items on a purchase order. With +the exception of fields with drop-down menus, no limitations on the data that you enter exist. + +. Open a purchase order. +. Click _Create Invoice_. +. Enter a Vendor Invoice ID. 
This number may be listed on the paper invoice +sent from your vendor. +. Choose a Receive Method from the drop-down menu. The system will default to +_Paper_. +. The Provider is generated from the purchase order and is entered by default. +. Enter a note (optional). +. Select a payment method from the drop-down menu (optional). +. The Invoice Date is entered by default as the date that you create the +invoice. You can change the date by clicking in the field. A calendar drops +down. +. Enter an Invoice Type (optional). +. The Shipper defaults to the provider that was entered in the purchase order. +. Enter a Payment Authorization (optional). +. The Receiver defaults to the branch at which your workstation is registered. +You can change the receiver by selecting an org unit from the drop-down menu. ++ +[NOTE] +The bibliographic line items are listed in the next section of the invoice. +Along with the _title_ and _author_ of the line items is a _summary of copies +ordered, received, invoiced, claimed,_ and _cancelled_. You can also view the +_amounts estimated, encumbered,_ and _paid_ for each line item. Finally, each +line item has a _line item ID_ and links to the _selection list_ (if used) and +the _purchase order_. ++ +. Evergreen automatically enters the number of items that were ordered in the +# Invoiced and # Paid fields. Adjust these quantities as needed. +. Enter the amount that the organization was billed. This entry will +automatically propagate to the Paid field. The _Per Copy_ field calculates the +cost of each copy by dividing the amount that was billed by the number of +copies for which the library paid. +. You have the option to add charge types if applicable. Charge types are +additional charges that can be selected from the drop-down menu. Common charge +types include taxes and handling fees. +. You have four options for saving an invoice. + +- Click _Save_ to save the changes you have made while staying in the current +invoice. 
+- Click _Save & Clear_ to save the changes you have made and to replace the +current invoice with a new invoice so that you can continue invoicing items. +- Click _Prorate_ to save the invoice and prorate any additional charges, such +as taxes, across funds, if multiple funds have been used to pay the invoice. + ++ +[NOTE] +Prorating will only be applied to charge types that have the _Prorate?_ flag set +to true. This setting can be adjusted via _Administration_ -> +_Acquisitions Administration_ -> _Invoice Item Types_. ++ + +- Click _Close_. Choose this option when you have completed the invoice. This +option will also save any changes that have been made. Funds will be disencumbered when the invoice is closed. + +. You can re-open a closed invoice by clicking the link, _Re-open invoice_. This +link appears at the bottom of a closed invoice. + +=== Link an existing invoice to a purchase order === + +You can use the link invoice feature to link an existing invoice to a purchase +order. For example, an invoice is received for a shipment with items on +purchase order #1 and purchase order #2. When the invoice arrives, purchase +order #1 is retrieved, and the invoice is created. To receive the items on +purchase order #2, simply link the invoice to the purchase order. You do not +need to recreate it. + +. Open a purchase order. +. Click _Link Invoice_. +. Enter the Invoice # and the Provider of the invoice to which you wish to link. +. Click _Link_. + +image::media/acq_invoice_link.png[Link Invoice] + +== Electronic Invoicing == + +indexterm:[acquisitions,invoices,electronic] + +Evergreen can receive electronic invoices from providers. To +access an electronic invoice: + +. Configure EDI for your provider. +. Evergreen will then receive invoices electronically from the provider. +. Click _Acquisitions_ -> _Open Invoices_ to view a list of open invoices, or +use the _General Search_ to retrieve invoices. Click a hyperlinked invoice +number to view the invoice. 
+ +image::media/Electronic_invoicing1.jpg[Electronic_invoicing1] + +== View an invoice == + +You can view an invoice in one of four ways: view open invoices; view invoices +on a purchase order; view invoices by searching specific invoice fields; view +invoices attached to a line item. + +. To view open invoices, click _Acquisitions_ -> _Open invoices_. This opens +the Acquisitions Search screen. The default fields search for open invoices. +Click _Search_. ++ +image::media/acq_invoice_view.png[Open Invoice Search] ++ +. To view invoices on a purchase order, open a purchase order and click the +_View Invoices_ link. The number in parentheses indicates the number of +invoices that are attached to the purchase order. ++ +image::media/acq_invoice_view-2.png[View Invoices from PO] ++ +. To view invoices by searching specific invoice fields, see the section on +searching the acquisitions module. +. To view invoices for a line item, see the section on line item invoices. + +== Receive Items From an Invoice == + +This feature enables users to receive items from an invoice. Staff can receive individual copies, or they can receive items in batch. + +=== Receive Items in Batch (Numeric Mode) === + +In this example, we have created a purchase order, added line items and copies, and activated the purchase order. We will create an invoice from the purchase order, receive items, and invoice them. We will receive the items in batch from the invoice. + +1) Retrieve a purchase order. + +2) Click *Create Invoice*. + +image::media/Receive_Items_From_an_Invoice1.jpg[Receive_Items_From_an_Invoice1] + +3) The blank invoice appears. In the top half of the invoice, enter descriptive information about the invoice. In the bottom half of the invoice, enter the number of items for which you were invoiced, the amount that you were billed, and the amount that you paid. + + +image::media/Receive_Items_From_an_Invoice2.jpg[Receive_Items_From_an_Invoice2] + + +4) Click *Save*. 
You must choose a save option before you can receive items. + + +5) The screen refreshes. In the top right corner of the screen, click *Receive Items*. + + +6) The *Acquisitions Invoice Receiving* screen opens. By default, this screen enables users to receive items in batch, or *Numeric Mode*. You can select the number of copies that you want to receive; you are not receiving specific copies in this mode. + + +7) Select the number of copies that you want to receive. By default, the number that you invoiced will appear. In this example, we will receive one copy of each title. + + +NOTE: You cannot receive fewer items than 0 (zero) or more items than the number that you ordered. + + +8) Click *Receive Selected Copies*. + + +image::media/Receive_Items_From_an_Invoice4.jpg[Receive_Items_From_an_Invoice4] + + +9) When you are finished receiving items, close the screen. You can repeat this process as you receive more copies. + + + +=== Receive Specific Copies (List Mode) === + +In this example, we have created a purchase order, added line items and copies, and activated the purchase order. We will create an invoice from the purchase order, receive items, and invoice them. We will receive specific copies from the invoice. This function may be useful to libraries that purchase items that have been barcoded by their vendor. + + +1) Complete steps 1-5 in the previous section. + +2) The *Acquisitions Invoice Receiving* screen by default enables users to receive items in batch, or *Numeric Mode*. Click *Use List Mode* to receive specific copies. + +3) Select the check boxes adjacent to the copies that you want to receive. Leave unchecked the copies that you do not want to receive. + +4) Click *Receive Selected Copies*. + +image::media/Receive_Items_From_an_Invoice5.jpg[Receive_Items_From_an_Invoice5] + + +The screen will refresh. Copies that have not yet been received remain on the screen so that you can receive them when they arrive. 
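The receiving rule noted earlier — you cannot receive fewer items than 0 (zero) or more items than the number that you ordered — amounts to a simple clamp on the requested quantity. A minimal illustrative sketch (the function and variable names are ours, not Evergreen's):

```python
def receivable(requested, ordered, already_received):
    """Clamp a requested receive quantity to what the screen allows:
    never below zero, never more than the copies still outstanding."""
    remaining = ordered - already_received
    return max(0, min(requested, remaining))

# Ordered 3 copies, 1 already received: asking for 5 yields only 2 more.
print(receivable(5, 3, 1))   # -> 2
# A negative request is clamped to zero.
print(receivable(-1, 3, 0))  # -> 0
```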
+ + +5) When all copies on an invoice have been received, a message confirms that no copies remain to be received. + +6) The purchase order records that all items have been received. + +image::media/Receive_Items_From_an_Invoice7.jpg[Receive_Items_From_an_Invoice7] + diff --git a/docs/modules/acquisitions/pages/purchase_requests_management.adoc b/docs/modules/acquisitions/pages/purchase_requests_management.adoc new file mode 100644 index 0000000000..2de688c5c9 --- /dev/null +++ b/docs/modules/acquisitions/pages/purchase_requests_management.adoc @@ -0,0 +1,133 @@ += Managing patron purchase requests = +:toc: + +== Introduction == + +indexterm:[purchase requests] + +Patron Requests can be used to track purchase suggestions from patrons in Evergreen. This feature allows purchase requests to be placed on selection lists to integrate with the Acquisitions module. Patron Requests can be accessed through the Acquisitions module under *Acquisitions -> Patron Requests* and through patron accounts under *Other -> Acquisition Patron Requests*. Requests can be placed and managed through both interfaces. + +== Place a Patron Request == + +. Go to *Acquisitions -> Patron Requests*. This interface is scoped by Patron Home Library and will default to the library your workstation is registered to. +.. Requests can also be placed directly through a patron account, in which case the interface will scope to the patron ID. ++ +image::media/patronrequests_requestgrid.PNG[Patron Requests Grid] ++ +. Click *Create Request* and a modal with the patron request form will appear. +. Create the request by filling out the following information: +.. _User Barcode_ (required): enter the barcode of the user that is placing the request +.. _User ID_: this field will populate automatically when the User Barcode is entered +.. _Request Date/Time_: this field will populate automatically +.. 
_Need Before Date/Time_: if applicable, set the date and time after which the patron is no longer interested in receiving this title +.. _Place Hold?_: check this box to place a hold on this title for this patron. Holds are placed when the bib and item record are created in the catalog as part of the acquisitions process. +.. _Pickup Library_: pickup library for the hold. This field will default to the patron’s home library if the pickup library is not selected in the patron account. +.. _Notify by Email When Hold is Ready_ and _Notify by Phone When Hold is Ready_: preferences set in the patron account will be used, or they can be set manually here. +.. _Request Type_ (required): type of material requested +.. _ISxN_ +.. _UPC_ +.. _Title_ +.. _Volume_ +.. _Author_ +.. _Publisher_ +.. _Publication Location_ +.. _Publication Date_ +.. _Article Title_: option available if Request Type is “Articles” +.. _Article Pages_: option available if Request Type is “Articles” +.. _Mentioned In_ +.. _Other Info_ +. Click *Save* at the bottom of the form. + +image::media/patronrequests_requestform.PNG[Patron Requests Form] + + +== Actions for Requests == + +After placing a Patron Request, a variety of actions can be taken by selecting the request, or right-clicking, and selecting Actions within either *Acquisitions -> Patron Requests* or through the patron account under *Other -> Acquisition Patron Requests*: + +* *Edit Request* - make changes to the request via the original request form. Edits can be made when the status of a request is New. +* *View Request* - view a read-only version of the request form +* *Retrieve Patron* - retrieve the account of the patron who placed the request +* *Add Request to Selection List* - add the request to a new or existing Selection List in the Acquisitions module. The bibliographic information in the request will generate the MARC order record. 
From the selection list, the request will be processed through the acquisitions module and the status of the request itself will be updated accordingly. +* *View Selection List* - view the Selection List a request has been added to (this option will be active only if the request is on a selection list) +* *Set Hold on Requests* - allows you to indicate that a hold should be placed on the requested title, without needing to go in and edit the request. You can set a hold as long as the status of the request is New or Pending. +* *Set No Hold on Requests* - allows you to indicate that a hold should not be placed on the requested title, without needing to go in and edit the request individually. +* *Cancel Requests* - cancel the request and select a cancellation reason + +== Administration == + +=== Request Status === + +Patron Requests will use the following statuses: + +* *New* - This is the initial state for a newly created acquisition request. This is the only state from which a request is editable. +* *Pending* - This is the state after a request is added to a selection list. +* *Ordered, Hold Not Placed* - This is the state when an associated purchase order has been created and the request's Place Hold flag is false. +* *Ordered, Hold Placed* - This is the state when the request's Place Hold flag is true, an associated purchase order has been created, and the bibliographic record and item for the request have been created in the catalog as part of the acquisitions process. +* *Received* - This is the state when the line item on the linked purchase order has been marked as received. +* *Fulfilled* - This is the state when an associated hold request has been fulfilled. +* *Canceled* - This is the state when the acquisition request has been canceled. + +=== Notifications/Action Triggers === + +The following email notifications are included with Evergreen, but are disabled by default. 
The notices can be enabled through the *Notifications/Action Triggers* interface under *Administration -> Local Administration*. The existing notices could also be modified to place a message in the *Patron Message Center*. Any enabled notifications related to holds placed on requests will also be sent to patrons. + +* Email Notice: Acquisition Request created +* Email Notice: Acquisition Request Rejected +* Email Notice: Patron Acquisition Request marked On-Order +* Email Notice: Patron Acquisition Request marked Cancelled +* Email Notice: Patron Acquisition Request marked Received + +=== Permissions === + +This feature includes one new permission and makes use of several existing permissions. The following permissions are required to manage patron requests: + +* CLEAR_PURCHASE_REQUEST +** A new permission that allows users to clear completed requests +** This permission has been added to the stock Acquisitions permission group +** user_request.update will still be required for this sort of action +** The stock permission mappings for the Acquisitions group will be changed to include this permission +* CREATE_PICKLIST +** Will allow the staff user to create a selection list. +* VIEW_USER +** Permission depth will apply to requests. If a user tries to view a patron request that is beyond the scope of their permissions, a permission denied message will appear with a prompt to log in with different credentials. +* STAFF_LOGIN +* user_request.create +* user_request.view +* user_request.update +** This is checked when updating a request or canceling a request +* user_request.delete + +== Placing purchase requests from a patron record == + +indexterm:[patrons, purchase requests] + +Patrons may wish to suggest titles for your Library to purchase. You can track these requests within Evergreen, +whether or not you are using the acquisitions module for other purposes. This section describes how you can record +these requests within a patron's record. + +. 
Retrieve the patron's record. + +. Select Other --> Acquisition Patron Requests. This takes you to the Acquisition Patron Requests Screen. CTRL+click or scrollwheel click to open this in a new browser tab. + +. The Acquisition Patron Requests Screen will show any other requests that this patron has made. You may sort the requests by clicking on the column headers. + +. To show canceled requests, click the _Show Canceled Requests_ checkbox. + +. To add the request, click the _Create Request_ button. ++ +NOTE: You will need the CREATE_PURCHASE_REQUEST permission to add a request. ++ +. The request type field is required. Every other field is optional, although it is recommended that you enter as much information about the +request as possible. + +. The _Pickup Library_ and _User ID_ fields will be filled in automatically. + +. _Request Date/Time_ and _User Barcode_ will be automatically recorded when the request is saved. + +. _Notify by Email When Hold is Ready_ and _Notify by Phone When Hold is Ready_ will pull in preferences from the patron account if left blank, or can be set manually here. + +. You have the option to automatically place a hold for the patron if your library decides to purchase the item. If you'd like Evergreen to +generate this hold, check the _Place Hold_ box. + +. When you have finished entering information about the request, click the _Save_ button. diff --git a/docs/modules/acquisitions/pages/selection_lists_po.adoc b/docs/modules/acquisitions/pages/selection_lists_po.adoc new file mode 100644 index 0000000000..462d260611 --- /dev/null +++ b/docs/modules/acquisitions/pages/selection_lists_po.adoc @@ -0,0 +1,323 @@ += Selection Lists and Purchase Orders = +:toc: + +== Selection Lists == + +Selection lists allow you to create, manage, and save lists of items +that you may want to purchase. To view your selection lists, click +*Acquisitions* -> *My Selection Lists*. Use the general search to view selection lists created by other users. 
+ +=== Create a selection list === + +Selection lists can be created in four areas within the module. Selection lists can be created when you xref:#brief_records[Add Brief Records], Upload MARC Order Records, or find records through the xref:#marc_federated_search[MARC Federated Search]. In each of these interfaces, you will find the Add to Selection List field. Enter the name of the selection list that you want to create in that field. + +Selection lists can also be created through the My Selection Lists interface: + +. Click *Acquisitions* -> *My Selection Lists*. +. Click the New Selection List drop down arrow. +. Enter the name of the selection list in the box that appears. +. Click Create. + +image::media/acq_selection_create.png[create selection list] + +=== Add items to a selection list === + +You can add items to a selection list in one of four ways: xref:#brief_records[add a brief record]; upload MARC order records; add records through a xref:#marc_federated_search[federated search]; or use the View/Place Orders menu item in the catalog. + +=== Clone selection lists === + +Cloning selection lists enables you to copy one selection list into a new selection list. You can maintain both copies of the list, or you can delete the previous list. + +. Click *Acquisitions* -> *My Selection Lists*. +. Check the box adjacent to the list that you want to clone. +. Click Clone Selected. +. Enter a name into the box that appears, and click Clone. + +image::media/acq_selection_clone.png[clone selection list] + +=== Merge selection lists === + +You can merge two or more selection lists into one selection list. + + +. Click *Acquisitions* -> *My Selection Lists*. +. Check the boxes adjacent to the selection lists that you want to merge, and click Merge Selected. +. Choose the Lead Selection List from the drop down menu. This is the list to which the items on the other list(s) will be transferred. +. Click Merge. 
+ +image::media/acq_selection_merge.png[merge selection list] + +=== Delete selection lists === + +You can delete selection lists that you do not want to save. You will not be able to retrieve these items through the General Search after you have deleted the list. You must delete all line items from a selection list before you can delete the list. + + +. Click *Acquisitions* -> *My Selection Lists*. +. Check the box adjacent to the selection list(s) that you want to delete. +. Click Delete Selected. + +=== Mark Ready for Selector === + +After an item has been added to a selection list or purchase order, you can mark it ready for selector. This step is optional but may be useful to individual workflows. + + +. If you want to mark part of a selection list ready for selector, then you can check the box(es) of the line item(s) that you wish to mark ready for selector. If you want to mark the entire list ready for selector, then skip to step 2. +. Click *Actions* -> *Mark Ready for Selector*. +. A pop up box will appear. Choose to mark the selected line items or all line items. +. Click Go. +. The screen will refresh. The marked line item(s) will be highlighted pink, and the status changes to selector-ready. + +image::media/acq_selection_mark_ready.png[mark ready] + +=== Convert selection list to purchase order === + +Use the Actions menu to convert a selection list to a purchase order. + + +. From a selection list, click *Actions* -> *Create Purchase Order*. +. A pop up box will appear. +. Select the ordering agency from the drop down menu. +. Enter the provider. +. Check the box adjacent to prepayment required if prepayment is required. +. Choose if you will add All Lineitems or Selected Lineitems to your purchase order. +. Check the box if you want to Import Bibs and Create Copies in the catalog. +. Click Submit. + + +[#purchase_orders] +== Purchase Orders == + +Purchase Orders allow you to keep track of orders and, if EDI is enabled, communicate with your provider. 
+To view purchase orders, click +*Acquisitions* -> *Purchase Orders*. + +=== Naming your purchase order === + +You can give your purchase order a name. + +When creating a purchase order or editing an existing purchase order, the purchase order name must be unique for the ordering agency. Evergreen will display a warning dialog to users if they attempt to create or edit purchase order names that match the names of already existing purchase orders at the same ordering agency. The *Duplicate Purchase Order Name Warning Dialog* includes a link that will open the matching purchase order in a new tab. + +Purchase Order Names are case sensitive. + +*Duplicate PO Name Detection When Creating a New Purchase Order* + +image::media/po_name_detection_1.JPG[PO Name Detection 1] + +When a duplicate purchase order name is detected during the creation of a new purchase order, the user may: + +* Click *View PO* to view the purchase order with the matching name. The purchase order will open in a new tab. +* Click *Cancel* to cancel the creation of the new purchase order. +* Within the _Name (optional)_ field, enter a different, unique name for the new purchase order. + +If the purchase order name is unique for the ordering agency, the user will continue filling in the remaining fields and click *Save*. + +If the purchase order name is not unique for the ordering agency, the Save button will remain grayed out to the user until the purchase order is given a unique name. + +*Duplicate PO Name Detection When Editing the Name of an Existing Purchase Order* + +To change the name of an existing purchase order: + +. Within the purchase order, the _Name_ of the purchase order is a link (located at the top left-hand side of the purchase order). Click the PO Name. +. A new window will open, where users can rename the purchase order. +. Enter the new purchase order name. +. Click *OK*. 
+ +image::media/po_name_detection_2.JPG[PO Name Detection 2] + +If the new purchase order name is unique for the ordering agency, the purchase order will be updated to reflect the new name. +If the purchase order name is not unique for the ordering agency, the purchase order will not be updated with the new name. Instead, the user will see the *Duplicate Purchase Order Name Warning Dialog* within the purchase order. + +image::media/po_name_detection_3.JPG[PO Name Detection 3] + +When a duplicate purchase order name is detected during the renaming of an existing purchase order, the user may: + +* Click *View PO* to view the purchase order with the matching name. The purchase order will open in a new tab. +* Repeat the steps to change the name of an existing purchase order and make the name unique. + +=== Activating your purchase order === + +When the appropriate criteria have been met, the Activate Order button will appear and you can proceed with the following: + +. Click the button Activate Order. +. When you activate the order, the bibliographic records and copies will be imported into the catalogue using the Vandelay interface, if not previously imported. See How to Load Bibliographic Records and Items into the Catalogue for instructions on using the Vandelay interface. +. The funds associated with the purchases will be encumbered. + +After you click *Activate Order*, you will be presented with the record import interface for records that are not already in the catalog. Once you have entered the parameters for the record import interface, the progress screen will appear. 
As of Evergreen 2.9, this progress screen consists of a progress bar in the foreground, and a tally of the following in the background of the bottom-left corner: + +* Lineitems processed +* Vandelay Records processed +* Bib Records Merged/Imported +* ACQ Copies Processed +* Debits Encumbered +* Real Copies Processed + +==== Activate Purchase Order without loading items ==== + +It is possible to activate a purchase order without loading items. Once the purchase order has been activated without loading items, it is not possible to load the items. This feature should only be used in situations where the copies have already been added to the catalogue, such as: + +* Cleaning up pre-acquisitions backlog +* Direct purchases that have already been catalogued + +To use this feature, click the Activate Without Loading Items button. + +==== Activate Purchase Order with Zero Copies ==== + +By default, a purchase order cannot be activated if a line item on the +purchase order has zero copies. To activate a purchase order with line +items that have zero copies, check the box *Allow activation with +zero-copy lineitems*. + +image::media/Zero_Copies1.jpg[Zero_Copies1] + +=== Line item statuses === + +The purchase orders interface keeps track of various statuses that your +line items might be in. This section lists some of the statuses you might +see when looking at purchase orders. + +==== Canceled and Delayed Items ==== + +In the purchase order interface, you can easily +differentiate between canceled and delayed items. Each label begins +with *Canceled* or *Delayed*. To view the list, click *Administration* +-> *Acquisitions Administration* -> *Cancel Reasons*. + +The cancel/delay reason label is displayed as the line item status in the list of line items or as the copy status in the list of copies. + +image::media/2_7_Enhancements_to_Canceled2.jpg[Canceled2] + + +image::media/2_7_Enhancements_to_Canceled4.jpg[Canceled4] + +A delayed line item can now be canceled. 
You can mark a line item as delayed, and, if the order later cannot be filled, you can change the line item's status to canceled. When delayed line items are canceled, the encumbrances are deleted. + +Cancel/delay reasons now appear on the worksheet and the printable purchase order. + +[NOTE] +======================== +When all the copies of a line item are canceled through the Acquisitions interface, +the parent lineitem is also canceled. The cancel reason will be calculated based +on the settings of: + +. The cancel reason for the last copy to be canceled, if that cancel reason's +_Keep Debits_ setting is true. +. The cancel reason for any other copy on the line item if the cancel reason's +_Keep Debits_ setting is true. +. The cancel reason for the last copy to be canceled if no copies on the line +item have a cancel reason where _Keep Debits_ is true. +======================== + + +==== Paid PO Line Items ==== + +Purchase Order line items are marked as "Paid" in red text when all non-cancelled copies on the line item have been invoiced. + +image::media/2_10_Lineitem_Paid.png[Paid Lineitem] + + +[#brief_records] +== Brief Records == + +Brief records are short bibliographic records with minimal information that are often used as placeholder records until items are received. Brief records can be added to selection lists or purchase orders and can be imported into the catalog. You can add brief records to new or existing selection lists. You can add brief records to new, pending or on-order purchase orders. + +=== Add brief records to a selection list === + +. Click *Acquisitions* -> *New Brief Record*. You can also add brief records to an existing selection list by clicking the Actions menu on the selection list and choosing Add Brief Record. +. Choose a selection list from the drop down menu, or enter the name of a new selection list. +. Enter bibliographic information in the desired fields. +. Click Save Record. 
+ +image::media/acq_brief_record.png[] + +=== Add brief records to purchase orders === + +You can add brief records to new or existing purchase orders. + +. Open or create a purchase order. See the section on xref:#purchase_orders[purchase orders] for more information. +. Click Add Brief Record. +. Enter bibliographic information in the desired fields. Notice that the record is added to the purchase order that you just created. +. Click Save Record. + +image::media/acq_brief_record-2.png[] + +[#marc_federated_search] +== MARC Federated Search == + +The MARC Federated Search enables you to import bibliographic records into a selection list or purchase order from a Z39.50 source. + +. Click *Acquisitions* -> *MARC Federated Search*. +. Check the boxes of Z39.50 services that you want to search. Your local Evergreen Catalog is checked by default. Click Submit. ++ +image::media/acq_marc_search.png[search form] ++ +. A list of results will appear. Click the "Copies" link to add copy information to the line item. See the xref:#line_items[section on Line Items] for more information. +. Click the Notes link to add notes or line item alerts to the line item. See the xref:#line_items[section on Line Items] for more information. +. Enter a price in the "Estimated Price" field. +. You can save the line item(s) to a selection list by checking the box +on the line item and clicking *Actions* -> *Save Items to Selection +List*. You can also create a purchase order from the line item(s) by +checking the box on the line item and clicking Actions -> Create +Purchase Order. + +image::media/acq_marc_search-2.png[line item] + +[#line_items] +== Line Items == + +=== Return to Line Item === + +This feature enables you to return to a specific line item on a selection list, +purchase order, or invoice after you have navigated away from the page that +contained the line item. This feature is especially useful when you must +identify a line item in a long list. 
After working with a line item, you can +return to your place in the search results or the list of line items. + +To use this feature, select a line item, and then, depending on the location of +the line item, click *Return* or *Return to search*. Evergreen will take you +back to the specific line item in your search and highlight the line item with a +colored box. + +For example, you retrieve a selection list, find a line item to examine, and +click the *Copies* link. After editing the copies, you click *Return*. +Evergreen takes you back to your selection list and highlights the line item +that you viewed. + +image::media/Return_to_line_item1.jpg[Return_to_line_item1] + +This feature is available in _General Search Results_, _Purchase Orders_, and +_Selection Lists_, whenever any of the following links are available: + +* Selection List +* Purchase Order +* Copies +* Notes +* Worksheet + +This feature is available in Invoices whenever any of the following links are +available: + +* Title +* Selection List +* Purchase Order + +=== Display a Count of Existing Copies on Selection List and Purchase Order Lineitems === + +When displaying Acquisitions lineitems within the Selection List and Purchase Order interfaces, Evergreen displays a count of existing catalog copies on the lineitem. The count of existing catalog copies refers to the number of copies owned at the ordering agency and / or the ordering agency's child organization units. + +The counts display for lineitems that have a direct link to a catalog record. Generally, this includes lineitems created as "on order" based on an existing catalog record and lineitems where "Load Bibs and Items" has been applied. + +The count of existing copies does not include copies that are in either a Lost or a Missing status. + +The existing copy count displays in the link "bar" located below the Order Identifier within the lineitem. + +If no existing copies are found, a "0" (zero) will display in plain text. 
+ +If the existing copy count is greater than zero, then the count will display in bold and red on the lineitem. + +image::media/display_copy_count_1.JPG[Display Copy Count 1] + +The user may also hover over the existing copy count to view the accompanying tooltip. + +image::media/display_copy_count_2.JPG[Display Copy Count 2] + + diff --git a/docs/modules/acquisitions/pages/vandelay_acquisitions_integration.adoc b/docs/modules/acquisitions/pages/vandelay_acquisitions_integration.adoc new file mode 100644 index 0000000000..7819cee30d --- /dev/null +++ b/docs/modules/acquisitions/pages/vandelay_acquisitions_integration.adoc @@ -0,0 +1,223 @@ += Load MARC Order Records = +:toc: + +== Introduction == + +The Acquisitions Load MARC Order Record interface enables you to add MARC +records to selection lists and purchase orders and upload the records into the +catalog. You can both create and activate purchase orders in one step from this +interface. You can also load bibs and items into the catalog. + +Leveraging the match sets available in the cataloging MARC batch Import +interface, you can also utilize record matching mechanisms to prevent the +creation of duplicate records. + +For detailed instructions on record matching and importing, see +the cataloging manual. + +== Basic Upload Options == +. Click *Acquisitions* -> *Load MARC Order Records*. +. If you want to upload the MARC records to a new purchase order, then +check _Create Purchase Order_. +. If you want to activate the purchase order at the time of creation, then +check _Activate Purchase Order_. +. Enter the name of the *Provider*. The text will auto-complete. +. Select an org unit from the drop down menu. The context org unit is the org +unit responsible for placing and managing the order. It defines what org unit +settings (eg copy locations) are in scope, what fiscal year to use, who is +allowed to view/modify the PO, where the items should be delivered and the EDI +SAN. 
In the case of a multi-branch system uploading records for multiple +branches, choosing the system is probably best. Single branch libraries or +branches responsible for their own orders should probably select the branch. +. If you want to upload the records to a selection list, you can select a list +from the drop down menu, or type in the name of the selection list that you +want to create. +. Select a *Fiscal Year* from the dropdown menu that matches the fiscal year +of the funds that will be used for the order. If no fiscal year is selected, the +system will use the organizational unit's default fiscal year stored in the +database. If no fiscal year is set, the system will default to the current +calendar year. + +image::media/load_marc_order_records.png[Acquisitions MARC upload screen] + + +== Record Matching Options == +Use the options below the horizontal rule for the system to check for matching +records before importing an order record. + +. Create a queue to which you can upload your records, or add your records to an existing queue. +. Select a *Record Match Set* from the drop-down menu. +. Select a *Merge Profile.* Merge profiles enable you to specify which tags +should be removed or preserved in incoming records. +. Select a *Record Source* from the drop-down menu. +. If you want to automatically import records on upload, select one or more of +the following options. + .. *Import Non-Matching Records* - import any records that don't have a match + in the system. + .. *Merge on Exact Match (901c)* - use only for records that will match on + the 901c field. + .. *Merge on Single Match* - import records that only have one match in the + system. + .. *Merge on Best Match* - If more than one match is found in the catalog for + a given record, Evergreen will attempt to perform the best match as defined + by the match score. +. 
To only import records that have a quality equal to or greater than the +existing record, enter a *Best/Single Match Minimum Quality Ratio*. Divide the +incoming record quality score, as determined by the match set's quality +metrics, by the record quality score of the best match that exists in the +catalog. If you want to ensure that the inbound record is only imported when it +has a higher quality than the best match, then you must enter a ratio that is +higher than 1, such as 1.1. For example, if the incoming record scores 90 and +the best catalog match scores 80, the ratio is 90 / 80 = 1.125, which passes a +1.1 threshold. If you want to bypass all quality restraints, enter +a 0 (zero) in this field. +. Select an *Insufficient Quality Fall-Through Profile* if desired. This field +enables you to indicate that if the inbound record does not meet the +configured quality standards, then you may still import the record using an +alternate merge profile. This field is typically used for selecting a merge +profile that allows the user to import holdings attached to a lower quality +record without replacing the existing (target) record with the incoming record. +This field is optional. +. If your order records contain holdings information, by default, Evergreen +will load them as acquisitions copies. (Note: These can be overlaid with real copies +during the MARC batch importing process.) Or you can select *Load Items for +Imported Records* to load them as live copies that display in the catalog. + +image::media/load_marc_order_records.png[Acquisitions MARC upload screen] + + +== Default Upload Settings == + +You can set default upload values by modifying the following settings in +*Administration* -> *Local Administration* -> *Library Settings Editor*: + +- Upload Activate PO +- Upload Create PO +- Upload Default Insufficient Quality Fall-Thru Profile +- Upload Default Match Set +- Upload Default Merge Profile +- Upload Default Min. 
Quality Ratio +- Upload Default Provider +- Upload Import Non Matching by Default +- Upload Load Items for Imported Records by Default +- Upload Merge on Best Match by Default +- Upload Merge on Exact Match by Default +- Upload Merge on Single Match by Default + +image::media/acq_upload_library_settings.png[Acq upload settings in Library Settings Editor] + + +== Sticky Settings == + +If the above default settings are not implemented, the selections/values used +in the following fields will be sticky and will automatically populate the +fields the next time the *Load MARC Order Records* screen is pulled up: + +- Create Purchase Order +- Activate Purchase Order +- Context Org Unit +- Record Match Set +- Merge Profile +- Import Non-Matching Records +- Merge on Exact Match (901c) +- Merge on Single Match +- Merge on Best Match +- Best/Single Match Minimum Quality Ratio +- Insufficient Quality Fall-Through Profile +- Load Items for Imported Records + +== Use Cases for MARC Order Upload form == + +You can add items to a selection list or purchase order and ignore the record +matching options, or you can use both acquisitions and cataloging functions. In +these examples, you will use both functions. + +*Example 1* +Using the Acquisitions MARC Batch Load interface, upload MARC records to a +selection list and import queue, and match queued records with existing catalog +records. + +In this example, an acquisitions librarian has received a batch of MARC records +from a vendor. She will add the records to a selection list and a Vandelay +record queue. + +A cataloger will later view the queue, edit the records, and import them into +the catalog. + +. Click *Acquisitions -> Load MARC Order Records* +. Add MARC order records to a *Selection list* and/or a *Purchase Order.* +Check the box to create a purchase order if desired. +. Select a *Provider* from the drop-down menu, or begin typing the code for the provider, and the field will auto-fill. +. 
Select a *Context Org Unit* from the drop down menu, or begin typing the code +for the context org unit, and the field will auto-fill. +. Select a *Selection List* from the drop down menu, or begin typing the name +of the selection list. You can create a new list, or the field will auto-fill. +. Create a new record import queue, or upload the records to an existing +queue. +. Select a *Record Match Set*. +. Browse your computer to find the MARC file, and click *Upload*. ++ +image::media/Vandelay_Integration_into_Acquisitions1.jpg[Vandelay_Integration_into_Acquisitions1] ++ +. The processed items appear at the bottom of the screen. ++ +image::media/Vandelay_Integration_into_Acquisitions2.jpg[Vandelay_Integration_into_Acquisitions2] +. You can click the link(s) to access the selection list or the import queue. +Click the link to *View Selection List*. +. Look at the first line item. The line item has not yet been linked to the +catalog, but it is linked to a record import queue. Click the link to the +*queue* to examine the MARC record. ++ +image::media/Vandelay_Integration_into_Acquisitions3.jpg[Vandelay_Integration_into_Acquisitions3] +. The batch import interface opens in a new tab. The bibliographic records +appear in the queue. Records that have matches are identified in the queue. You +can edit these records and/or import them into the catalog, completing the +process. + +image::media/Vandelay_Integration_into_Acquisitions4.jpg[Vandelay_Integration_into_Acquisitions4] + +*Example 2*: Using the Acquisitions MARC Batch Load interface, upload MARC +records to a selection list, and use the Vandelay options to import the records +directly into the catalog. The Vandelay options will enable you to match +incoming records with existing catalog records. + +In this example, a librarian will add MARC records to a selection list, create +criteria for matching incoming and existing records, and import the matching +and non-matching records into the catalog. + +. 
Click *Acquisitions* -> *Load MARC Order Records* +. Add MARC order records to a *Selection list* and/or a *Purchase Order.* +Check the box to create a purchase order if desired. +. Select a *Provider* from the drop down menu, or begin typing the code for the +provider, and the field will auto-fill. +. Select a *Context Org Unit* from the drop down menu, or begin typing the code for the context org unit, and the field will auto-fill. +. Select a *Selection List* from the drop down menu, or begin typing the name +of the selection list. You can create a new list, or the field will auto-fill. +. Create a new record import queue, or upload the records to an existing queue. +. Select a *Record Match Set*. +. Select *Merge Profile* -> *Match-Only Merge*. +. Check the boxes adjacent to *Import Non-Matching Records* and *Merge on Best +Match*. +. Browse your computer to find the MARC file, and click *Upload*. ++ +image::media/Vandelay_Integration_into_Acquisitions5.jpg[Vandelay_Integration_into_Acquisitions5] ++ +. Click the link to *View Selection List*. Line items that do not match +existing catalog records on title and ISBN contain the link, *link to catalog*. +This link indicates that you could link the line item to a catalog record, but +currently, no match exists between the line item and catalog records. Line +items that do have matching records in the catalog contain the link, *catalog*. ++ +image::media/Vandelay_Integration_into_Acquisitions6.jpg[Vandelay_Integration_into_Acquisitions6] ++ +. Click the *catalog* link to view the line item in the catalog. + +*Permissions to use this Feature* + +IMPORT_MARC - Using the batch importer to create new bib records requires the +IMPORT_MARC permission (same as open-ils.cat.biblio.record.xml.import). 
If the +permission fails, the queued record will fail import and be stamped with a new +"import.record.perm_failure" import error + +IMPORT_ACQ_LINEITEM_BIB_RECORD_UPLOAD - This allows interfaces leveraging +the batch importer, such as Acquisitions, to create a higher barrier to entry. +This permission prevents users from creating new bib records directly from the +ACQ vendor MARC file upload interface. diff --git a/docs/modules/admin/_attributes.adoc b/docs/modules/admin/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/admin/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/admin/assets/images/media/Authority_Control_Sets1.jpg b/docs/modules/admin/assets/images/media/Authority_Control_Sets1.jpg new file mode 100644 index 0000000000..9ecdc58489 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Authority_Control_Sets1.jpg differ diff --git a/docs/modules/admin/assets/images/media/Authority_Control_Sets2.jpg b/docs/modules/admin/assets/images/media/Authority_Control_Sets2.jpg new file mode 100644 index 0000000000..4bea42d834 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Authority_Control_Sets2.jpg differ diff --git a/docs/modules/admin/assets/images/media/Authority_Control_Sets4.jpg b/docs/modules/admin/assets/images/media/Authority_Control_Sets4.jpg new file mode 100644 index 0000000000..b563a56400 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Authority_Control_Sets4.jpg differ diff --git a/docs/modules/admin/assets/images/media/Authority_Control_Sets5.jpg b/docs/modules/admin/assets/images/media/Authority_Control_Sets5.jpg new file mode 100644 index 0000000000..a853f4b56c Binary files /dev/null and b/docs/modules/admin/assets/images/media/Authority_Control_Sets5.jpg differ diff 
--git a/docs/modules/admin/assets/images/media/Authority_Control_Sets6.jpg b/docs/modules/admin/assets/images/media/Authority_Control_Sets6.jpg new file mode 100644 index 0000000000..646a4fa9ce Binary files /dev/null and b/docs/modules/admin/assets/images/media/Authority_Control_Sets6.jpg differ diff --git a/docs/modules/admin/assets/images/media/Authority_Control_Sets_Fields.png b/docs/modules/admin/assets/images/media/Authority_Control_Sets_Fields.png new file mode 100644 index 0000000000..999f791dfa Binary files /dev/null and b/docs/modules/admin/assets/images/media/Authority_Control_Sets_Fields.png differ diff --git a/docs/modules/admin/assets/images/media/Authority_Control_Sets_Fields_Edit.png b/docs/modules/admin/assets/images/media/Authority_Control_Sets_Fields_Edit.png new file mode 100644 index 0000000000..e956a11c67 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Authority_Control_Sets_Fields_Edit.png differ diff --git a/docs/modules/admin/assets/images/media/Authority_Server_Admin_Menu.png b/docs/modules/admin/assets/images/media/Authority_Server_Admin_Menu.png new file mode 100644 index 0000000000..fe01149f63 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Authority_Server_Admin_Menu.png differ diff --git a/docs/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search1.jpg b/docs/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search1.jpg new file mode 100644 index 0000000000..6cbf623590 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search1.jpg differ diff --git a/docs/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search2.jpg b/docs/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search2.jpg new file mode 100644 index 0000000000..c17abdb305 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Auto_Suggest_in_Catalog_Search2.jpg differ diff --git a/docs/modules/admin/assets/images/media/Barcode_Check_In.png 
b/docs/modules/admin/assets/images/media/Barcode_Check_In.png new file mode 100644 index 0000000000..454cc4103d Binary files /dev/null and b/docs/modules/admin/assets/images/media/Barcode_Check_In.png differ diff --git a/docs/modules/admin/assets/images/media/Barcode_Checkout_Item_Barcode.png b/docs/modules/admin/assets/images/media/Barcode_Checkout_Item_Barcode.png new file mode 100644 index 0000000000..bf2592e4bc Binary files /dev/null and b/docs/modules/admin/assets/images/media/Barcode_Checkout_Item_Barcode.png differ diff --git a/docs/modules/admin/assets/images/media/Barcode_Checkout_Patron_Barcode.png b/docs/modules/admin/assets/images/media/Barcode_Checkout_Patron_Barcode.png new file mode 100644 index 0000000000..6710a71a8a Binary files /dev/null and b/docs/modules/admin/assets/images/media/Barcode_Checkout_Patron_Barcode.png differ diff --git a/docs/modules/admin/assets/images/media/Barcode_Item_Status.png b/docs/modules/admin/assets/images/media/Barcode_Item_Status.png new file mode 100644 index 0000000000..59e2e45836 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Barcode_Item_Status.png differ diff --git a/docs/modules/admin/assets/images/media/Barcode_OPAC_Staff_Place_Hold.png b/docs/modules/admin/assets/images/media/Barcode_OPAC_Staff_Place_Hold.png new file mode 100644 index 0000000000..116eaa50b9 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Barcode_OPAC_Staff_Place_Hold.png differ diff --git a/docs/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_21.jpg b/docs/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_21.jpg new file mode 100644 index 0000000000..825b46c2f6 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_21.jpg differ diff --git a/docs/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_22.jpg b/docs/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_22.jpg new file 
mode 100644 index 0000000000..1d3f8b621b Binary files /dev/null and b/docs/modules/admin/assets/images/media/Call_Number_Prefixes_and_Suffixes_2_22.jpg differ diff --git a/docs/modules/admin/assets/images/media/Core_Source_1.jpg b/docs/modules/admin/assets/images/media/Core_Source_1.jpg new file mode 100644 index 0000000000..a53710cfe4 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Core_Source_1.jpg differ diff --git a/docs/modules/admin/assets/images/media/ECHClosedDatesEditorAddClosing.png b/docs/modules/admin/assets/images/media/ECHClosedDatesEditorAddClosing.png new file mode 100644 index 0000000000..b549dd15af Binary files /dev/null and b/docs/modules/admin/assets/images/media/ECHClosedDatesEditorAddClosing.png differ diff --git a/docs/modules/admin/assets/images/media/ECHClosingSnowDay.png b/docs/modules/admin/assets/images/media/ECHClosingSnowDay.png new file mode 100644 index 0000000000..0ba89918c2 Binary files /dev/null and b/docs/modules/admin/assets/images/media/ECHClosingSnowDay.png differ diff --git a/docs/modules/admin/assets/images/media/ECHEditClosing.png b/docs/modules/admin/assets/images/media/ECHEditClosing.png new file mode 100644 index 0000000000..d2f67e06d4 Binary files /dev/null and b/docs/modules/admin/assets/images/media/ECHEditClosing.png differ diff --git a/docs/modules/admin/assets/images/media/ECHEditClosingModal.png b/docs/modules/admin/assets/images/media/ECHEditClosingModal.png new file mode 100644 index 0000000000..62d8083cf3 Binary files /dev/null and b/docs/modules/admin/assets/images/media/ECHEditClosingModal.png differ diff --git a/docs/modules/admin/assets/images/media/ECHLibraryClosingConstruction.png b/docs/modules/admin/assets/images/media/ECHLibraryClosingConstruction.png new file mode 100644 index 0000000000..27d4686cfb Binary files /dev/null and b/docs/modules/admin/assets/images/media/ECHLibraryClosingConstruction.png differ diff --git 
a/docs/modules/admin/assets/images/media/ECHLibraryClosingDetailed.png b/docs/modules/admin/assets/images/media/ECHLibraryClosingDetailed.png new file mode 100644 index 0000000000..c85b6460ee Binary files /dev/null and b/docs/modules/admin/assets/images/media/ECHLibraryClosingDetailed.png differ diff --git a/docs/modules/admin/assets/images/media/ECHLibraryClosingDone.png b/docs/modules/admin/assets/images/media/ECHLibraryClosingDone.png new file mode 100644 index 0000000000..2515a78947 Binary files /dev/null and b/docs/modules/admin/assets/images/media/ECHLibraryClosingDone.png differ diff --git a/docs/modules/admin/assets/images/media/ECHLibraryClosingMultipleDays.png b/docs/modules/admin/assets/images/media/ECHLibraryClosingMultipleDays.png new file mode 100644 index 0000000000..62083b0fd3 Binary files /dev/null and b/docs/modules/admin/assets/images/media/ECHLibraryClosingMultipleDays.png differ diff --git a/docs/modules/admin/assets/images/media/Expanding_the_Work_Log1.jpg b/docs/modules/admin/assets/images/media/Expanding_the_Work_Log1.jpg new file mode 100644 index 0000000000..9e899af734 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Expanding_the_Work_Log1.jpg differ diff --git a/docs/modules/admin/assets/images/media/Expanding_the_Work_Log2.jpg b/docs/modules/admin/assets/images/media/Expanding_the_Work_Log2.jpg new file mode 100644 index 0000000000..af3a9b3674 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Expanding_the_Work_Log2.jpg differ diff --git a/docs/modules/admin/assets/images/media/Fiscal_Rollover1.jpg b/docs/modules/admin/assets/images/media/Fiscal_Rollover1.jpg new file mode 100644 index 0000000000..cb4b17366b Binary files /dev/null and b/docs/modules/admin/assets/images/media/Fiscal_Rollover1.jpg differ diff --git a/docs/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location1.jpg b/docs/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location1.jpg new file mode 100644 index 
0000000000..28d0402886 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location1.jpg differ diff --git a/docs/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location2.jpg b/docs/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location2.jpg new file mode 100644 index 0000000000..d3a90756f0 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Maximum_Checkout_by_Copy_Location2.jpg differ diff --git a/docs/modules/admin/assets/images/media/Org_Unit_Prox_Adj1.png b/docs/modules/admin/assets/images/media/Org_Unit_Prox_Adj1.png new file mode 100644 index 0000000000..25e3377fdc Binary files /dev/null and b/docs/modules/admin/assets/images/media/Org_Unit_Prox_Adj1.png differ diff --git a/docs/modules/admin/assets/images/media/Org_Unit_Prox_Adj2.png b/docs/modules/admin/assets/images/media/Org_Unit_Prox_Adj2.png new file mode 100644 index 0000000000..9ccf7e5fa6 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Org_Unit_Prox_Adj2.png differ diff --git a/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group1.jpg b/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group1.jpg new file mode 100644 index 0000000000..7be1b6f554 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group1.jpg differ diff --git a/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group2.png b/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group2.png new file mode 100644 index 0000000000..bc9613a422 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group2.png differ diff --git a/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group3.jpg b/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group3.jpg new file mode 100644 index 
0000000000..fd84093379 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Restrict_Z39_50_Sources_by_Permission_Group3.jpg differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging1.png b/docs/modules/admin/assets/images/media/SMS_Text_Messaging1.png new file mode 100644 index 0000000000..06fa12c6a5 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging1.png differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging11.png b/docs/modules/admin/assets/images/media/SMS_Text_Messaging11.png new file mode 100644 index 0000000000..8f6de4c8a9 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging11.png differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging12.jpg b/docs/modules/admin/assets/images/media/SMS_Text_Messaging12.jpg new file mode 100644 index 0000000000..cf012f0ff4 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging12.jpg differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging13.jpg b/docs/modules/admin/assets/images/media/SMS_Text_Messaging13.jpg new file mode 100644 index 0000000000..541fc26811 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging13.jpg differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging2.png b/docs/modules/admin/assets/images/media/SMS_Text_Messaging2.png new file mode 100644 index 0000000000..b0ed58d237 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging2.png differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging3.jpg b/docs/modules/admin/assets/images/media/SMS_Text_Messaging3.jpg new file mode 100644 index 0000000000..f03f42e5a9 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging3.jpg differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging4.jpg 
b/docs/modules/admin/assets/images/media/SMS_Text_Messaging4.jpg new file mode 100644 index 0000000000..ef001919d4 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging4.jpg differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging5.png b/docs/modules/admin/assets/images/media/SMS_Text_Messaging5.png new file mode 100644 index 0000000000..295a5c4a8e Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging5.png differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging6.png b/docs/modules/admin/assets/images/media/SMS_Text_Messaging6.png new file mode 100644 index 0000000000..5e153b18db Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging6.png differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging7.jpg b/docs/modules/admin/assets/images/media/SMS_Text_Messaging7.jpg new file mode 100644 index 0000000000..cf012f0ff4 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging7.jpg differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging8.png b/docs/modules/admin/assets/images/media/SMS_Text_Messaging8.png new file mode 100644 index 0000000000..3d00d42f1c Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging8.png differ diff --git a/docs/modules/admin/assets/images/media/SMS_Text_Messaging9.png b/docs/modules/admin/assets/images/media/SMS_Text_Messaging9.png new file mode 100644 index 0000000000..4130295545 Binary files /dev/null and b/docs/modules/admin/assets/images/media/SMS_Text_Messaging9.png differ diff --git a/docs/modules/admin/assets/images/media/Saved_Catalog_Searches_2_21.jpg b/docs/modules/admin/assets/images/media/Saved_Catalog_Searches_2_21.jpg new file mode 100644 index 0000000000..ce22623f15 Binary files /dev/null and b/docs/modules/admin/assets/images/media/Saved_Catalog_Searches_2_21.jpg differ diff --git 
a/docs/modules/admin/assets/images/media/Saved_Catalog_Searches_2_22.jpg b/docs/modules/admin/assets/images/media/Saved_Catalog_Searches_2_22.jpg new file mode 100644 index 0000000000..426e3994bf Binary files /dev/null and b/docs/modules/admin/assets/images/media/Saved_Catalog_Searches_2_22.jpg differ diff --git a/docs/modules/admin/assets/images/media/User_Activity_Types1A.jpg b/docs/modules/admin/assets/images/media/User_Activity_Types1A.jpg new file mode 100644 index 0000000000..69d6ab4d39 Binary files /dev/null and b/docs/modules/admin/assets/images/media/User_Activity_Types1A.jpg differ diff --git a/docs/modules/admin/assets/images/media/User_Activity_Types2A.jpg b/docs/modules/admin/assets/images/media/User_Activity_Types2A.jpg new file mode 100644 index 0000000000..27a6ce60d7 Binary files /dev/null and b/docs/modules/admin/assets/images/media/User_Activity_Types2A.jpg differ diff --git a/docs/modules/admin/assets/images/media/acq_marc_search-2.png b/docs/modules/admin/assets/images/media/acq_marc_search-2.png new file mode 100644 index 0000000000..f991a6d423 Binary files /dev/null and b/docs/modules/admin/assets/images/media/acq_marc_search-2.png differ diff --git a/docs/modules/admin/assets/images/media/acq_marc_search.png b/docs/modules/admin/assets/images/media/acq_marc_search.png new file mode 100644 index 0000000000..391ae435a2 Binary files /dev/null and b/docs/modules/admin/assets/images/media/acq_marc_search.png differ diff --git a/docs/modules/admin/assets/images/media/auth_browse_infra1.png b/docs/modules/admin/assets/images/media/auth_browse_infra1.png new file mode 100644 index 0000000000..a68f8afdee Binary files /dev/null and b/docs/modules/admin/assets/images/media/auth_browse_infra1.png differ diff --git a/docs/modules/admin/assets/images/media/auth_browse_infra2.png b/docs/modules/admin/assets/images/media/auth_browse_infra2.png new file mode 100644 index 0000000000..e08b9c5a67 Binary files /dev/null and 
b/docs/modules/admin/assets/images/media/auth_browse_infra2.png differ diff --git a/docs/modules/admin/assets/images/media/autorenew_circdur.PNG b/docs/modules/admin/assets/images/media/autorenew_circdur.PNG new file mode 100644 index 0000000000..eab0ae240a Binary files /dev/null and b/docs/modules/admin/assets/images/media/autorenew_circdur.PNG differ diff --git a/docs/modules/admin/assets/images/media/autorenew_itemsout.PNG b/docs/modules/admin/assets/images/media/autorenew_itemsout.PNG new file mode 100644 index 0000000000..9b044ccabe Binary files /dev/null and b/docs/modules/admin/assets/images/media/autorenew_itemsout.PNG differ diff --git a/docs/modules/admin/assets/images/media/autorenew_norenewnotice.PNG b/docs/modules/admin/assets/images/media/autorenew_norenewnotice.PNG new file mode 100644 index 0000000000..e4449ddd2c Binary files /dev/null and b/docs/modules/admin/assets/images/media/autorenew_norenewnotice.PNG differ diff --git a/docs/modules/admin/assets/images/media/autorenew_renewnotice.PNG b/docs/modules/admin/assets/images/media/autorenew_renewnotice.PNG new file mode 100644 index 0000000000..311eb1fee0 Binary files /dev/null and b/docs/modules/admin/assets/images/media/autorenew_renewnotice.PNG differ diff --git a/docs/modules/admin/assets/images/media/back_to_results.png b/docs/modules/admin/assets/images/media/back_to_results.png new file mode 100644 index 0000000000..b460a349e3 Binary files /dev/null and b/docs/modules/admin/assets/images/media/back_to_results.png differ diff --git a/docs/modules/admin/assets/images/media/best_hold_sort_order1.jpg b/docs/modules/admin/assets/images/media/best_hold_sort_order1.jpg new file mode 100644 index 0000000000..6aba665e93 Binary files /dev/null and b/docs/modules/admin/assets/images/media/best_hold_sort_order1.jpg differ diff --git a/docs/modules/admin/assets/images/media/best_hold_sort_order2.jpg b/docs/modules/admin/assets/images/media/best_hold_sort_order2.jpg new file mode 100644 index 
0000000000..d8e22d6991 Binary files /dev/null and b/docs/modules/admin/assets/images/media/best_hold_sort_order2.jpg differ diff --git a/docs/modules/admin/assets/images/media/blu-ray.png b/docs/modules/admin/assets/images/media/blu-ray.png new file mode 100644 index 0000000000..d44dfe8b38 Binary files /dev/null and b/docs/modules/admin/assets/images/media/blu-ray.png differ diff --git a/docs/modules/admin/assets/images/media/book.png b/docs/modules/admin/assets/images/media/book.png new file mode 100644 index 0000000000..2800684510 Binary files /dev/null and b/docs/modules/admin/assets/images/media/book.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-bookable-1.png b/docs/modules/admin/assets/images/media/booking-create-bookable-1.png new file mode 100644 index 0000000000..7ddded0c87 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-bookable-1.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-bookable-2.png b/docs/modules/admin/assets/images/media/booking-create-bookable-2.png new file mode 100644 index 0000000000..df9a3a47b7 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-bookable-2.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-bookable-3.png b/docs/modules/admin/assets/images/media/booking-create-bookable-3.png new file mode 100644 index 0000000000..49ed80173d Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-bookable-3.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-bookable-4.png b/docs/modules/admin/assets/images/media/booking-create-bookable-4.png new file mode 100644 index 0000000000..6c7128d649 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-bookable-4.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-bookable-5.png 
b/docs/modules/admin/assets/images/media/booking-create-bookable-5.png new file mode 100644 index 0000000000..c501d47cf3 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-bookable-5.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-bookable-6.png b/docs/modules/admin/assets/images/media/booking-create-bookable-6.png new file mode 100644 index 0000000000..2261a9d6a5 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-bookable-6.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-resourcetype-2.png b/docs/modules/admin/assets/images/media/booking-create-resourcetype-2.png new file mode 100644 index 0000000000..ff517c5e87 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-resourcetype-2.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-resourcetype-3.png b/docs/modules/admin/assets/images/media/booking-create-resourcetype-3.png new file mode 100644 index 0000000000..d7a7e384f9 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-resourcetype-3.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-resourcetype-4.png b/docs/modules/admin/assets/images/media/booking-create-resourcetype-4.png new file mode 100644 index 0000000000..0f7317e495 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-resourcetype-4.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-resourcetype-5.png b/docs/modules/admin/assets/images/media/booking-create-resourcetype-5.png new file mode 100644 index 0000000000..784d95ad00 Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-resourcetype-5.png differ diff --git a/docs/modules/admin/assets/images/media/booking-create-resourcetype_webclient-1.png b/docs/modules/admin/assets/images/media/booking-create-resourcetype_webclient-1.png new 
file mode 100644 index 0000000000..6ff2230bbb Binary files /dev/null and b/docs/modules/admin/assets/images/media/booking-create-resourcetype_webclient-1.png differ diff --git a/docs/modules/admin/assets/images/media/braille.png b/docs/modules/admin/assets/images/media/braille.png new file mode 100644 index 0000000000..693d937851 Binary files /dev/null and b/docs/modules/admin/assets/images/media/braille.png differ diff --git a/docs/modules/admin/assets/images/media/caed_6.jpg b/docs/modules/admin/assets/images/media/caed_6.jpg new file mode 100644 index 0000000000..8f9fe85492 Binary files /dev/null and b/docs/modules/admin/assets/images/media/caed_6.jpg differ diff --git a/docs/modules/admin/assets/images/media/casaudiobook.png b/docs/modules/admin/assets/images/media/casaudiobook.png new file mode 100644 index 0000000000..8352607bfa Binary files /dev/null and b/docs/modules/admin/assets/images/media/casaudiobook.png differ diff --git a/docs/modules/admin/assets/images/media/casmusic.png b/docs/modules/admin/assets/images/media/casmusic.png new file mode 100644 index 0000000000..f52327c672 Binary files /dev/null and b/docs/modules/admin/assets/images/media/casmusic.png differ diff --git a/docs/modules/admin/assets/images/media/cdaudiobook.png b/docs/modules/admin/assets/images/media/cdaudiobook.png new file mode 100644 index 0000000000..03d710c04c Binary files /dev/null and b/docs/modules/admin/assets/images/media/cdaudiobook.png differ diff --git a/docs/modules/admin/assets/images/media/cdmusic.png b/docs/modules/admin/assets/images/media/cdmusic.png new file mode 100644 index 0000000000..be5e341c7d Binary files /dev/null and b/docs/modules/admin/assets/images/media/cdmusic.png differ diff --git a/docs/modules/admin/assets/images/media/closed_dates.png b/docs/modules/admin/assets/images/media/closed_dates.png new file mode 100644 index 0000000000..2839d32157 Binary files /dev/null and b/docs/modules/admin/assets/images/media/closed_dates.png differ diff --git 
a/docs/modules/admin/assets/images/media/coded-value-1.png b/docs/modules/admin/assets/images/media/coded-value-1.png new file mode 100644 index 0000000000..9530027d68 Binary files /dev/null and b/docs/modules/admin/assets/images/media/coded-value-1.png differ diff --git a/docs/modules/admin/assets/images/media/column_picker_config_widths.png b/docs/modules/admin/assets/images/media/column_picker_config_widths.png new file mode 100644 index 0000000000..aca3c5ac07 Binary files /dev/null and b/docs/modules/admin/assets/images/media/column_picker_config_widths.png differ diff --git a/docs/modules/admin/assets/images/media/column_picker_dojo.png b/docs/modules/admin/assets/images/media/column_picker_dojo.png new file mode 100644 index 0000000000..5a448efbcf Binary files /dev/null and b/docs/modules/admin/assets/images/media/column_picker_dojo.png differ diff --git a/docs/modules/admin/assets/images/media/column_picker_popup.png b/docs/modules/admin/assets/images/media/column_picker_popup.png new file mode 100644 index 0000000000..87e5168d6a Binary files /dev/null and b/docs/modules/admin/assets/images/media/column_picker_popup.png differ diff --git a/docs/modules/admin/assets/images/media/column_picker_web.png b/docs/modules/admin/assets/images/media/column_picker_web.png new file mode 100644 index 0000000000..fff684591c Binary files /dev/null and b/docs/modules/admin/assets/images/media/column_picker_web.png differ diff --git a/docs/modules/admin/assets/images/media/column_picker_web_save.png b/docs/modules/admin/assets/images/media/column_picker_web_save.png new file mode 100644 index 0000000000..0d390be785 Binary files /dev/null and b/docs/modules/admin/assets/images/media/column_picker_web_save.png differ diff --git a/docs/modules/admin/assets/images/media/copy_status_add.png b/docs/modules/admin/assets/images/media/copy_status_add.png new file mode 100644 index 0000000000..8c01477344 Binary files /dev/null and 
b/docs/modules/admin/assets/images/media/copy_status_add.png differ diff --git a/docs/modules/admin/assets/images/media/copy_status_delete.png b/docs/modules/admin/assets/images/media/copy_status_delete.png new file mode 100644 index 0000000000..525a84ce72 Binary files /dev/null and b/docs/modules/admin/assets/images/media/copy_status_delete.png differ diff --git a/docs/modules/admin/assets/images/media/copy_status_edit.png b/docs/modules/admin/assets/images/media/copy_status_edit.png new file mode 100644 index 0000000000..9bb3a83386 Binary files /dev/null and b/docs/modules/admin/assets/images/media/copy_status_edit.png differ diff --git a/docs/modules/admin/assets/images/media/copytags1.PNG b/docs/modules/admin/assets/images/media/copytags1.PNG new file mode 100644 index 0000000000..aca37bb614 Binary files /dev/null and b/docs/modules/admin/assets/images/media/copytags1.PNG differ diff --git a/docs/modules/admin/assets/images/media/copytags2.PNG b/docs/modules/admin/assets/images/media/copytags2.PNG new file mode 100644 index 0000000000..fa20970097 Binary files /dev/null and b/docs/modules/admin/assets/images/media/copytags2.PNG differ diff --git a/docs/modules/admin/assets/images/media/copytags3.PNG b/docs/modules/admin/assets/images/media/copytags3.PNG new file mode 100644 index 0000000000..6dd1447a79 Binary files /dev/null and b/docs/modules/admin/assets/images/media/copytags3.PNG differ diff --git a/docs/modules/admin/assets/images/media/copytags4.PNG b/docs/modules/admin/assets/images/media/copytags4.PNG new file mode 100644 index 0000000000..8e7cfb5563 Binary files /dev/null and b/docs/modules/admin/assets/images/media/copytags4.PNG differ diff --git a/docs/modules/admin/assets/images/media/create-edi-accounts-2.png b/docs/modules/admin/assets/images/media/create-edi-accounts-2.png new file mode 100644 index 0000000000..59f8ad7e1f Binary files /dev/null and b/docs/modules/admin/assets/images/media/create-edi-accounts-2.png differ diff --git 
a/docs/modules/admin/assets/images/media/create-edi-accounts-3.png b/docs/modules/admin/assets/images/media/create-edi-accounts-3.png new file mode 100644 index 0000000000..6a883b4300 Binary files /dev/null and b/docs/modules/admin/assets/images/media/create-edi-accounts-3.png differ diff --git a/docs/modules/admin/assets/images/media/create-edi-accounts-4.png b/docs/modules/admin/assets/images/media/create-edi-accounts-4.png new file mode 100644 index 0000000000..8405fee96b Binary files /dev/null and b/docs/modules/admin/assets/images/media/create-edi-accounts-4.png differ diff --git a/docs/modules/admin/assets/images/media/create-edi-accounts-5.png b/docs/modules/admin/assets/images/media/create-edi-accounts-5.png new file mode 100644 index 0000000000..277d799480 Binary files /dev/null and b/docs/modules/admin/assets/images/media/create-edi-accounts-5.png differ diff --git a/docs/modules/admin/assets/images/media/cvmpage_4.jpg b/docs/modules/admin/assets/images/media/cvmpage_4.jpg new file mode 100644 index 0000000000..b5a4ff00ae Binary files /dev/null and b/docs/modules/admin/assets/images/media/cvmpage_4.jpg differ diff --git a/docs/modules/admin/assets/images/media/dvd.png b/docs/modules/admin/assets/images/media/dvd.png new file mode 100644 index 0000000000..b7222a870f Binary files /dev/null and b/docs/modules/admin/assets/images/media/dvd.png differ diff --git a/docs/modules/admin/assets/images/media/eaudio.png b/docs/modules/admin/assets/images/media/eaudio.png new file mode 100644 index 0000000000..d3f289d70f Binary files /dev/null and b/docs/modules/admin/assets/images/media/eaudio.png differ diff --git a/docs/modules/admin/assets/images/media/ebook.png b/docs/modules/admin/assets/images/media/ebook.png new file mode 100644 index 0000000000..e07e467193 Binary files /dev/null and b/docs/modules/admin/assets/images/media/ebook.png differ diff --git a/docs/modules/admin/assets/images/media/editrad_2.jpg b/docs/modules/admin/assets/images/media/editrad_2.jpg 
new file mode 100644 index 0000000000..d49200ada6 Binary files /dev/null and b/docs/modules/admin/assets/images/media/editrad_2.jpg differ diff --git a/docs/modules/admin/assets/images/media/enter-library-san-2.png b/docs/modules/admin/assets/images/media/enter-library-san-2.png new file mode 100644 index 0000000000..876cca0f07 Binary files /dev/null and b/docs/modules/admin/assets/images/media/enter-library-san-2.png differ diff --git a/docs/modules/admin/assets/images/media/enter-provider-san-1.png b/docs/modules/admin/assets/images/media/enter-provider-san-1.png new file mode 100644 index 0000000000..3f6037d2fe Binary files /dev/null and b/docs/modules/admin/assets/images/media/enter-provider-san-1.png differ diff --git a/docs/modules/admin/assets/images/media/enter-provider-san-2.png b/docs/modules/admin/assets/images/media/enter-provider-san-2.png new file mode 100644 index 0000000000..acd6f05ee5 Binary files /dev/null and b/docs/modules/admin/assets/images/media/enter-provider-san-2.png differ diff --git a/docs/modules/admin/assets/images/media/equip.png b/docs/modules/admin/assets/images/media/equip.png new file mode 100644 index 0000000000..39484cb44f Binary files /dev/null and b/docs/modules/admin/assets/images/media/equip.png differ diff --git a/docs/modules/admin/assets/images/media/event_def_details.png b/docs/modules/admin/assets/images/media/event_def_details.png new file mode 100644 index 0000000000..cfa21b72aa Binary files /dev/null and b/docs/modules/admin/assets/images/media/event_def_details.png differ diff --git a/docs/modules/admin/assets/images/media/event_def_details_2.png b/docs/modules/admin/assets/images/media/event_def_details_2.png new file mode 100644 index 0000000000..6bb189728e Binary files /dev/null and b/docs/modules/admin/assets/images/media/event_def_details_2.png differ diff --git a/docs/modules/admin/assets/images/media/evideo.png b/docs/modules/admin/assets/images/media/evideo.png new file mode 100644 index 
0000000000..f8788ea56e Binary files /dev/null and b/docs/modules/admin/assets/images/media/evideo.png differ diff --git a/docs/modules/admin/assets/images/media/kit.png b/docs/modules/admin/assets/images/media/kit.png new file mode 100644 index 0000000000..7b76d03f5c Binary files /dev/null and b/docs/modules/admin/assets/images/media/kit.png differ diff --git a/docs/modules/admin/assets/images/media/lpbook.png b/docs/modules/admin/assets/images/media/lpbook.png new file mode 100644 index 0000000000..1c2c44a4fc Binary files /dev/null and b/docs/modules/admin/assets/images/media/lpbook.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-address_alert_staff_view.png b/docs/modules/admin/assets/images/media/lsa-address_alert_staff_view.png new file mode 100644 index 0000000000..4e5e19e964 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-address_alert_staff_view.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-barcode_completion_admin.png b/docs/modules/admin/assets/images/media/lsa-barcode_completion_admin.png new file mode 100644 index 0000000000..cb24319a86 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-barcode_completion_admin.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-barcode_completion_fields.png b/docs/modules/admin/assets/images/media/lsa-barcode_completion_fields.png new file mode 100644 index 0000000000..651f4148d9 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-barcode_completion_fields.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-barcode_completion_multiple.png b/docs/modules/admin/assets/images/media/lsa-barcode_completion_multiple.png new file mode 100644 index 0000000000..071d0504a0 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-barcode_completion_multiple.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-statcat-1.png 
b/docs/modules/admin/assets/images/media/lsa-statcat-1.png new file mode 100644 index 0000000000..f54232ff5b Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-statcat-1.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-statcat-2.png b/docs/modules/admin/assets/images/media/lsa-statcat-2.png new file mode 100644 index 0000000000..63a5cb7da3 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-statcat-2.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-statcat-3.png b/docs/modules/admin/assets/images/media/lsa-statcat-3.png new file mode 100644 index 0000000000..f8ed82a5da Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-statcat-3.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-statcat-3a.png b/docs/modules/admin/assets/images/media/lsa-statcat-3a.png new file mode 100644 index 0000000000..724da96d5b Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-statcat-3a.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-statcat-4.png b/docs/modules/admin/assets/images/media/lsa-statcat-4.png new file mode 100644 index 0000000000..99aa0946c1 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-statcat-4.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-statcat-5.png b/docs/modules/admin/assets/images/media/lsa-statcat-5.png new file mode 100644 index 0000000000..076cb31185 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-statcat-5.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-statcat-6.png b/docs/modules/admin/assets/images/media/lsa-statcat-6.png new file mode 100644 index 0000000000..9dcdf35263 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-statcat-6.png differ diff --git a/docs/modules/admin/assets/images/media/lsa-statcat-8.png b/docs/modules/admin/assets/images/media/lsa-statcat-8.png new file mode 100644 index 
0000000000..0d69c9d0cc Binary files /dev/null and b/docs/modules/admin/assets/images/media/lsa-statcat-8.png differ diff --git a/docs/modules/admin/assets/images/media/lse-1.png b/docs/modules/admin/assets/images/media/lse-1.png new file mode 100644 index 0000000000..1dcbf28426 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lse-1.png differ diff --git a/docs/modules/admin/assets/images/media/lse-2.png b/docs/modules/admin/assets/images/media/lse-2.png new file mode 100644 index 0000000000..311eadd49e Binary files /dev/null and b/docs/modules/admin/assets/images/media/lse-2.png differ diff --git a/docs/modules/admin/assets/images/media/lse-3.png b/docs/modules/admin/assets/images/media/lse-3.png new file mode 100644 index 0000000000..e50598422e Binary files /dev/null and b/docs/modules/admin/assets/images/media/lse-3.png differ diff --git a/docs/modules/admin/assets/images/media/lse-4.png b/docs/modules/admin/assets/images/media/lse-4.png new file mode 100644 index 0000000000..286feac106 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lse-4.png differ diff --git a/docs/modules/admin/assets/images/media/lse-5.png b/docs/modules/admin/assets/images/media/lse-5.png new file mode 100644 index 0000000000..f79a001d63 Binary files /dev/null and b/docs/modules/admin/assets/images/media/lse-5.png differ diff --git a/docs/modules/admin/assets/images/media/map.png b/docs/modules/admin/assets/images/media/map.png new file mode 100644 index 0000000000..f9f804746f Binary files /dev/null and b/docs/modules/admin/assets/images/media/map.png differ diff --git a/docs/modules/admin/assets/images/media/marc_import_remove_fields1.jpg b/docs/modules/admin/assets/images/media/marc_import_remove_fields1.jpg new file mode 100644 index 0000000000..67e5731e65 Binary files /dev/null and b/docs/modules/admin/assets/images/media/marc_import_remove_fields1.jpg differ diff --git a/docs/modules/admin/assets/images/media/marc_import_remove_fields2.jpg 
b/docs/modules/admin/assets/images/media/marc_import_remove_fields2.jpg new file mode 100644 index 0000000000..71d658b6f0 Binary files /dev/null and b/docs/modules/admin/assets/images/media/marc_import_remove_fields2.jpg differ diff --git a/docs/modules/admin/assets/images/media/marc_import_remove_fields3.png b/docs/modules/admin/assets/images/media/marc_import_remove_fields3.png new file mode 100644 index 0000000000..841868d7d9 Binary files /dev/null and b/docs/modules/admin/assets/images/media/marc_import_remove_fields3.png differ diff --git a/docs/modules/admin/assets/images/media/marc_import_remove_fields5.jpg b/docs/modules/admin/assets/images/media/marc_import_remove_fields5.jpg new file mode 100644 index 0000000000..6acb2ab3ac Binary files /dev/null and b/docs/modules/admin/assets/images/media/marc_import_remove_fields5.jpg differ diff --git a/docs/modules/admin/assets/images/media/microform.png b/docs/modules/admin/assets/images/media/microform.png new file mode 100644 index 0000000000..6c4b2e1d6e Binary files /dev/null and b/docs/modules/admin/assets/images/media/microform.png differ diff --git a/docs/modules/admin/assets/images/media/modifycde_7.jpg b/docs/modules/admin/assets/images/media/modifycde_7.jpg new file mode 100644 index 0000000000..1354086606 Binary files /dev/null and b/docs/modules/admin/assets/images/media/modifycde_7.jpg differ diff --git a/docs/modules/admin/assets/images/media/multilingual_search1.png b/docs/modules/admin/assets/images/media/multilingual_search1.png new file mode 100644 index 0000000000..b88ca9e583 Binary files /dev/null and b/docs/modules/admin/assets/images/media/multilingual_search1.png differ diff --git a/docs/modules/admin/assets/images/media/multilingual_search2.PNG b/docs/modules/admin/assets/images/media/multilingual_search2.PNG new file mode 100644 index 0000000000..90f3dd4e64 Binary files /dev/null and b/docs/modules/admin/assets/images/media/multilingual_search2.PNG differ diff --git 
a/docs/modules/admin/assets/images/media/multilingual_search3.PNG b/docs/modules/admin/assets/images/media/multilingual_search3.PNG new file mode 100644 index 0000000000..f90c4ac94e Binary files /dev/null and b/docs/modules/admin/assets/images/media/multilingual_search3.PNG differ diff --git a/docs/modules/admin/assets/images/media/music.png b/docs/modules/admin/assets/images/media/music.png new file mode 100644 index 0000000000..132ca40b6f Binary files /dev/null and b/docs/modules/admin/assets/images/media/music.png differ diff --git a/docs/modules/admin/assets/images/media/new_event_def.png b/docs/modules/admin/assets/images/media/new_event_def.png new file mode 100644 index 0000000000..21fb860f32 Binary files /dev/null and b/docs/modules/admin/assets/images/media/new_event_def.png differ diff --git a/docs/modules/admin/assets/images/media/noncataloged_type_add.png b/docs/modules/admin/assets/images/media/noncataloged_type_add.png new file mode 100644 index 0000000000..9696573aa0 Binary files /dev/null and b/docs/modules/admin/assets/images/media/noncataloged_type_add.png differ diff --git a/docs/modules/admin/assets/images/media/permissions_1.png b/docs/modules/admin/assets/images/media/permissions_1.png new file mode 100644 index 0000000000..b0c4e450e7 Binary files /dev/null and b/docs/modules/admin/assets/images/media/permissions_1.png differ diff --git a/docs/modules/admin/assets/images/media/permissions_1a.png b/docs/modules/admin/assets/images/media/permissions_1a.png new file mode 100644 index 0000000000..9b3f81137a Binary files /dev/null and b/docs/modules/admin/assets/images/media/permissions_1a.png differ diff --git a/docs/modules/admin/assets/images/media/phonomusic.png b/docs/modules/admin/assets/images/media/phonomusic.png new file mode 100644 index 0000000000..0f21dd2862 Binary files /dev/null and b/docs/modules/admin/assets/images/media/phonomusic.png differ diff --git a/docs/modules/admin/assets/images/media/phonospoken.png 
b/docs/modules/admin/assets/images/media/phonospoken.png new file mode 100644 index 0000000000..32341cb19b Binary files /dev/null and b/docs/modules/admin/assets/images/media/phonospoken.png differ diff --git a/docs/modules/admin/assets/images/media/picture.png b/docs/modules/admin/assets/images/media/picture.png new file mode 100644 index 0000000000..e523300013 Binary files /dev/null and b/docs/modules/admin/assets/images/media/picture.png differ diff --git a/docs/modules/admin/assets/images/media/popbadge1_web_client.PNG b/docs/modules/admin/assets/images/media/popbadge1_web_client.PNG new file mode 100644 index 0000000000..53a35c01ce Binary files /dev/null and b/docs/modules/admin/assets/images/media/popbadge1_web_client.PNG differ diff --git a/docs/modules/admin/assets/images/media/popbadge2_web_client.PNG b/docs/modules/admin/assets/images/media/popbadge2_web_client.PNG new file mode 100644 index 0000000000..b2273ee463 Binary files /dev/null and b/docs/modules/admin/assets/images/media/popbadge2_web_client.PNG differ diff --git a/docs/modules/admin/assets/images/media/popbadge3_web_client.PNG b/docs/modules/admin/assets/images/media/popbadge3_web_client.PNG new file mode 100644 index 0000000000..bb06467104 Binary files /dev/null and b/docs/modules/admin/assets/images/media/popbadge3_web_client.PNG differ diff --git a/docs/modules/admin/assets/images/media/profile-5.png b/docs/modules/admin/assets/images/media/profile-5.png new file mode 100644 index 0000000000..bdafbca927 Binary files /dev/null and b/docs/modules/admin/assets/images/media/profile-5.png differ diff --git a/docs/modules/admin/assets/images/media/profile-6.png b/docs/modules/admin/assets/images/media/profile-6.png new file mode 100644 index 0000000000..5e3b429aee Binary files /dev/null and b/docs/modules/admin/assets/images/media/profile-6.png differ diff --git a/docs/modules/admin/assets/images/media/profile-7.png b/docs/modules/admin/assets/images/media/profile-7.png new file mode 100644 index 
0000000000..26fec660f1 Binary files /dev/null and b/docs/modules/admin/assets/images/media/profile-7.png differ diff --git a/docs/modules/admin/assets/images/media/radcatrue_5.jpg b/docs/modules/admin/assets/images/media/radcatrue_5.jpg new file mode 100644 index 0000000000..beaeefd194 Binary files /dev/null and b/docs/modules/admin/assets/images/media/radcatrue_5.jpg differ diff --git a/docs/modules/admin/assets/images/media/radcvmcacolumns_3.jpg b/docs/modules/admin/assets/images/media/radcvmcacolumns_3.jpg new file mode 100644 index 0000000000..be065174cb Binary files /dev/null and b/docs/modules/admin/assets/images/media/radcvmcacolumns_3.jpg differ diff --git a/docs/modules/admin/assets/images/media/radmvcolumn_1.jpg b/docs/modules/admin/assets/images/media/radmvcolumn_1.jpg new file mode 100644 index 0000000000..02b00944a6 Binary files /dev/null and b/docs/modules/admin/assets/images/media/radmvcolumn_1.jpg differ diff --git a/docs/modules/admin/assets/images/media/receipt1.png b/docs/modules/admin/assets/images/media/receipt1.png new file mode 100644 index 0000000000..1544c726b6 Binary files /dev/null and b/docs/modules/admin/assets/images/media/receipt1.png differ diff --git a/docs/modules/admin/assets/images/media/receipt2.png b/docs/modules/admin/assets/images/media/receipt2.png new file mode 100644 index 0000000000..3c53077ae0 Binary files /dev/null and b/docs/modules/admin/assets/images/media/receipt2.png differ diff --git a/docs/modules/admin/assets/images/media/score.png b/docs/modules/admin/assets/images/media/score.png new file mode 100644 index 0000000000..f7b5c7be42 Binary files /dev/null and b/docs/modules/admin/assets/images/media/score.png differ diff --git a/docs/modules/admin/assets/images/media/serial.png b/docs/modules/admin/assets/images/media/serial.png new file mode 100644 index 0000000000..ab751d5655 Binary files /dev/null and b/docs/modules/admin/assets/images/media/serial.png differ diff --git 
a/docs/modules/admin/assets/images/media/software.png b/docs/modules/admin/assets/images/media/software.png new file mode 100644 index 0000000000..a347513012 Binary files /dev/null and b/docs/modules/admin/assets/images/media/software.png differ diff --git a/docs/modules/admin/assets/images/media/storing_z3950_credentials.jpg b/docs/modules/admin/assets/images/media/storing_z3950_credentials.jpg new file mode 100644 index 0000000000..fadaa9ac8a Binary files /dev/null and b/docs/modules/admin/assets/images/media/storing_z3950_credentials.jpg differ diff --git a/docs/modules/admin/assets/images/media/test_event_def.png b/docs/modules/admin/assets/images/media/test_event_def.png new file mode 100644 index 0000000000..313acb96c6 Binary files /dev/null and b/docs/modules/admin/assets/images/media/test_event_def.png differ diff --git a/docs/modules/admin/assets/images/media/test_event_def_output.png b/docs/modules/admin/assets/images/media/test_event_def_output.png new file mode 100644 index 0000000000..ced6610637 Binary files /dev/null and b/docs/modules/admin/assets/images/media/test_event_def_output.png differ diff --git a/docs/modules/admin/assets/images/media/vhs.png b/docs/modules/admin/assets/images/media/vhs.png new file mode 100644 index 0000000000..3cd6780569 Binary files /dev/null and b/docs/modules/admin/assets/images/media/vhs.png differ diff --git a/docs/modules/admin/assets/images/media/vid1.PNG b/docs/modules/admin/assets/images/media/vid1.PNG new file mode 100644 index 0000000000..ed8955f2af Binary files /dev/null and b/docs/modules/admin/assets/images/media/vid1.PNG differ diff --git a/docs/modules/admin/assets/images/media/vid2.PNG b/docs/modules/admin/assets/images/media/vid2.PNG new file mode 100644 index 0000000000..b22d6383d2 Binary files /dev/null and b/docs/modules/admin/assets/images/media/vid2.PNG differ diff --git a/docs/modules/admin/assets/images/media/vid3.PNG b/docs/modules/admin/assets/images/media/vid3.PNG new file mode 100644 index 
0000000000..75ec4d5359 Binary files /dev/null and b/docs/modules/admin/assets/images/media/vid3.PNG differ diff --git a/docs/modules/admin/assets/images/media/vid4.PNG b/docs/modules/admin/assets/images/media/vid4.PNG new file mode 100644 index 0000000000..13690401bc Binary files /dev/null and b/docs/modules/admin/assets/images/media/vid4.PNG differ diff --git a/docs/modules/admin/assets/images/media/vid5.PNG b/docs/modules/admin/assets/images/media/vid5.PNG new file mode 100644 index 0000000000..1415605e6a Binary files /dev/null and b/docs/modules/admin/assets/images/media/vid5.PNG differ diff --git a/docs/modules/admin/assets/images/media/web_client_workstation_registration.png b/docs/modules/admin/assets/images/media/web_client_workstation_registration.png new file mode 100644 index 0000000000..7224672ca9 Binary files /dev/null and b/docs/modules/admin/assets/images/media/web_client_workstation_registration.png differ diff --git a/docs/modules/admin/assets/images/media/workstation_admin-1.jpg b/docs/modules/admin/assets/images/media/workstation_admin-1.jpg new file mode 100644 index 0000000000..0406e4afda Binary files /dev/null and b/docs/modules/admin/assets/images/media/workstation_admin-1.jpg differ diff --git a/docs/modules/admin/assets/images/media/workstation_admin-2.jpg b/docs/modules/admin/assets/images/media/workstation_admin-2.jpg new file mode 100644 index 0000000000..da0e056bce Binary files /dev/null and b/docs/modules/admin/assets/images/media/workstation_admin-2.jpg differ diff --git a/docs/modules/admin/assets/images/media/workstation_admin-3.png b/docs/modules/admin/assets/images/media/workstation_admin-3.png new file mode 100644 index 0000000000..b6485d4b99 Binary files /dev/null and b/docs/modules/admin/assets/images/media/workstation_admin-3.png differ diff --git a/docs/modules/admin/assets/images/media/workstation_admin-4.png b/docs/modules/admin/assets/images/media/workstation_admin-4.png new file mode 100644 index 0000000000..9d260b64bf 
Binary files /dev/null and b/docs/modules/admin/assets/images/media/workstation_admin-4.png differ diff --git a/docs/modules/admin/assets/images/media/workstation_admin-5.png b/docs/modules/admin/assets/images/media/workstation_admin-5.png new file mode 100644 index 0000000000..fe3f3dcf82 Binary files /dev/null and b/docs/modules/admin/assets/images/media/workstation_admin-5.png differ diff --git a/docs/modules/admin/assets/images/media/workstation_admin-6.jpg b/docs/modules/admin/assets/images/media/workstation_admin-6.jpg new file mode 100644 index 0000000000..4de9cf445e Binary files /dev/null and b/docs/modules/admin/assets/images/media/workstation_admin-6.jpg differ diff --git a/docs/modules/admin/assets/images/worklog.png b/docs/modules/admin/assets/images/worklog.png new file mode 100644 index 0000000000..22a1f1b2df Binary files /dev/null and b/docs/modules/admin/assets/images/worklog.png differ diff --git a/docs/modules/admin/pages/Best_Hold_Selection_Sort_Order.adoc b/docs/modules/admin/pages/Best_Hold_Selection_Sort_Order.adoc new file mode 100644 index 0000000000..680f681179 --- /dev/null +++ b/docs/modules/admin/pages/Best_Hold_Selection_Sort_Order.adoc @@ -0,0 +1,56 @@ +[#best_hold_selection_sort_order] += Best-Hold Selection Sort Order = +:toc: + +Best-Hold Selection Sort Order allows libraries to configure customized rules for Evergreen to use to select the best hold to fill at opportunistic capture. When an item is captured for a hold upon check-in, Evergreen evaluates the holds in the system that the item could fill. Evergreen uses a set of rules, or a Best-Hold Selection Sort Order, to determine the best hold to fill with the item. In previous versions of Evergreen, there were two sets of rules for Evergreen to use to determine the best hold to fulfill: Traditional and FIFO (First In, First Out). Traditional uses Org Unit Proximity to identify the nearest hold to fill. FIFO follows a strict order of first-in, first-out rules.
This feature allows new, custom Best-Hold Selection Sort Orders to be created. Existing Best-Hold Selection Sort Orders can also be modified. + + +== Preconfigured Best-Hold Orders == +Evergreen comes with six preconfigured Best-Hold Selection Sort Orders to choose from: + +* Traditional +* Traditional with Holds-go-home +* Traditional with Holds-always-go-home +* FIFO +* FIFO with Holds-go-home +* FIFO with Holds-always-go-home + +The Holds-go-home and Holds-always-go-home options allow libraries to determine how long they want to allow items to transit outside of the item’s home library before the items must return to their home library to fulfill any holds that are to be picked up there. Libraries can set this time limit in the library setting *Holds: Max foreign-circulation time*. The Library Settings Editor can be found under *Administration -> Local Administration -> Library Settings Editor*. + +== Create a New Best-Hold Selection Sort Order == +To create a new Best-Hold Selection Sort Order, go to *Administration -> Server Administration -> Best-Hold Selection Sort Order*. + +. Click *Create New*. +. Assign your Best-Hold Selection Sort Order a *Name*. +. Next, use the *Move Up* and *Move Down* buttons to arrange the fields in the order that you would like Evergreen to check when looking for the best hold to fill with an item at opportunistic capture. +. Click *Save Changes* to create your custom Best-Hold Selection Sort Order. + +image::media/best_hold_sort_order1.jpg[Best-Hold Selection Sort Order] + + +== Edit an Existing Best-Hold Selection Sort Order == +To edit an existing Best-Hold Selection Sort Order, go to *Administration -> Server Administration -> Best-Hold Selection Sort Order*. + +. Click *Edit Existing*. +. Choose the Best-Hold Selection Sort Order that you would like to edit from the drop-down menu. +. 
Next, use the *Move Up* and *Move Down* buttons to arrange the fields in the new order that you would like Evergreen to check when looking for the best hold to fill with an item at opportunistic capture. +. Click *Save Changes* to save your edits. + +== Choosing the Best-Hold Selection Sort Order == +The Best-Hold Selection Sort Order can be set for an Org Unit in the *Library Settings Editor*. + +To select the Best-Hold Selection Sort Order that your Org Unit will use: + +. Go to *Administration -> Local Administration -> Library Settings Editor*. +. Locate the setting *Holds: Best-hold selection sort order*, and click *Edit*. +. Choose the *Context* org unit for this setting. +. Select the Best-hold selection sort order, or *Value*, from the drop-down menu. +. Click *Update Setting*. + +image::media/best_hold_sort_order2.jpg[Library Settings Editor] + + +== Permissions to use this Feature == +To administer the custom Best-Hold Selection Sort Order interface, you need the following permission: + +* ADMIN_HOLD_CAPTURE_SORT diff --git a/docs/modules/admin/pages/MARC_Import_Remove_Fields.adoc b/docs/modules/admin/pages/MARC_Import_Remove_Fields.adoc new file mode 100644 index 0000000000..0776936c68 --- /dev/null +++ b/docs/modules/admin/pages/MARC_Import_Remove_Fields.adoc @@ -0,0 +1,54 @@ += MARC Import Remove Fields = +:toc: + +MARC Import Remove Fields allows staff to configure MARC tags to be automatically removed from bibliographic records when they are imported into Evergreen. This feature allows specific MARC tags to be removed from records that are imported through three different interfaces: + +* Cataloging -> Import Record from Z39.50 +* Cataloging -> MARC Batch Import/Export +* Acquisitions -> Load MARC Order Records + + +== Create a MARC Import Remove Fields profile == +To create a MARC Import Remove Fields profile, go to *Administration -> Server Administration -> MARC Import Remove Fields*. + +. Click *New Field Group*. +. 
Assign the Field Group a *Label*. This label will appear in the import interfaces. +. Assign an Org Unit *Owner*. +. Check the box next to *Always Apply* if you want Evergreen to apply this Remove Fields profile to all MARC records that are imported through the three affected interfaces. If you do not select *Always Apply*, staff will have the option to choose which Remove Fields profile to use when importing records. +. Click *Save*. +. The profile that you created will now appear in the list of MARC Import Remove Fields. +. Click on the hyperlinked *ID* number. This will bring you into the Remove Fields profile to configure the MARC tags to be removed. +. Click *New Field*. +. In the *Field*, enter the MARC tag to be removed. +. Click *Save*. +. Add *New Fields* until you have configured all the tags needed for this profile. +. Click *Return to Groups* to go back to the list of Remove Field profiles. + + +image::media/marc_import_remove_fields3.png[MARC Remove Fields Profile] + + +== Import Options == +The Label for each of the MARC Import Remove Fields profiles will appear on the three affected import screens. To select a profile, check the box next to the desired Label before importing the records. 
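For sites that want to audit these profiles outside the staff client, the profiles and their tags can also be inspected directly in the Evergreen database. The sketch below is a hypothetical example only: the table and column names (`vandelay.import_bib_trash_group`, `vandelay.import_bib_trash_field`, `grp`) are assumptions and should be verified against your Evergreen schema before use.

[source,sql]
----
-- Hypothetical sketch: list each Remove Fields profile with its MARC tags.
-- Table and column names are assumptions; confirm them against your schema.
SELECT g.label,
       g.always_apply,
       f.field AS marc_tag
  FROM vandelay.import_bib_trash_group g
  JOIN vandelay.import_bib_trash_field f ON f.grp = g.id
 ORDER BY g.label, f.field;
----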
+ +*Cataloging -> Import Record from Z39.50* + +image::media/marc_import_remove_fields1.jpg[Import Record from Z39.50] +{nbsp} + +*Cataloging -> MARC Batch Import/Export* + +image::media/marc_import_remove_fields2.jpg[MARC Batch Import/Export] +{nbsp} + +*Acquisitions -> Load MARC Order Records* + +image::media/marc_import_remove_fields5.jpg[Load MARC Order Records] + + +== Permissions to use this Feature == +The following permissions are required to use this feature: + +* CREATE_IMPORT_TRASH_FIELD +* UPDATE_IMPORT_TRASH_FIELD +* DELETE_IMPORT_TRASH_FIELD diff --git a/docs/modules/admin/pages/MARC_RAD_MVF_CRA.adoc b/docs/modules/admin/pages/MARC_RAD_MVF_CRA.adoc new file mode 100644 index 0000000000..ded3e27ccb --- /dev/null +++ b/docs/modules/admin/pages/MARC_RAD_MVF_CRA.adoc @@ -0,0 +1,214 @@ += MARC Record Attributes = +:toc: + +The MARC Record Attribute Definitions support the ingesting, indexing, searching, filtering, and delivering of bibliographic record attributes. + +To Access the MARC Record Attributes, click *Administration* -> *Server Administration* -> *MARC Record Attributes* + +== Managing Fixed Field Drop-down Context Menus == + +indexterm:[Fixed fields] +indexterm:[MARC editor,configuring] + +The MARC Editor includes Fixed Field Drop-down Context Menus, which make it easier for catalogers to select the right values for fixed fields +in both Bibliographic and Authority records. You can use the MARC Record Attributes interface to modify these dropdowns to make them better +suited for catalogers in your consortium. + +To edit these menus, you can follow these steps: + +. Click *Administration -> Server Administration -> MARC Record Attributes*. +. If there's not already a dropdown for your fixed field, click *New Attr. Definition* and fill out the form using other fixed field +attribute definitions as a model. +. If you can find an attribute definition for your fixed field in the list, click the "Manage" link in the Coded Value Maps column. +. 
Click *New Map*. +. In the SVF Attribute field, type the name of the Attribute you identified in steps 2-3. +. In the code field, type the actual value that will go into the fixed field (typically 1-4 characters). You can add an option to keep that fixed field empty by typing a space into this field. +. In the value field, type the short description you'd like your catalogers to see in the dropdown menu. +. Optional: add a longer description of this value in the Description field. +. Check the OPAC Visible checkbox. + + + +== Multi Valued Fields and Composite Record Attributes == + +*Multi Valued Fields* and *Composite Record Attributes* expand upon the Record Attribute Definitions feature to include capturing all occurrences of multi-valued elements in a record. *Multi Valued Fields* allows users to say that a bibliographic record contains multiple entries for a particular record attribute. *Composite Record Attributes* supports the application of a more complicated and nested form of structure to a record attribute definition. + +=== Multi Valued Fields === + +Multi Valued Fields allows for the capturing of multi-valued elements of a bibliographic record. Through the use of Multi Valued Fields, Evergreen recognizes that records are capable of storing multiple values. Multi Valued Fields are represented in the Record Attribute Definitions interface by a column named *Multi-valued?*. With *Multi-valued?* set to *True*, Evergreen will recognize the bibliographic records in the database that have multiple values mapping to the record attribute definition; it will also track and search on those values in the catalog. This feature will be particularly handy for bibliographic records representing a Blu-ray / DVD combo pack, since both format types can be displayed in the OPAC (if both formats were cataloged in the record). + +image::media/radmvcolumn_1.jpg[] + +To edit an existing record attribute definition and set the *Multi-valued?* field to *True*: + +. 
Click *Administration* on the menu bar +. Click *Server Administration*, then click *MARC Record Attributes* +. Double-click on the row of the record attribute definition that needs to be edited +. Select the *Multi-valued?* checkbox +. Click *Save* + +image::media/editrad_2.jpg[] + +=== Composite Record Attributes === + +Composite Record Attributes build on top of Evergreen’s ability to support record attributes that contain multiple entries. The Composite Record Attributes feature enables administrators to take a record attribute definition and apply a more complicated and nested form of structure to that particular record attribute. Two new Record Attribute Definitions columns have been added to facilitate the management of the Composite Record Attributes. The *Composite attribute?* column designates whether or not a particular record attribute definition is also a composite record attribute. The *Coded Value Maps* column contains a *Manage* link in each row that allows users to manage the Coded Value Maps for the record attributes. + +image::media/radcvmcacolumns_3.jpg[] + +=== Coded Value Maps === + +To manage the Coded Value Maps of a particular record attribute definition, click the *Manage* link located under the Coded Value Maps column for that record attribute. This will open the Coded Value Maps interface. What administrators see on the Coded Value Maps screen does not define the structure of the composite record attribute; they must go into the *Composite Attribute Entry Definitions* screen to view this information. + +image::media/cvmpage_4.jpg[] + +Within the Coded Value Maps screen, there is a column named *Composite Definition*. The *Composite Definition* column contains a *Manage* link that allows users to configure and to edit Composite Record Attribute definitions. In order to enable the *Manage* link (i.e. 
have the *Manage* link display as an option under the *Composite Definition* column), the *Composite attribute?* column (located back in the Record Attributes Definition page) must be set to *True*. + +To edit an existing record attribute definition and set the *Composite attribute?* field to *True*: + +. Click *Administration* on the menu bar +. Click *Server Administration*, then click *MARC Record Attributes* +. Double-click on the row of the record attribute definition that needs to be edited +. Select the *Composite attribute?* checkbox +. Click *Save* + +image::media/radcatrue_5.jpg[] + +Now that the *Composite attribute?* value is set to *True*, click on the *Manage* link located under the *Coded Value Maps* column for the edited record attribute definition. Back in the Coded Value Maps screen, a *Manage* link should now be exposed under the *Composite Definition* column. Clicking on a specific coded value’s *Manage* link will take the user into the *Composite Attribute Entry Definitions* screen for that specified coded value. + +=== Composite Attribute Entry Definitions === + +The Composite Attribute Entry Definitions screen is where administrators can locally define and edit Composite Record Attributes for specific coded values. For example: administrators can further refine and distinguish the way a “book” should be defined within their database, by bringing the right combination of attributes together to truly define what a “book” is in their database. + +The top of the Composite Attribute Entry Definitions screen shows a parenthetically defined view of the *Composite Data Expression*. Below the Composite Data Expression is the *Composite Data Tree*. The Composite Data Tree is structured off of Boolean Operators, including the support of NOT operations. This nested form can be as deeply defined as it needs to be within the site’s database. 
+ +image::media/caed_6.jpg[] + +To modify the *Composite Attribute Entry Definition*, any Boolean Operator can be deleted or have a coded value appended to it. The appended coded value can be any number of Coded Value Maps from any other Record Attribute Definition. Administrators can therefore choose from all the other existing record attribute definitions and create new nested structures to define entirely new data types. + +To modify the *Composite Attribute Entry Definition*: + +. Click *Add Child* for the specific Boolean Operator that needs to be modified, and a new window will open +. Select which *Record Attribute* needs to be represented in the structure under that particular Boolean Operator +. Select the *Attribute Type* from the dropdown options +. Select the *Value* of the Attribute Type from the dropdown options (dropdown options will be based on the Attribute Type selected) +. Click *Submit* +. The *Composite Data Expression* should now include the modification +. Once all modifications have been made, click *Save Changes* on the Composite Attribute Entry Definitions page + +image::media/modifycde_7.jpg[] + +=== Search and Icon Formats === + +==== Search and Icon Formats ==== + +The table below shows all the search and icon formats. In some cases they vary slightly, with the icon format being more restrictive. For example, a search for "All Books" will include Large Print books, but a Large Print book displays only the "Large Print Book" icon rather than both the "Book" and "Large Print Book" icons. + +In the table below, "Icon Format Only" portions of the definition are italicized and in square brackets: [_Icon format only data_] + +The definitions use the <> at the end of this document.
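The Book/Large Print interplay can be expressed as simple membership tests against a record's fixed-field values, using the "All books/Book" row of the table below. This is an illustrative sketch only; the function names are invented and this is not Evergreen's implementation:

```python
# Illustrative check of the "Book" search vs. icon format described above.
# The value sets mirror the table below; function names are assumptions.

def matches_book_search(item_type, bib_level, item_form):
    # Search format: Item Type a,t AND Bib Level a,c,d,m,
    # NOT Item Form a,b,c,f,o,q,r,s
    return (item_type in set("at")
            and bib_level in set("acdm")
            and item_form not in set("abcfoqrs"))

def matches_book_icon(item_type, bib_level, item_form):
    # Icon format additionally excludes Item Form 'd' (large print)
    return (matches_book_search(item_type, bib_level, item_form)
            and item_form != "d")

# A large print book (Item Form 'd') is found by an "All Books" search...
print(matches_book_search("a", "m", "d"))  # True
# ...but does not get the plain "Book" icon.
print(matches_book_icon("a", "m", "d"))    # False
```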
+ +[width="60%", cols="<,<,<"] +|==== +|*Icon* |*Search Label/Icon Label* |*Definition* +|image:media/blu-ray.png[] | Blu-ray | VR Format:s +|image:media/book.png[] | All books/Book | Item Type: a,t + +Bib Level: a,c,d,m + +NOT: Item Form: a,b,c,f,o,q,r,s _[,d]_ +|image:media/braille.png[] | Braille | Item Type: a + +Item Form: f +|image:media/casaudiobook.png[] | Cassette audiobook | Item Type: i + +SR Format: l +|image:media/casmusic.png[] | Audiocassette music recording | Item Type: j + +SR Format: l +|image:media/cdaudiobook.png[] | CD audiobook | Item Type: i + +SR Format: f +|image:media/cdmusic.png[] | CD music recording | Item Type: j + +SR Format: f +|image:media/dvd.png[] | DVD | VR Format: v +|image:media/eaudio.png[] | E-audio | Item Type: i + +Item Form: o,q,s +|image:media/ebook.png[]| E-book | Item Type: a,t + +Bib Level: a,c,d,m + +Item Form: o,q,s +|image:media/equip.png[] | Equipment, games, toys | Item Type: r +|image:media/evideo.png[] | E-video | Item Type: g + +Item Form: o,q,s +|image:media/kit.png[] | Kit | Item Type: o,p +|image:media/lpbook.png[] | Large print book | Item Type: a,t + +Bib Level: a,c,d,m + +Item Form: d +|image:media/map.png[] | Map | Item Type: e,f +|image:media/microform.png[] | Microform | Item Form: a,b,c +|image:media/music.png[] | All music/Music sound recording (unknown format) | Item Type: j + +_[NOT: SR Format: a,b,c,d,e,f,l]_ +|image:media/phonomusic.png[] | Phonograph music recording | Item Type: j + +SR Format: a,b,c,d,e +|image:media/phonospoken.png[] | Phonograph spoken recording | Item Type: i + +SR Format: a,b,c,d,e +|image:media/picture.png[] | Picture | Item type: k +|image:media/score.png[] | Music score | Item type: c,d +|image:media/serial.png[] | Serials and magazines | Bib Level: b,s +|image:media/software.png[] | Software and video games | Item Type: m +|image:media/vhs.png[] | VHS | VR Format: b +|==== + +[[anchor-2]] +==== Record Types ==== + +This table shows the record types currently used in 
determining elements of search and icon formats. They are based on a combination of the MARC Record Type (LDR 06) and Bibliographic Level (LDR 07) fixed fields. + +[width="30%", cols="<,<,<"] +|==== +| *Record Type* | *LDR 06* | *LDR 07* +| BKS | a,t | a,c,d,m +| MAP | e,f | a,b,c,d,i,m,s +| MIX | p | c,d,i +| REC | i,j | a,b,c,d,i,m,s +| SCO | c,d | a,b,c,d,i,m,s +| SER | a | b,i,s +| VIS | g,k,r,o | a,b,c,d,i,m,s +|==== + +[[anchor-1]] +===== Fixed Field Types ===== +This table details the fixed field types currently used for determining search and icon formats. See the <> section above for how the system determines them. + +[width="40%", cols="<,<,<,<"] +|==== +| *Label* | *Record Type* | *Tag* | *Position* +|Item Type | ANY | LDR | 06 +|Bib Level | ANY | LDR | 07 +.14+^.^| Item Format .2+^.^| BKS | 006 | 06 +| 008 | 23 +.2+^.^| MAP | 006 | 12 +|008 | 29 +.2+^.^| MIX | 006 | 06 +| 008 | 23 +.2+^.^| REC | 006 | 06 +| 008 | 23 +.2+^.^| SCO | 006 |06 +| 008 | 23 +.2+^.^| SER | 006 | 06 +| 008 | 23 +.2+^.^| VIS | 006 | 12 +| 008 | 29 +| SR Format | ANY | 007s | 03 +| VR Format | ANY | 007v | 04 +|==== + diff --git a/docs/modules/admin/pages/Org_Unit_Proximity_Adjustments.adoc b/docs/modules/admin/pages/Org_Unit_Proximity_Adjustments.adoc new file mode 100644 index 0000000000..a091e78e97 --- /dev/null +++ b/docs/modules/admin/pages/Org_Unit_Proximity_Adjustments.adoc @@ -0,0 +1,49 @@ += Org Unit Proximity Adjustments = +:toc: + +== Org Unit Proximity Adjustments == + +Org Unit Proximity Adjustments allow libraries to indicate lending preferences for holds between libraries in +an Evergreen consortium. When a hold is placed in Evergreen, the hold targeter looks for items that can fill +the hold. One factor that the hold targeter uses to choose the best item to fill the hold is the distance, +or proximity, between the capturing library and the pickup library for the request. 
The proximity is based +on the number of steps through the org tree that it takes to get from one org unit to another. + +image::media/Org_Unit_Prox_Adj1.png[Org Unit Proximity] +Org Unit Proximity between BR1 and BR4 = 4 + +Org Unit Proximity Adjustments allow libraries to customize the distances between org units, which provides +more control over which libraries are looked at when targeting copies to fill a hold. Evergreen can also be +configured to take Org Unit Proximity Adjustments into account during opportunistic capture through the +creation of a custom Best-Hold Selection Sort Order. See documentation xref:#best_hold_selection_sort_order[here] +for more information on Best-Hold Selection Sort Order. + +An Org Unit Proximity Adjustment can be created to tell Evergreen which libraries to look at first for items to fill a hold or which library to look at last. This may be useful for accounting for true transit costs or physical distances between libraries. It can also be used to identify libraries that have special lending agreements or preferences. Org Unit Proximity Adjustments can be created for all holds between two org units, or they can be created for holds on specific Shelving Locations and Circulation Modifiers. + +== Absolute and Relative Adjustments == +Two types of proximity adjustments can be created in Evergreen: Absolute adjustments and Relative adjustments. + +Absolute proximity adjustments allow you to replace the default proximity distance between two org units. An absolute adjustment could be made to tell the hold targeter to look at a specific library or library system first to find an item to fill a hold, before looking elsewhere in the consortium. + +Relative proximity adjustments allow the proximity between org units to be treated as closer or farther from one another than the default distance.
A relative proximity adjustment could be used to identify a library that has limited hours or slow transit times to tell the hold targeter to look at that library last for items to fill a hold. + +== Create an Org Unit Proximity Adjustment == +.To create an Org Unit Proximity Adjustment between two libraries: +. In the Administration menu choose *Server Administration -> Org Unit Proximity Adjustments*. +. Click *New OU Proximity Adjustment*. +. Choose an *Item Circ Lib* from the drop down menu. +. Choose a *Hold Request Lib* from the drop down menu. +. If this proximity adjustment applies to a specific shelving location, select the appropriate *Shelving Location* from the drop down menu. +. If this proximity adjustment applies to a specific material type, select the appropriate *Circ Modifier* from the drop down menu. +. If this is an Absolute proximity adjustment, check the box next to *Absolute adjustment?* If you leave the box blank, a relative proximity adjustment will be applied. +. Enter the *Proximity Adjustment* between the *Item Circulating Library* and the *Request Library*. +. Click *Save*. + +image::media/Org_Unit_Prox_Adj2.png[Org Unit Proximity Adjustment] + +This will create a one-way proximity adjustment between Org Units. In this example, this adjustment will apply to items requested by a patron at BR4 and filled at BR1. To create the reciprocal proximity adjustment, for items requested at BR1 and filled at BR4, create a second proximity adjustment between the two Org Units.
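Putting the pieces together: the default proximity is the number of steps through the org tree, an absolute adjustment replaces that distance, and a relative adjustment offsets it. A minimal sketch, assuming a toy consortium tree (the tree shape and function names are illustrative, not Evergreen's internals):

```python
# Sketch of org-unit proximity as steps through the org tree, plus an
# adjustment, as described above. The tree and function names are
# assumptions for illustration only.

PARENT = {"BR1": "SYS1", "BR4": "SYS2", "SYS1": "CONS", "SYS2": "CONS", "CONS": None}

def ancestors(ou):
    """Path from an org unit up to the root of the tree."""
    path = []
    while ou is not None:
        path.append(ou)
        ou = PARENT[ou]
    return path

def proximity(a, b):
    """Steps from a up to the nearest common ancestor and back down to b."""
    up, down = ancestors(a), ancestors(b)
    common = next(ou for ou in up if ou in down)
    return up.index(common) + down.index(common)

def adjusted(a, b, adjustment=0, absolute=False):
    # Absolute adjustments replace the distance; relative ones offset it.
    return adjustment if absolute else proximity(a, b) + adjustment

print(proximity("BR1", "BR4"))          # 4, as in the example above
print(adjusted("BR1", "BR4", -2))       # 2: relative adjustment, treated as closer
print(adjusted("BR1", "BR4", 1, True))  # 1: absolute adjustment replaces the distance
```

The unadjusted result matches the BR1-to-BR4 proximity of 4 shown in the figure above (branch → system → consortium → system → branch).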
+ +== Permissions to use this Feature == +To create Org Unit Proximity Adjustments, you will need the following permission: + +* ADMIN_PROXIMITY_ADJUSTMENT diff --git a/docs/modules/admin/pages/SMS_messaging.adoc b/docs/modules/admin/pages/SMS_messaging.adoc new file mode 100644 index 0000000000..8087a0c085 --- /dev/null +++ b/docs/modules/admin/pages/SMS_messaging.adoc @@ -0,0 +1,125 @@ += SMS Text Messaging = +:toc: + +The SMS Text Messaging feature enables users to receive hold notices via text message. Users can opt in to this hold notification as their default setting for all holds, or they +can receive specific hold notifications via text message. Users can also send call numbers and item locations via text message. + +[#administrative_setup] +== Administrative Setup == + +You cannot receive text messages from Evergreen by default. You must enable this feature to receive hold notices and item information from Evergreen via text message. + +=== Enable Text Messages === + +. Click *Administration* -> *Local Administration* -> *Library Settings Editor.* +. Select the setting, *Enable features that send SMS text messages.* +. Set the value to *True,* and click *Update Setting.* + +image::media/SMS_Text_Messaging1.png[Library Setting to enable SMS] + +=== Authenticate Patrons === + +By default, you must be logged into your OPAC account to send a text message +from Evergreen. However, if you turn on this setting, you can text message copy +information without having to log in to your OPAC account. + +To disable the patron login requirement: + +. Click *Administration* -> *Local Administration* -> *Library Settings Editor.* +. Select the setting, *Disable auth requirement for texting call numbers*. +.
Set the value to *True,* and click *Update Setting.* + +image::media/SMS_Text_Messaging2.png[Library Setting to disable SMS auth/login requirement] + +=== Configure SMS Carriers === + +A list of SMS carriers that can transmit text messages to users is available in the staff client. Library staff can edit this list, or add new carriers. + +To add or edit SMS carriers: + +. Click *Administration* -> *Server Administration* -> *SMS Carriers*. +. To add a new carrier, click the *New Carrier* button in the top right corner of the screen. To edit an existing carrier, double click in any white space in the carrier's row. ++ +image::media/SMS_Text_Messaging3.jpg[SMS_Text_Messaging3] ++ +. Enter a (geographical) *Region*. +. Enter the carrier's *Name*. +. Enter an *Email Gateway.* The SMS carrier can provide you with the content for this field. The $number field is converted to the user's phone number when the text message is generated. +. Check the *Active* box to use this SMS Carrier. + +image::media/SMS_Text_Messaging4.jpg[SMS_Text_Messaging4] + +=== Configure Text Message Templates === + +Library staff control the content and format of text messages through the templates in Notifications/Action Triggers. Patrons cannot add free text to their text messages. + +To configure the text of the SMS text message: + +. Click *Administration* -> *Local Administration* -> *Notifications/Action Triggers.* +. Create a new A/T and template, or use or modify an existing template. For example, a default template, "Hold Ready for Pickup SMS Notification," notifies users that the hold is ready for pickup. ++ +image::media/SMS_Text_Messaging5.png[SMS Notification Triggers list] ++ +. You can use the default template, or you can edit the template and add +content specific to your library. Click the hyperlinked name to edit the +Event Environment and Event Parameters. Or double-click the row to edit the +hold notice. 
++ +image::media/SMS_Text_Messaging6.png[Hold Ready SMS Trigger Event Definition] + +== Receiving Holds Notices via Text Message == + +You can receive notification that your hold is ready for pickup from a text message that is sent to your mobile phone. + +. Log in to your account. ++ +image::media/SMS_Text_Messaging12.jpg[SMS_Text_Messaging12] ++ +. Search the catalog. +. Retrieve a record, and click the *Place Hold* link. +. Select the option to retrieve hold notification via text message. +. Choose an SMS Carrier from the drop down menu. NOTE: You can enter your SMS carrier and phone number into your *Account Preferences* to skip steps five and six. +. Enter a phone number. +. Click *Submit.* + +image::media/SMS_Text_Messaging13.jpg[SMS_Text_Messaging13] + +[[Sending_Copy_Details_via_Text_Message]] +== Sending Copy Details via Text Message == + +You can search the catalog for an item, and, after retrieving results +for the item, click a hyperlink to send the copy information in a text +message. + +. Log in to your account in the OPAC. NOTE: If you have disabled the +setting that requires patron login, then you do not have to log in to +your account to send text messages. See +xref:#administrative_setup[Administrative Setup] for more information. ++ +image::media/SMS_Text_Messaging7.jpg[SMS_Text_Messaging7] ++ +. Search the catalog, and retrieve a title with copies. +. Click the *Text* link next to the call number. ++ +image::media/SMS_Text_Messaging8.png[Screenshot: Link to text copy details via SMS] ++ +. The text of the SMS Text Message appears. ++ +image::media/SMS_Text_Messaging9.png[Screenshot: Text message preview with submit form] ++ +. Choose an SMS Carrier from the drop down menu. NOTE: You can enter +your SMS carrier and phone number into your *Account Preferences* to +skip steps five and six. +. Enter a phone number. +. Click *Submit*. NOTE: Message and data rates may apply. +.
The number and carrier are converted to an email address, and the text +message is sent to your mobile phone. The following confirmation message +will appear. ++ +image::media/SMS_Text_Messaging11.png[Screenshot: Confirmation page that SMS message was sent] + +*Permissions to use this Feature* + +ADMIN_SMS_CARRIER - Enables users to add/create/delete SMS Carrier entries. + + diff --git a/docs/modules/admin/pages/_attributes.adoc b/docs/modules/admin/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/admin/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/admin/pages/acquisitions_admin.adoc b/docs/modules/admin/pages/acquisitions_admin.adoc new file mode 100644 index 0000000000..d23433c62d --- /dev/null +++ b/docs/modules/admin/pages/acquisitions_admin.adoc @@ -0,0 +1,961 @@ += Acquisitions Administration = +:toc: + + +== Acquisitions Settings == + +indexterm:[acquisitions,permissions] + +Several settings in the Library Settings area of the Administration module pertain to +functions in the Acquisitions module. You can access these settings by clicking +_Administration -> Local Administration -> Library Settings Editor_. + +* CAT: Delete bib if all copies are deleted via Acquisitions lineitem +cancellation - If you cancel a line item, then all of the on order copies in the +catalog are deleted. If, when you cancel a line item, you also want to delete +the bib record, then set this setting to TRUE. +* Allow funds to be rolled over without bringing the money along - enables you +to move a fund's encumbrances from one year to the next without moving unspent +money. Unused money is not added to the next year's fund and is not available +for use. +* Allows patrons to create automatic holds from purchase requests.
+* Default circulation modifier - This modifier would be applied to items that +are created in the acquisitions module +* Default copy location - This copy location would be applied to items that are +created in the acquisitions module +* Fund Spending Limit for Block - When the amount remaining in the fund, +including spent money and encumbrances, goes below this percentage, attempts to +spend from the fund will be blocked. +* Fund Spending Limit for Warning - When the amount remaining in the fund, +including spent money and encumbrances, goes below this percentage, attempts to +spend from the fund will result in a warning to the staff. +* Rollover Distribution Formulae Funds - When set to true, during fiscal +rollover, all distribution formulae will update to use new funds. +* Set copy creator as receiver - When receiving a copy in acquisitions, set the +copy "creator" to be the staff that received the copy +* Temporary barcode prefix - Temporary barcode prefix for items that are created +in the acquisitions module +* Temporary call number prefix - Temporary call number prefix for items that are +created in the acquisitions module + +== Cancel/Delay reasons == + +indexterm:[acquisitions,purchase order,cancellation] +indexterm:[acquisitions,line item,cancellation] + +The Cancel reasons link enables you to predefine the reasons for which a line +item or a PO can be cancelled. A default list of reasons appears, but you can +add custom reasons to this list. Applying the cancel reason will prevent the +item from appearing in a claims list and will allow you to cancel debits +associated with the purchase. Cancel reasons also enable you to delay +a purchase. For example, you could create a cancel reason of 'back ordered,' and +you could choose to keep the debits associated with the purchase. + +=== Create a cancel/delay reason === + +. To add a new cancel reason, click _Administration -> Acquisitions Administration -> +Cancel reasons_. + +. Click _New Cancel Reason_. 
+ +. Select a using library from the drop-down menu. The using library indicates +the organizational units whose staff can use this cancel reason. This menu is +populated with the shortnames that you created for your libraries in the +organizational units tree (See Administration -> Server Administration -> Organizational +Units.) + +. Create a label for the cancel reason. This label will appear when you select a +cancel reason on an item or a PO. + +. Create a description of the cancel reason. This is a free text field and can +comprise any text of your choosing. + +. If you want to retain the debits associated with the cancelled purchase, click +the box adjacent to Keep Debits. + +. Click _Save_. + +=== Delete a custom cancel/delay reason === + +You can delete custom cancel reasons. + +. Select the checkbox for the custom cancel reason that should be deleted. + +. Click the _Delete Selected_ button. + +[TIP] +You cannot select the checkbox for any of the default cancel reasons because the +system expects those reasons to be available to handle EDI order responses. + + +== Claiming == + +indexterm:[acquisitions,claiming] + +Currently, all claiming is manual, but the admin module enables you to build +claim policies and specify the action(s) that users should take to claim items. + +=== Create a claim policy === + +The claim policy link enables you to name the claim policy and specify the +organization that owns it. + +. To create a claim policy, click _Administration -> Acquisitions Administration -> +Claim Policies_. +. Create a claim policy name. No limits exist on the number of characters that +can be entered in this field. +. Select an org unit from the drop-down menu. The org unit indicates the +organizational units whose staff can use this claim policy. This menu is +populated with the shortnames that you created for your libraries in the +organizational units tree (See Administration -> Server Administration -> Organizational +Units).
++ +[NOTE] +The rule of parental inheritance applies to this list. ++ +. Enter a description. No limits exist on the number of characters that can be +entered in this field. +. Click _Save_. + +=== Create a claim type === + +The claim type link enables you to specify the reason for a type of claim. + +. To create a claim type, click _Administration -> Acquisitions Administration -> +Claim types_. +. Create a claim type. No limits exist on the number of characters that can be +entered in this field. +. Select an org unit from the drop-down menu. The org unit indicates the +organizational units whose staff can use this claim type. This menu is populated +with the shortnames that you created for your libraries in the organizational +units tree (See Administration -> Server Administration -> Organizational Units). ++ +[NOTE] +The rule of parental inheritance applies to this list. ++ +. Enter a description. No limits exist on the number of characters that can be +entered in this field. +. Click _Save_. + +=== Create a claim event type === + +The claim event type describes the physical action that should occur when an +item needs to be claimed. For example, the user should notify the vendor via +email that the library is claiming an item. + +. To access the claim event types, click _Administration -> Acquisitions Administration -> +Claim event type_. +. Enter a code for the claim event type. No limits exist on the number of +characters that can be entered in this field. +. Select an org unit from the drop-down menu. The org unit indicates the +organizational units whose staff can use this event type. This menu is populated +with the shortnames that you created for your libraries in the organizational +units tree (See Administration -> Server Administration -> Organizational Units). ++ +[NOTE] +The rule of parental inheritance applies to this list. ++ +. Enter a description. No limits exist on the number of characters that can be +entered in this field. +. 
If this claim is initiated by the user, then check the box adjacent to Library +Initiated. ++ +[NOTE] +Currently, all claims are initiated by a user. The ILS cannot automatically +claim an issue. ++ +. Click _Save_. + +=== Create a claim policy action === + +The claim policy action enables you to specify how long a user should wait +before claiming the item. + +. To access claim policy actions, click _Administration -> Acquisitions Administration -> +Claim Policy Actions_. + +. Select an Action (Event Type) from the drop-down menu. + +. Enter an action interval. This field indicates how long a user should wait +before claiming the item. + +. In the Claim Policy ID field, select a claim policy from the drop-down menu. + +. Click _Save_. + +[NOTE] +You can create claim cycles by adding multiple claim policy actions to a claim + policy. + +== Currency Types == + +indexterm:[acquisitions,currency types] + +Currency types can be created and applied to funds in the administrative module. +When a fund is applied to a copy or line item for purchase, the item will be +purchased in the currency associated with that fund. + + + +=== Create a currency type === + +. To create a new currency type, click _Administration -> Acquisitions Administration -> +Currency types_. + +. Enter the currency code. No limits exist on the number of characters that can +be entered in this field. + +. Enter the name of the currency type in Currency Label field. No limits exist +on the number of characters that can be entered in this field. + +. Click Save. + + + +=== Edit a currency type === + +. To edit a currency type, click your cursor in the row that you want to edit. +The row will turn blue. + +. Double click. The pop-up box will appear, and you can edit the fields. + +. After making changes, click Save. + +[NOTE] +From the currency types interface, you can delete currencies that have never +been applied to funds or used to make purchases. 
+ +== Distribution Formulas == + +indexterm:[acquisitions,distribution formulas, templates] + +Distribution formulas allow you to specify the number of copies that should be +distributed to specific branches. They can also serve as templates allowing you +to predefine settings for your copies. You can create and reuse formulas as +needed. + +=== Create a distribution formula === + +. Click _Administration -> Acquisitions Administration -> Distribution Formulas_. +. Click _New Formula_. +. Enter a Formula Name. No limits exist on the number of characters that can be +entered in this field. +. Choose a Formula Owner from the drop-down menu. The Formula Owner indicates +the organizational units whose staff can use this formula. This menu is +populated with the shortnames that you created for your libraries in the +organizational units tree (See Administration -> Server Administration -> Organizational +Units). ++ +[NOTE] +The rule of parental inheritance applies to this list. ++ +. Ignore the Skip Count field which is currently not used. +. Click _Save_. +. Click _New Entry_. +. Select an Owning Library from the drop-down menu. This indicates the branch +that will receive the items. This menu is populated with the shortnames that you +created for your libraries in the organizational units tree (See _Administration -> +Server Administration -> Organizational Units_). +. Select/enter any of the following copy details you want to predefine in the +distribution formula. +* Copy Location +* Fund +* Circ Modifier +* Collection Code +. In the Item Count field, enter the number of items that should be distributed +to the branch. You can enter the number or use the arrows on the right side of +the field. +. Click _Apply Changes_. The screen will reload. +. To view the changes to your formula, click Administration -> +Acquisitions Administration -> Distribution Formulas. The item_count will reflect +the entries to your distribution formula. 
+ +[NOTE] +To edit the Formula Name, click the hyperlinked name of the formula in the top +left corner. A pop-up box will enable you to enter a new formula name. + +=== Edit a distribution formula === + +To edit a distribution formula, click the hyperlinked title of the formula. + +== Electronic Data Interchange == +indexterm:[acquisitions,EDI,accounts] +indexterm:[EDI,accounts] + +Many libraries use Electronic Data Interchange (EDI) accounts to send purchase orders and receive invoices + from providers electronically. In Evergreen, users can set up EDI accounts and manage EDI messages in + the admin module. EDI messages and notes can be viewed in the acquisitions module. See +also the command line system administration manual, which includes some initial setup steps that are +required for use of EDI. + +=== Entering SANs (Standard Address Numbers) === + +For EDI to work, your library must have a SAN, and each of your providers must supply you with their SAN. + +A SAN (Standard Address Number) is a unique 7-digit number that identifies your library. + +==== Entering a Library's SAN ==== + +These steps only need to be done once per library. + +. In Evergreen select _Administration_ -> _Server Administration_ -> _Organizational Units_ +. Find your library in the tree on the left side of the page and click on it to open the settings. ++ +[NOTE] +Multi-branch library systems will see an entry for each branch but should select their system's +top organization unit. ++ +. Click on the _Address_ tab. +. Click on the _Mailing Address_ tab. +. Enter your library's SAN in the field labeled _SAN_. +. Click _Save_. + +image::media/enter-library-san-2.png[Enter Library SAN] + + +==== Entering a Provider's SAN ==== + +These steps need to be repeated for every provider with which EDI is used. + +. In Evergreen select _Administration_ -> _Acquisitions Administration_ -> _Providers_. +. Click the hyperlinked name of the provider you would like to edit.
++ +image::media/enter-provider-san-1.png[Enter Provider SAN] + +. Enter your provider's SAN in the field labeled _SAN_. +. Click _Save_. ++ +image::media/enter-provider-san-2.png[Enter Provider SAN] + +=== Create an EDI Account === + +CAUTION: You *must* create your provider before you create an EDI account for the provider. + +. Contact your provider requesting the following information: +* Host +* Username +* Password +* Path +* Incoming Directory +* Provider's SAN + + +. In Evergreen select _Administration_ -> _Acquisitions Administration_ -> _EDI Accounts_. +. Click _New Account_. A pop-up will appear. ++ +image::media/create-edi-accounts-2.png[Create EDI Account] + +. Fill in the following fields: +* In the _Label_ field, enter a name for the EDI account. +* In the _Host_ field, enter the requisite FTP or SCP information supplied by +your provider. Be sure to include the protocol (e.g. `ftp://ftp.vendorname.com`) +* In the _Username_ field, enter the username supplied by your provider. +* In the _Password_ field, enter the password supplied by your provider. +* Select your library as the _Owner_ from the drop down menu. Multi-branch libraries should select their top level organizational + unit. +* The _Last Activity_ updates automatically with any inbound or outbound communication. +* In the _Provider_ field, enter the code used in Evergreen for your provider. +* In the _Path_ field, enter the path supplied by your provider. The path indicates a directory on +the provider's server where Evergreen will deposit its outgoing order files. ++ +[TIP] +If your vendor requests a specific file extension for EDI purchase orders, +such as `.ord`, enter the name of the directory, followed by a slash, +followed by an asterisk, followed by a period, followed by the extension. +For example, if the vendor requests that EDI purchase orders be sent to +a directory called `in` with the file extension `.ord`, your path would +be `in/*.ord`. 
++ +* In the _Incoming Directory_ field, enter the incoming directory supplied by your provider. This indicates +the directory on the vendor’s server where Evergreen will retrieve incoming order responses and invoices. ++ +[NOTE] +Don't worry if your incoming directory is named `out` or `outgoing`. +From your vendor's perspective, this directory is outgoing, because +it contains files that the vendor is sending to Evergreen. However, +from Evergreen's perspective, these files are incoming. ++ +image::media/create-edi-accounts-3.png[Create EDI Account] + +. Click _Save_. +. Click on the link in the _Provider_ field. ++ +image::media/create-edi-accounts-4.png[Create EDI Account] + +. Select the EDI account that has just been created from the _EDI Default_ drop down menu. ++ +image::media/create-edi-accounts-5.png[Create EDI Account] + +. Click _Save_. + +=== EDI Messages === + +indexterm:[EDI,messages] +indexterm:[acquisitions,EDI,messages] + + +The EDI Messages screen displays all incoming and outgoing messages between the +library and its providers. To see details of a particular EDI message, +including the raw EDIFACT message, double click on a message entry. To find a +specific EDI message, the Filter options can be useful. Outside the Admin +interface, EDI messages that pertain to a specific purchase order can be +viewed from the purchase order interface (See _Acquisitions -> Purchase Orders_). + +== Exchange Rates == + +indexterm:[acquisitions,exchange rates] + +Exchange rates define the rate of exchange between currencies. Evergreen will +automatically calculate exchange rates for purchases. 
Evergreen assumes that the
+currency of the purchasing fund is identical to the currency of the provider,
+but it provides for two unique situations: If the currency of the fund that is
+used for the purchase is different from the currency of the provider as listed
+in the provider profile, then Evergreen will use the exchange rate to calculate
+the price of the item in the currency of the fund and debit the fund
+accordingly. When money is transferred between funds that use different
+currency types, Evergreen will automatically use the exchange rate to convert
+the money to the currency of the receiving fund. During such transfers,
+however, staff can override the automatic conversion by providing an explicit
+amount to credit to the receiving fund.
+
+=== Create an exchange rate ===
+
+. To create a new exchange rate, click _Administration -> Acquisitions Administration ->
+Exchange Rates_.
+
+. Click _New Exchange Rate_.
+
+. Select the From Currency from the drop-down menu populated by the currency
+types.
+
+. Select the To Currency from the drop-down menu populated by the currency types.
+
+. Enter the exchange Ratio.
+
+. Click _Save_.
+
+=== Edit an exchange rate ===
+
+Edit an exchange rate just as you would edit a currency type.
+
+== MARC Federated Search ==
+
+
+indexterm:[acquisitions,MARC federated search]
+
+The MARC Federated Search enables you to import bibliographic records into a
+selection list or purchase order from a Z39.50 source.
+
+. Click _Acquisitions -> MARC Federated Search_.
+. Check the boxes of Z39.50 services that you want to search. Your local
+Evergreen Catalog is checked by default. Click Submit.
++
+image::media/acq_marc_search.png[search form]
++
+. A list of results will appear. Click the _Copies_ link to add copy information
+to the line item. See <> for more
+information.
+. Click the Notes link to add notes or line item alerts to the line item. See
+<<line_item_features,Line Item Features>> for more information.
+. Enter a price in the _Estimated Price_ field.
+. You can save the line item(s) to a selection list by checking the box on the
+line item and clicking _Actions -> Save Items to Selection List_. You can also
+create a purchase order from the line item(s) by checking the box on the line
+item and clicking _Actions -> Create Purchase Order_.
+
+image::media/acq_marc_search-2.png[line item]
+
+== Fund Tags ==
+
+indexterm:[acquisitions,funds,tags]
+
+You can apply tags to funds so that you can group funds for easy reporting. For
+example, suppose you have three funds for children's materials: Children's Board Books,
+Children's DVDs, and Children's CDs. Assign a fund tag of 'children's' to each
+fund. When you need to report on the amount that has been spent on all
+children's materials, you can run a report on the fund tag to find total
+expenditures on children's materials rather than reporting on each individual
+fund.
+
+=== Create a fund tag ===
+
+. To create a fund tag, click _Administration -> Acquisitions Administration -> Fund Tags_.
+. Click _New Fund Tag_.
+. Select a Fund Tag Owner from the drop-down menu. The owner indicates the
+organizational unit(s) whose staff can use this fund tag. This menu is
+populated with the shortnames that you created for your libraries in the
+organizational units tree (See Administration -> Server Administration -> Organizational
+Units).
++
+[NOTE]
+The rule of parental inheritance applies to this list.
++
+. Enter a Fund Tag Name. No limits exist on the number of characters that can be
+entered in this field.
+. Click _Save_.
+
+== Funding Sources ==
+
+indexterm:[acquisitions,funding sources]
+
+Funding sources allow you to specify the sources that contribute monies to your
+fund(s). You can create as few or as many funding sources as you need. These
+can be used to track exact amounts for accounts in your general ledger. You can
+then use funds to track spending and purchases for specific collections.
+
+=== Create a funding source ===
+
+. To create a new funding source, click _Administration -> Acquisitions Administration ->
+Funding Source_.
+. Enter a funding source name. No limits exist on the number of characters that
+can be entered in this field.
+. Select an owner from the drop-down menu. The owner indicates the
+organizational unit(s) whose staff can use this funding source. This menu is
+populated with the shortnames that you created for your libraries in the
+organizational units tree (See Administration -> Server Administration -> Organizational
+Units).
++
+[NOTE]
+The rule of parental inheritance applies to this list. For example, if a system
+is made the owner of a funding source, then users with appropriate permissions
+at the branches within the system could also use the funding source.
++
+. Create a code for the source. No limits exist on the number of characters that
+can be entered in this field.
+. Select a currency from the drop-down menu. This menu is populated from the
+choices in the Currency Types interface.
+. Click _Save_.
+
+=== Allocate credits to funding sources ===
+
+Credits record the amount of money that a funding source contributes to the
+organization. Funding sources are not tied to fiscal or calendar years, so you
+can continue to add money to the same funding source over multiple years, e.g.
+County Funding. Alternatively, you can name funding sources by year, e.g. County
+Funding 2010 and County Funding 2011, and apply credits each year to the
+matching source.
+
+. To apply a credit, click on the hyperlinked name of the funding source. The
+Funding Source Details will appear.
+
+. Click _Apply Credit_.
+
+. Enter an amount to apply to this funding source.
+
+. Enter a note. This field is optional.
+
+. Click _Apply_.
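The bookkeeping behind these screens is simple: a funding source's balance is the sum of its credits minus the sum of its allocations to funds. A minimal sketch of that arithmetic follows; the class and field names are illustrative only, not Evergreen's actual schema or API.

```python
# Illustrative sketch of funding-source bookkeeping; all names here are
# hypothetical, not Evergreen's real API.
class FundingSource:
    def __init__(self, name):
        self.name = name
        self.credits = []      # (amount, note) pairs applied to the source
        self.allocations = []  # (amount, fund, note) handed out to funds

    def apply_credit(self, amount, note=""):
        self.credits.append((amount, note))

    def allocate_to_fund(self, fund, amount, note=""):
        self.allocations.append((amount, fund, note))

    @property
    def balance(self):
        credited = sum(amount for amount, _ in self.credits)
        allocated = sum(amount for amount, _, _ in self.allocations)
        return credited - allocated

county = FundingSource("County Funding")
county.apply_credit(100000.00, "FY2011 appropriation")
county.allocate_to_fund("Children's DVDs", 2500.00)
print(county.balance)  # → 97500.0
```

The same running total is what the Funding Source Details summary displays after each credit or allocation.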
+
+=== Allocate credits to funds ===
+
+If you have already set up your funds, you can click the Allocate to
+Fund button to apply credits from the funding sources to the funds. If you have
+not yet set up your funds, or you need to add a new one, you can allocate
+credits to funds from the funds interface. See the Funds section for more information.
+
+. To allocate credits to funds, click _Allocate to Fund_.
+
+. Enter the amount that you want to allocate.
+
+. Enter a note. This field is optional.
+
+. Click _Apply_.
+
+=== Track debits and credits ===
+
+You can track credits to and allocations from each funding source. These amounts
+are updated when credits and allocations are made in the Funding Source
+Details. Access the Funding Source Details by clicking on the hyperlinked name
+of the Funding Source.
+
+== Funds ==
+
+indexterm:[acquisitions,funds]
+
+Funds allow you to allocate credits toward specific purchases. In the funds
+interface, you can create funds; allocate credits from funding sources to funds;
+transfer money between funds; and apply fund tags to funds. Funds are created
+for a specific year, either fiscal or calendar. These funds are owned by org
+units. At the top of the funds interface, you can set a contextual org unit and
+year. The drop-down menu at the top of the screen enables you to focus on funds
+that are owned by specific organizational units during specific years.
+
+=== Create a fund ===
+
+. To create a new fund, click _Administration -> Acquisitions Administration -> Funds_.
+. Enter a name for the fund. No limits exist on the number of characters that
+can be entered in this field.
+. Create a code for the fund. No limits exist on the number of characters that
+can be entered in this field.
+. Enter a year for the fund. This can be a fiscal year or a calendar year. The
+format of the year is YYYY.
+. Select an org unit from the drop-down menu.
The org unit indicates the
+organizational units whose staff can use this fund. This menu is populated with
+the shortnames that you created for your libraries in the organizational units
+tree (See Administration -> Server Administration -> Organizational Units).
++
+[NOTE]
+The rule of parental inheritance applies to this list.
++
+. Select a currency type from the drop-down menu. This menu is populated from
+the entries in the currency types interface. When a fund is applied to a line item or
+copy, the price of the item will be encumbered in the currency associated with
+the fund.
+. Click the Active box to activate this fund. You cannot make purchases from
+this fund if it is not active.
+. Enter a Balance Stop Percent. The balance stop percent prevents you from
+making purchases when only a specified amount of the fund remains. For example,
+if you want to spend 95 percent of your funds, leaving a five percent balance in
+the fund, then you would enter 95 in the field. When the fund reaches its
+balance stop percent, it will appear in red when you apply funds to copies.
+. Enter a Balance Warning Percent. The balance warning percent gives you a
+warning that the fund is low. You can specify any percent. For example, if you
+want to spend 90 percent of your funds and be warned when the fund has only 10
+percent of its balance remaining, then enter 90 in the field. When the fund
+reaches its balance warning percent, it will appear in yellow when you apply
+funds to copies.
+. Check the Propagate box to propagate funds. When you propagate a fund, the ILS
+will create a new fund for the following fiscal year with the same parameters
+as your current fund. All of the settings transfer except for the year and the
+amount of money in the fund. Propagation occurs during the fiscal year close-out
+operation.
+. Check the Rollover box if you want to roll over remaining funds into the same
+fund next year.
You should also check this box if you only want to roll over +encumbrances into next year's fund. +. Click _Save_. + +=== Allocate credits from funding sources to funds === + +Credits can be applied to funds from funding sources using the fund interface. +The credits that you apply to the fund can be applied later to purchases. + +. To access funds, click _Administration -> Acquisitions Administration -> Funds_. + +. Click the hyperlinked name of the fund. + +. To add a credit to the fund, click the Create Allocation tab. + +. Choose a Funding Source from the drop-down menu. + +. Enter an amount that you want to apply to the fund from the funding source. + +. Enter a note. This field is optional. + +. Click _Apply_. + +=== Transfer credits between funds === + +The credits that you allocate to funds can be transferred between funds if +desired. In the following example, you can transfer $500.00 from the Young Adult +Fiction fund to the Children's DVD fund. + +. To access funds, click _Administration -> Acquisitions Administration -> Funds_. + +. Click the hyperlinked name of the originating fund. + +. The Fund Details screen appears. Click Transfer Money. + +. Enter the amount that you would like to transfer. + +. From the drop-down menu, select the destination fund. + +. Add a note. This field is optional. + +. Click _Transfer_. + +=== Track balances and expenditures === + +The Fund Details allows you to track the fund's balance, encumbrances, and +amount spent. It also allows you to track allocations from the funding +source(s), debits, and fund tags. + +. To access the fund details, click on the hyperlinked name of the fund that you +created. + +. The Summary allows you to track the following: + +. Balance - The balance is calculated by subtracting both items that have been +invoiced and encumbrances from the total allocated to the fund. +. Total Allocated - This amount is the total amount allocated from the Funding +Source. +. 
Spent Balance - This balance is calculated by subtracting only the items that +have been invoiced from the total allocated to the fund. It does not include +encumbrances. +. Total Debits - The total debits are calculated by adding the cost of items +that have been invoiced and encumbrances. +. Total Spent - The total spent is calculated by adding the cost of items that +have been invoiced. It does not include encumbrances. +. Total Encumbered - The total encumbered is calculated by adding all +encumbrances. + + +=== Fund reporting === + +indexterm:[acquisitions,funds,reports] +indexterm:[reports,funds] + +A core source, Fund Summary, is available in the reports interface. This +core source enables librarians to easily run a report on fund activity. Fields +that are accessible in this interface include Remaining Balance, Total +Allocated, Total Encumbered, and Total Spent. + + +image::media/Core_Source_1.jpg[Core_Source1] + + + +=== Edit a fund === + +Edit a fund just as you would edit a currency type. + +=== Perform fiscal year close-out operation === + +indexterm:[acquisitions,funds,fiscal rollover] + +The Fiscal Year Close-Out Operation allows you to deactivate funds for the +current year and create analogous funds for the next year. It transfers +encumbrances to the analogous funds, and it rolls over any remaining funds if +you checked the rollover box when creating the fund. + +. To access the year end closeout of a fund, click Administration -> Server +Administration -> Acquisitions -> Funds. + +. Click _Fund Propagation and Rollover_. + +. Check the box adjacent to _Perform Fiscal Year Close-Out Operation_. + +. For funds that have the "Rollover" setting enabled, if you want to move the +fund's encumbrances to the next year without moving unspent money, check the +box adjacent to _Limit Fiscal Year Close-out Operation to Encumbrances_. 
++
+[NOTE]
+The _Limit Fiscal Year Close-out Operation to Encumbrances_ checkbox will only display
+if the _Allow funds to be rolled over without bringing the money along_ Library
+Setting has been enabled. This setting is available in the Library Setting
+Editor accessible via _Administration_ -> _Local Administration_ -> _Library
+Settings Editor_.
++
+image::media/Fiscal_Rollover1.jpg[Fiscal_Rollover1]
+
+. Notice that the context org unit reflects the one that you
+selected at the top of the Funds screen.
+
+. If you want to perform the close-out operation on the context org unit and its
+child units, then check the box adjacent to Include Funds for Descendant Org
+Units.
+
+. Check the box adjacent to _Dry Run_ if you want to test changes to the funds
+before they are enacted. Evergreen will generate a summary of the changes that
+would occur during the selected operations. No data will be changed.
+
+. Click _Process_.
+
+. Evergreen will begin the propagation process. Evergreen will make a clone of
+each fund, but it will increment the year by 1.
+
+== Invoice menus ==
+
+indexterm:[acquisitions,invoices]
+
+Invoice menus allow you to create drop-down menus that appear on invoices. You
+can create an invoice item type or invoice payment method.
+
+=== Invoice item type ===
+
+The invoice item type allows you to enter the types of additional charges that
+you can add to an invoice. Examples of additional charge types might include
+taxes or processing fees. Charges for bibliographic items are listed separately
+from these additional charges. A default list of charge types displays, but you
+can add custom charge types to this list. Invoice item types can also be used
+when adding non-bibliographic items to a purchase order. When invoiced, the
+invoice item type will copy from the purchase order to the invoice.
+
+. To create a new charge type, click _Administration -> Acquisitions Administration ->
+Invoice Item Type_.
+
+. Click _New Invoice Item Type_.
+
+. Create a code for the charge type. No limits exist on the number of characters
+that can be entered in this field.
+
+. Create a label. No limits exist on the number of characters that can be
+entered in this field. The text in this field appears in the drop-down menu on
+the invoice.
+
+. If items on the invoice were purchased with the monies in multiple funds, then
+you can divide the additional charge across funds. Check the box adjacent to
+Prorate if you want to prorate the charge across funds.
+
+. Click _Save_.
+
+=== Invoice payment method ===
+
+The invoice payment method allows you to predefine the type(s) of invoices and
+payment method(s) that you accept. The text that you enter in the admin module
+will appear as a drop-down menu in the invoice type and payment method fields on
+the invoice.
+
+. To create a new invoice payment method, click _Administration ->
+Acquisitions Administration -> Invoice Payment Method_.
+
+. Click _New Invoice Payment Method_.
+
+. Create a code for the invoice payment method. No limits exist on the number of
+characters that can be entered in this field.
+
+. Create a name for the invoice payment method. No limits exist on the number of
+characters that can be entered in this field. The text in this field appears in
+the drop-down menu on the invoice.
+
+. Click _Save_.
+
+Payment methods can be deleted from this screen.
+
+== Line Item Features ==
+[[line_item_features]]
+
+indexterm:[acquisitions,line items]
+
+Line item alerts are predefined text that can be added to line items that are on
+selection lists or purchase orders. You can define the alerts from which staff
+can choose. Line item alerts appear in a pop-up box when the line item, or any
+of its copies, are marked as received.
+
+=== Create a line item alert ===
+
+. To create a line item alert, click _Administration -> Acquisitions Administration ->
+Line Item Alerts_.
+
+. Click _New Line Item Alert Text_.
+
+. Create a code for the text.
No limits exist on the number of characters that +can be entered in this field. + +. Create a description for the text. No limits exist on the number of characters +that can be entered in this field. + +. Select an owning library from the drop-down menu. The owning library indicates +the organizational units whose staff can use this alert. This menu is populated +with the shortnames that you created for your libraries in the organizational +units tree (See Administration -> Server Administration -> Organizational Units). + +. Click _Save_. + +=== Line item MARC attribute definitions === + +Line item attributes define the fields that Evergreen needs to extract from the +bibliographic records that are in the acquisitions database to display in the +catalog. Also, these attributes will appear as fields in the New Brief Record +interface. You will be able to enter information for the brief record in the +fields where attributes have been defined. + +== Providers == + +Providers are vendors. You can create a provider profile that includes contact +information for the provider, holdings information, invoices, and other +information. + +=== Create a provider === + +. To create a new provider, click _Administration_ -> _Acquisitions Administration_ -> +_Providers_. + +. Enter the provider name. + +. Create a code for the provider. No limits exist on the number of characters +that can be entered in this field. + +. Select an owner from the drop-down menu. The owner indicates the +organizational units whose staff can use this provider. This menu is populated +with the shortnames that you created for your libraries in the organizational +units tree (See Administration -> Server Administration -> Organizational Units). ++ +[NOTE] +The rule of parental inheritance applies to this list. ++ +. Select a currency from the drop-down menu. This drop-down list is populated by +the list of currencies available in the currency types. + +. 
A provider must be active in order for purchases to be made from that +provider. To activate the provider, check the box adjacent to Active. To +deactivate a vendor, uncheck the box. + +. Add the default # of copies that are typically ordered through the provider. +This number will automatically populate the line item's _Copies_ box on any PO's +associated with this provider. If another quantity is entered during the +selection or ordering process, it will override this default. If no number is +specified, the default number of copies will be zero. + +. Select a default claim policy from the drop-down box. This list is derived +from the claim policies that can be created + +. Select an EDI default. This list is derived from the EDI accounts that can be +created. + +. Enter the provider's email address. + +. In the Fax Phone field, enter the provider's fax number. + +. In the holdings tag field, enter the tag in which the provider places holdings +data. + +. In the phone field, enter the provider's phone number. + +. If prepayment is required to purchase from this provider, then check the box +adjacent to prepayment required. + +. Enter the Standard Address Number (SAN) for your provider. + +. Enter the web address for the provider's website in the URL field. + +. Click Save. + +=== Add contact and holdings information to providers === + +After you save the provider profile, the screen reloads so that you can save +additional information about the provider. You can also access this screen by +clicking the hyperlinked name of the provider on the Providers screen. The tabs +allow you to add a provider address and contact, attribute definitions, and +holding subfields. You can also view invoices associated with the provider. + +. Enter a Provider Address, and click Save. ++ +[NOTE] +Required fields for the provider address are: Street 1, city, state, country, +post code. You may have multiple valid addresses. ++ +. Enter the Provider Contact, and click Save. + +. 
Your vendor may include information that is specific to your organization in +MARC tags. You can specify the types of information that should be entered in +each MARC tag. Enter attribute definitions to correlate MARC tags with the +information that they should contain in incoming vendor records. Some technical +knowledge is required to enter XPath information. As an example, if you need to +import the PO Name, you could set up an attribute definition by adding an XPath +similar to: ++ +------------------------------------------------------------------------------ +code => purchase_order +xpath => //*[@tag="962"]/*[@code="p"] +Is Identifier => false +------------------------------------------------------------------------------ ++ +where 962 is the holdings tag and p is the subfield that contains the PO Name. + + +. You may have entered a holdings tag when you created the provider profile. You +can also enter holdings subfields. Holdings subfields allow you to +specify subfields within the holdings tag to which your vendor adds holdings +information, such as quantity ordered, fund, and estimated price. + +. Click invoices to access invoices associated with a provider. + +=== Edit a provider === + +Edit a provider just as you would edit a currency type. + +You can delete providers only if no purchase orders have been assigned to them. + diff --git a/docs/modules/admin/pages/actiontriggers.adoc b/docs/modules/admin/pages/actiontriggers.adoc new file mode 100644 index 0000000000..8d43a1c49a --- /dev/null +++ b/docs/modules/admin/pages/actiontriggers.adoc @@ -0,0 +1,278 @@ += Notifications / Action Triggers = +:toc: + + +== Introduction == + +indexterm:[action triggers, event definitions, notifications] + +Action Triggers give administrators the ability to set up actions for +specific events. They are useful for notification events such as hold notifications. 
+
+To access the Action Triggers module, select *Administration* -> *Local Administration* -> *Notifications / Action triggers*.
+
+[NOTE]
+==========
+You must have Local Administrator permissions to access the Action Triggers module.
+==========
+
+You will notice four tabs on this page: <<event_definitions,Event Definitions>>, <<hooks,Hooks>>, <<reactors,Reactors>> and <<validators,Validators>>.
+
+
+[#event_definitions]
+
+== Event Definitions ==
+
+Event Definitions is the main tab and contains the key fields when working with action triggers. These fields include:
+
+=== Table 1: Action Trigger Event Definitions ===
+
+
+|==============================
+|*Field* |*Description*
+| Owning Library |The shortname of the library for which the action / trigger / hook is defined.
+| Name |The name of the trigger event, which links to a trigger event environment containing a set of fields that will be returned to the <<reactors,Reactor>> and/or <<validators,Validator>> for processing.
+| <<hooks,Hook>> |The name of the trigger for the trigger event. The underlying action_trigger.hook table defines the Fieldmapper class in the core_type column off of which the rest of the field definitions ``hang''.
+| Enabled |Sets the given trigger as enabled or disabled. This must be set to enabled for the Action trigger to run.
+| Processing Delay |Defines how long after a given trigger / hook event has occurred before the associated action (``Reactor'') will be taken.
+| Processing Delay Context Field |Defines the field associated with the event on which the processing delay is calculated. For example, the processing delay context field on the hold.capture hook (which has a core_type of ahr) is _capture_time_.
+| Processing Group Context Field |Used to batch actions based on its associated group.
+| <<reactors,Reactor>> |Links the action trigger to the Reactor.
+| <<validators,Validator>> |The subroutine receives the trigger environment as an argument (see the linked Name for the environment definition) and returns either _1_ if the validator is _true_ or _0_ if the validator returns _false_.
+
+| Event Repeatability Delay |Allows events to be repeated after this delay interval.
+| Failure Cleanup |If reacting to an event fails, a cleanup module can be run to clean up after the failed event.
+| Granularity |Used to group events by how often they should be run. Options are Hourly, Daily, Weekly, Monthly, Yearly, but you may also create new values.
+| Max Event Validity Delay |Allows events to have a range of time that they are valid. This value works with the *Processing Delay* to define a time range.
+| Message Library Path |Defines the org_unit object for a Patron Message Center message.
+| Message Template |A Template Toolkit template that can be used to generate output for a Patron Message Center message. The output may or may not be used by the reactor or another external process.
+| Message Title |The title that will display on a Patron Message Center message.
+| Message User Path |Defines the user object for a Patron Message Center message.
+| Opt-In Settings Type |Choose which User Setting Type will decide if this event will be valid for a certain user. Use this to allow users to Opt-In or Opt-Out of certain events.
+| Opt-In User Field |Set to the name of the field in the selected hook's core type that will link the core type to the actor.usr table.
+| Success Cleanup |After an event is reacted to successfully, a cleanup module can be run to clean up after the event.
+| Template |A Template Toolkit template that can be used to generate output. The output may or may not be used by the reactor or another external process.
+|==============================
+
+
+== Creating Action Triggers ==
+
+. From the top menu, select *Administration* -> *Local Administration* -> *Notifications / Action triggers*.
+. Click on the _New_ button.
++
+image::media/new_event_def.png[New Event Definition]
++
+. Select an _Owning Library_.
+. Create a unique _Name_ for your new action trigger.
+. Select the _Hook_.
+. Check the _Enabled_ check box.
+. Set the _Processing Delay_ in the appropriate format. E.g. _7 days_ to run 7 days from the trigger event or _01:00:00_ to run 1 hour after the _Processing Delay Context Field_.
+. Set the _Processing Delay Context Field_ and _Processing Group Context Field_.
+. Select the _Reactor_ and _Validator_.
+. Set the _Event Repeatability Delay_.
+. Select the _Failure Cleanup_ and _Granularity_.
+. Set the _Max Event Validity Delay_.
++
+image::media/event_def_details.png[Event Definition Details]
++
+. If you wish to send a User Message through the Message Center, set a _Message Library Path_. Enter text in the _Message Template_. Enter a title for this message in _Message Title_, and set a value in _Message User Path_.
+. Select the _Opt-In Setting Type_.
+. Set the _Opt-In User Field_.
+. Select the _Success Cleanup_.
++
+image::media/event_def_details_2.png[Event Definition Details]
++
+. Enter text in the _Template_ text box if required. These are for email messages. Here is a sample template for sending 90-day overdue notices:
+
+
+  [%- USE date -%]
+  [%- user = target.0.usr -%]
+  To: [%- params.recipient_email || user.email %]
+  From: [%- helpers.get_org_setting(target.home_ou.id, 'org.bounced_emails') || lib.email || params.sender_email || default_sender %]
+  Subject: Overdue Items Marked Lost
+  Auto-Submitted: auto-generated
+
+  Dear [% user.family_name %], [% user.first_given_name %]
+  The following items are 90 days overdue and have been marked LOST.
  [%- params.recipient_email || user.email %][%- params.sender_email || default_sender %]
+  [% FOR circ IN target %]
+      Title: [% circ.target_copy.call_number.record.simple_record.title %]
+      Barcode: [% circ.target_copy.barcode %]
+      Due: [% date.format(helpers.format_date(circ.due_date), '%Y-%m-%d') %]
+      Item Cost: [% helpers.get_copy_price(circ.target_copy) %]
+      Total Owed For Transaction: [% circ.billable_transaction.summary.total_owed %]
+      Library: [% circ.circ_lib.name %]
+  [% END %]
+
+. Once you are satisfied with your new event trigger, click the _Save_ button located at the bottom of the form.
+
+
+[TIP]
+=========
+A quick and easy way to create new action triggers is to clone an existing action trigger.
+=========
+
+=== Cloning Existing Action Triggers ===
+
+. Check the check box next to the action trigger you wish to clone.
+. Click _Clone Selected_ on the top left of the page.
+. An editing window will open. Notice that the fields will be populated with content from the cloned action trigger. Edit as necessary and give the new action trigger a unique Name.
+. Click _Save_.
+
+=== Editing Action Triggers ===
+
+. Double-click on the action trigger you wish to edit.
+. The edit screen will appear. When you are finished editing, click _Save_ at the bottom of the form. Or click _Cancel_ to exit the screen without saving.
+
+[NOTE]
+============
+Before deleting an action trigger, you should consider disabling it through the editing form. This way you can keep it for future use or cloning.
+============
+
+=== Deleting Action Triggers ===
+
+. Check the check box next to the action trigger you wish to delete.
+. Click _Delete Selected_ on the top-right of the page.
+
+=== Testing Action Triggers ===
+
+. Go to the list of action triggers.
+. Click on the blue link text for the action trigger you'd like to test.
++
+image::media/test_event_def.png[Blue Link Text]
++
+. Go to the Test tab.
+. If there is a test available, fill in the required information.
+. View the output of the test.
+
+image::media/test_event_def_output.png[Test Output]
+
+WARNING: If you are testing an email or SMS notification, use a test account and email as an example. Using the Test feature will actually result in the notification being sent if configured correctly. Similarly, use a test item or barcode when testing a circulation-based event like Mark Lost since the test will mark the item as lost.
+
+[#hooks]
+
+=== Hooks ===
+
+Hooks define the Fieldmapper class in the core_type column off of which the rest of the field definitions ``hang''.
+
+
+==== Table 2. Hooks ====
+
+
+|=======================
+| *Field* | *Description*
+| Hook Key | A unique name given to the hook.
+| Core Type | Used to link the action trigger to the IDL class in fm_IDL.xml.
+| Description | Text to describe the purpose of the hook.
+| Passive | Indicates whether or not an event is created by direct user action or is circumstantial.
+|=======================
+
+You may also create, edit and delete Hooks but the Core Type must refer to an IDL class in the fm_IDL.xml file.
+
+
+[#reactors]
+
+=== Reactors ===
+
+Reactors link the trigger definition to the action to be carried out.
+
+==== Table 3. Action Trigger Reactors ====
+
+
+|=======================
+| Field | Description
+| Module Name | The name of the Module to run if the action trigger is validated. It must be defined as a subroutine in `/openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm` or as a module in `/openils/lib/perl5/OpenILS/Application/Trigger/Reactor/*.pm`.
+
+| Description | Description of the Action to be carried out.
+|=======================
+
+You may also create, edit and delete Reactors. Just remember that there must be an associated subroutine or module in the Reactor Perl module.
+
+
+[#validators]
+
+=== Validators ===
+
+Validators set the validation test to be performed to determine whether the action trigger is executed.
+
+==== Table 4. Action Trigger Validators ====
+
+
+|=======================
+| Field | Description
+| Module Name | The name of the subroutine in `/openils/lib/perl5/OpenILS/Application/Trigger/Reactor.pm` to validate the action trigger.
+| Description | Description of validation test to run.
+|=======================
+
+You may also create, edit and delete Validators. Just remember that there must be an associated subroutine in the Reactor.pm Perl module.
+
+[#processing_action_triggers]
+== Processing Action Triggers ==
+
+To run action triggers, an Evergreen administrator will need to run the trigger processing script. This should be set up as a cron job to run periodically. To run the script, use this command:
+
+----
+/openils/bin/action_trigger_runner.pl --process-hooks --run-pending
+----
+
+You have several options when running the script:
+
+* --run-pending: Run pending events to send emails or take other actions as
+specified by the reactor in the event definition.
+
+* --process-hooks: Create hook events.
+
+* --osrf-config=[config_file]: OpenSRF core config file. Defaults to:
+/openils/conf/opensrf_core.xml
+
+* --custom-filters=[filter_file]: File containing a JSON Object which describes any hooks
+that should use a user-defined filter to find their target objects. Defaults to:
+/openils/conf/action_trigger_filters.json
+
+* --max-sleep=[seconds]: When in process-hooks mode, wait up to [seconds] for the lock file to go
+away. Defaults to 3600 (1 hour).
+
+* --hooks=hook1[,hook2,hook3,...]: Define which hooks to create events for.
If none are defined, it +defaults to the list of hooks defined in the --custom-filters option. +Requires --process-hooks. + +* --granularity=[label]: Limit creating events and running pending events to +only those with the [label] granularity setting. + +* --debug-stdout: Print server responses to STDOUT (as JSON) for debugging. + +* --lock-file=[file_name]: Sets the lock file for the process. + +* --verbose: Show details of script processing. + +* --help: Show help information. + +Examples: + +* Run all pending events that have no granularity set. This is what you tell +CRON to run at regular intervals. ++ +---- +perl action_trigger_runner.pl --run-pending +---- + +* Batch create all "checkout.due" events. ++ +---- +perl action_trigger_runner.pl --hooks=checkout.due --process-hooks +---- + +* Batch create all events for a specific granularity and send notices for all +pending events with that same granularity. ++ +---- +perl action_trigger_runner.pl --run-pending --granularity=Hourly --process-hooks +---- + diff --git a/docs/modules/admin/pages/age_hold_protection.adoc b/docs/modules/admin/pages/age_hold_protection.adoc new file mode 100644 index 0000000000..6254f76320 --- /dev/null +++ b/docs/modules/admin/pages/age_hold_protection.adoc @@ -0,0 +1,23 @@ += Age hold protection = +:toc: + +indexterm:[Holds] +indexterm:[Holds, Age Protection] + +Age hold protection prevents new items from filling holds requested for pickup at a library other than the owning library for a specified period of time. + +You can define the protection period in *Administration* -> *Server Administration* -> *Age Hold Protect Rules*. + +The protection period, when applied to an item record, can start with the item record create date (default) or active date. You can change this setting in *Administration* -> *Local Administration* -> *Library Settings Editor*: Use Active Date for Age Protection.
+ +In addition to the time period, you can set the proximity value to define which organizational units are allowed to act as pickup libraries. The proximity values affect holds as follows: + +* "0" allows only holds where pickup library = owning library +* "1" allows holds where pickup library = owning library, parent, and child organizational units +* "2" allows holds where pickup library = owning library, parent, child, and/or sibling organizational units + +Age protection only applies to individual item records. You cannot configure age protection rules in hold policies. + +== Active date display in OPAC == + +If a library uses the item's active date to calculate holds age protection, the active date will display with the item details instead of the create date in the staff client view of the catalog. Libraries that do not enable the _Use Active Date for Age Protection_ library setting will continue to display the create date. diff --git a/docs/modules/admin/pages/aged_circs.adoc b/docs/modules/admin/pages/aged_circs.adoc new file mode 100644 index 0000000000..21b4bb8ddb --- /dev/null +++ b/docs/modules/admin/pages/aged_circs.adoc @@ -0,0 +1,89 @@ += Aging Circulations = +:toc: + +.Use case +**** +Aging circulations helps to protect patron privacy and save disk space. +**** + +Evergreen allows for the bulk anonymization of circulation histories. Evergreen calls this aged circulation. Circulation statistics are preserved (total circs, last checkout/renewal date, checkout/renewal/checkin workstation, etc.) but patron information (name : barcode) is replaced with text and the link to the patron record is removed. + +In the client, <Aged Circulation> will show in the patron field in the Circulation History Tab and in Show Last Few Circulations. + +In the database, every time you attempt to `DELETE` a row from `action.circ`, it +copies over the appropriate data to `action.aged_circulation`, +then deletes the `action.circ` row.
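The trigger behavior described above can be observed directly from a database session. The following psql sketch is illustrative only: the circulation ID is made up, and it assumes direct access to a stock Evergreen database (the `action.age_circ_on_delete` trigger is discussed later in this section).

```sql
-- Illustration only: deleting a circulation row fires the
-- action.age_circ_on_delete trigger, which copies the anonymized
-- data into action.aged_circulation before the delete completes.
BEGIN;
DELETE FROM action.circulation WHERE id = 12345;  -- 12345 is a made-up id

-- The same row id now appears in the aged table, without patron links
SELECT id, xact_start, xact_finish
  FROM action.aged_circulation
 WHERE id = 12345;

ROLLBACK;  -- undo the experiment
```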
+ +== Global Flags == + +There are four global flags used for aging circulations. + +1. Historical Circulation Retention Age - determines the timeframe for aging circulations based on transaction age (7 days, 14 days, 30 days, etc). + +2. Historical Circulations Per Item - determines how many circulations to keep (ex. 1, 2, 3). If set to 1, Evergreen will always keep the last (most recent) circulation. + +3. Historical Circulations use most recent xact_finish date instead of last circ's (true or false) + +4. Historical Circulations are kept for global retention age at a minimum, regardless of user preferences (true or false) + + + +== What Data is Aged? == + +Only completed transactions are aged. These circulations have been checked in (returned) and *do not* contain any unpaid fines or bills. + +Data that is not aged includes: + +* open transactions (i.e. checked out) +* closed transactions with unpaid fines +* closed transactions with unpaid bills +* the last X circulation(s) (determined by historical circulations per item flag) + + +[TIP] +========== +Aging circulations will not affect a patron being able to keep their checkout history. Minimal metadata is stored in the patron checkout history table. Once the corresponding circulation is aged, the full circulation metadata is no longer linked to the patron's reading history. +========== + +[TIP] +========== +Just aging circulations is not sufficient to protect patron circulation +history. Fully protecting these data would also involve a thoughtful +approach to logs and backups of these data. +========== + +[TIP] +========== +You can create a cron job to automatically age circulations. +========== + +== How Circulations are Aged == + +The action.aged_circulation table is for statistical reporting while breaking the link to the patron who had the item checked out. + +Circulations get moved under three circumstances in stock Evergreen: + +1. A patron is deleted. 
This moves all of the patron's circulations from action.circulation to action.aged_circulation. + +2. One or more rows in action.circulation are deleted. The action.age_circ_on_delete trigger moves the deleted rows to action.aged_circulation. + +3. The action.purge_circulations function is run. This function is meant to be run periodically to enforce patron privacy. Its behavior is controlled by two internal flags: history.circ.retention_age and history.circ.retention_count. + +[TIP] +========== +The purge_circulations function is often run from cron via the purge_circulations.srfsh script. +========== + + +[TIP] +========== +The purge_circulations function will take a *long* time to run for the first time on a system that has had a lot of activity. The srfsh script will likely time out before the database function finishes and nothing will get moved. +========== + + +== Impacts on Billing Data == + +When a circulation is aged, billings and payments linked to the circulation are migrated from the active billing and payment tables to the `money.aged_billing` and `money.aged_payment` tables. + +NOTE: Currently, grocery bills are ignored and not aged. + diff --git a/docs/modules/admin/pages/allowed_payments.adoc b/docs/modules/admin/pages/allowed_payments.adoc new file mode 100644 index 0000000000..56b26bd518 --- /dev/null +++ b/docs/modules/admin/pages/allowed_payments.adoc @@ -0,0 +1,21 @@ +=== Setting limits on allowed payment amounts === + +Two new settings have been added to prevent library staff +from accidentally clearing all patron bills by scanning a +barcode into the Payment Amount field, or accidentally +entering the amount without a decimal point (such as you +would when using a cash register). + +Both settings are available via the Library Settings Editor. +The Payment amount threshold for Are You Sure?
dialog +(`ui.circ.billing.amount_warn`) setting identifies the amount +above which staff will be asked if they're sure they want +to apply the payment. The Maximum payment amount allowed +(`ui.circ.billing.amount_limit`) setting identifies the +maximum amount of money that can be accepted through the +staff client. + +These settings only affect the staff client, not credit +cards accepted through the public catalog, or direct API +calls from third party tools. + diff --git a/docs/modules/admin/pages/apache_access_handler.adoc b/docs/modules/admin/pages/apache_access_handler.adoc new file mode 100644 index 0000000000..c898972ea0 --- /dev/null +++ b/docs/modules/admin/pages/apache_access_handler.adoc @@ -0,0 +1,141 @@ +[#apache_access_handler_perl_module] += Apache Access Handler Perl Module = +:toc: + +The OpenILS::WWW::AccessHandler Perl module is intended for limiting patron +access to configured locations in Apache. These locations could be folder +trees, static files, non-Evergreen dynamic content, or other Apache +features/modules. It is intended as a more patron-oriented and transparent +version of the OpenILS::WWW::Proxy and OpenILS::WWW:Proxy::Authen modules. + +Instead of using Basic Authentication the AccessHandler module instead redirects +to the OPAC for login. 
Once logged in, additional checks can be performed, based +on configured variables: + + * Permission Checks (at Home OU or specified location) + * Home OU Checks (Org Unit or Descendant) + * "Good standing" Checks (Not Inactive or Barred) + +Use of the module is a simple addition to a Location block in Apache: + +[source,conf] +---- +<Location /path/to/be/protected> + PerlAccessHandler OpenILS::WWW::AccessHandler + # For each option you wish to set: + PerlSetVar OPTION "VALUE" +</Location> +---- + +The available options are: + +OILSAccessHandlerLoginURL:: +* Default: /eg/opac/login +* The page to redirect to when Login is needed +OILSAccessHandlerLoginURLRedirectVar:: +* Default: redirect_to +* The variable the login page wants the "destination" URL stored in +OILSAccessHandlerFailURL:: +* Default: +* URL to go to if Permission, Good Standing, or Home OU checks fail. If not set, + a 403 error is generated instead. To customize the 403 you could use an + ErrorDocument statement. +OILSAccessHandlerCheckOU:: +* Default: +* Org Unit to check Permissions at and/or to load Referrer from. Can be a + shortname or an ID. +OILSAccessHandlerPermission:: +* Default: +* Permission, or comma- or space-delimited set of permissions, the user must have to + access the protected area. +OILSAccessHandlerGoodStanding:: +* Default: 0 +* If set to a true value the user must be both Active and not Barred. +OILSAccessHandlerHomeOU:: +* Default: +* An Org Unit, or comma- or space-delimited set of Org Units, that the user's Home OU must + be equal to or a descendant of to access this resource. Can be set to + shortnames or IDs. +OILSAccessHandlerReferrerSetting:: +* Default: +* Library Setting to pull a forced referrer string out of, if set. + +As the AccessHandler module does not actually serve the content it is +protecting, but instead merely hands control back to Apache when it is done +authenticating, you can protect almost anything else you can serve with Apache.
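As an illustration, several of these options can be combined in one Location block. The path `/staff-downloads/` and the failure page are hypothetical examples; `STAFF_LOGIN` is a stock Evergreen permission:

```conf
# Hypothetical sketch: protect a staff-only downloads directory
<Location /staff-downloads/>
    PerlAccessHandler OpenILS::WWW::AccessHandler
    # Require an active, non-barred account...
    PerlSetVar OILSAccessHandlerGoodStanding "1"
    # ...that holds the STAFF_LOGIN permission
    PerlSetVar OILSAccessHandlerPermission "STAFF_LOGIN"
    # Send failed checks to a friendly page instead of a bare 403
    PerlSetVar OILSAccessHandlerFailURL "/access-denied.html"
</Location>
```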
+ +== Use Cases == +The general use of this module is "protect access to something else" - what that +something else is will vary. Some possibilities: + + * Apache features + ** Automatic Directory Indexes + ** Proxies (see below) + *** Electronic Databases + *** Software on other servers/ports + * Non-Evergreen software + ** Timekeeping software for staff + ** Specialized patron request packages + * Static files and folders + ** Semi-public Patron resources + ** Staff-only downloads + +== Proxying Websites == +One potentially interesting use of the AccessHandler module is to protect an +Apache Proxy configuration. For example, after installing and enabling +mod_proxy, mod_proxy_http, and mod_proxy_html you could proxy websites like so: + +[source,conf] +---- +<Location /proxy/> + # Base "Rewrite URLs" configuration + ProxyHTMLLinks a href + ProxyHTMLLinks area href + ProxyHTMLLinks link href + ProxyHTMLLinks img src longdesc usemap + ProxyHTMLLinks object classid codebase data usemap + ProxyHTMLLinks q cite + ProxyHTMLLinks blockquote cite + ProxyHTMLLinks ins cite + ProxyHTMLLinks del cite + ProxyHTMLLinks form action + ProxyHTMLLinks input src usemap + ProxyHTMLLinks head profile + ProxyHTMLLinks base href + ProxyHTMLLinks script src for + + # To support scripting events (with ProxyHTMLExtended On) + ProxyHTMLEvents onclick ondblclick onmousedown onmouseup \ + onmouseover onmousemove onmouseout onkeypress \ + onkeydown onkeyup onfocus onblur onload \ + onunload onsubmit onreset onselect onchange + + # Limit all Proxy connections to authenticated sessions by default + PerlAccessHandler OpenILS::WWW::AccessHandler + + # Strip out Evergreen cookies before sending to remote server + RequestHeader edit Cookie "^(.*?)ses=.*?(?:$|;)(.*)$" $1$2 + RequestHeader edit Cookie "^(.*?)eg_loggedin=.*?(?:$|;)(.*)$" $1$2 +</Location> + +<Location /proxy/example/> + # Proxy example.net + ProxyPass http://www.example.net/ + ProxyPassReverse http://www.example.net/ + ProxyPassReverseCookieDomain example.net example.com + ProxyPassReverseCookiePath / /proxy/example/ + + ProxyHTMLEnable On + ProxyHTMLURLMap http://www.example.net/ /proxy/example/ + ProxyHTMLURLMap / /proxy/mail/ + ProxyHTMLCharsetOut * + + # Limit to BR1 and BR3 users + PerlSetVar OILSAccessHandlerHomeOU "BR1,BR3" +</Location> +---- + +As mentioned above, this can be used for multiple reasons. In addition to +websites such as online databases for patron use you may wish to proxy software +for staff or patron use to make it appear on your catalog domain, or perhaps to +keep from needing to open extra ports in a firewall. diff --git a/docs/modules/admin/pages/apache_rewrite_tricks.adoc b/docs/modules/admin/pages/apache_rewrite_tricks.adoc new file mode 100644 index 0000000000..5008cb3308 --- /dev/null +++ b/docs/modules/admin/pages/apache_rewrite_tricks.adoc @@ -0,0 +1,148 @@ +[#apache_rewrite_tricks] += Apache Rewrite Tricks = +:toc: + +It is possible to use Apache's Rewrite Module features to perform a number of +useful tricks that can make people's lives much easier. + +== Short URLs == +Making short URLs for common destinations can simplify making printed media as +well as shortening or simplifying what people need to type. These are also easy +to add and require minimal maintenance, and generally can be implemented with a +single line addition to your eg_vhost.conf file. + +[source,conf] +---- +# My Account - http://host.ext/myaccount -> My Account Page +RewriteRule ^/myaccount https://%{HTTP_HOST}/eg/opac/myopac/main [R] + +# ISBN Search - http://host.ext/search/isbn/ -> Search Page +RewriteRule ^/search/isbn/(.*) /eg/opac/results?_special=1&qtype=identifier|isbn&query=$1 [R] +---- + +== Domain Based Content with RewriteMaps == +One creative use of Rewrite features is domain-based configuration in a single +eg_vhost.conf file. Regardless of how many VirtualHost blocks use the +configuration you don't need to duplicate things for minor changes, and can in +fact use wildcard VirtualHost blocks to serve multiple subdomains.
+ +For the wildcard blocks you will want to use a ServerAlias directive, and for +SSL VirtualHost blocks ensure you have a wildcard SSL certificate. + +[source,conf] +---- +ServerAlias *.example.com +---- + +For actually changing things based on the domain, or subdomain, you can use +RewriteMaps. Each RewriteMap is generally a lookup table of some kind. In the +following examples we will generally use text files, though database lookups +and external programs are also possible. + +Note that in the examples below we generally store things in Environment +Variables. From within Template Toolkit templates you can access environment +variables with the ENV object. + +.Template Toolkit ENV example, link library name/url if set +[source,html] +---- +[% IF ENV.eglibname && ENV.egliburl %]<a href="[% ENV.egliburl %]">[% ENV.eglibname %]</a>[% END %] +---- + +The first lookup maps a domain to an identifier, allowing us to re-use +identifiers for multiple domains. In addition we can also supply a default +identifier, for when the domain isn't present in the lookup table. + +.Apache Config +[source,conf] +---- +# This internal map allows us to lowercase our hostname, removing case issues in our lookup table +# If you prefer uppercase you can use "uppercase int:toupper" instead. +RewriteMap lowercase int:tolower +# This provides a hostname lookup +RewriteMap eglibid txt:/openils/conf/libid.txt +# This stores the identifier in a variable (eglibid) for later use +# In this case CONS is the default value for when the lookup table has no entry +RewriteRule . - [E=eglibid:${eglibid:${lowercase:%{HTTP_HOST}}|CONS}] +---- + +.Contents of libid.txt File +[source,txt] +---- +# Comments can be included +# Multiple TLDs for Branch 1 +branch1.example.com BRANCH1 +branch1.example.net BRANCH1 +# Branches 2 and 3 don't have alternate TLDs +branch2.example.com BRANCH2 +branch3.example.com BRANCH3 +---- + +Once we have identifiers we can look up other information, when appropriate.
+For example, say we want to look up library names and URLs: + +.Apache Config +[source,conf] +---- +# Library Name Lookup - Note we provide no default in this case. +RewriteMap eglibname txt:/openils/conf/libname.txt +RewriteRule . - [E=eglibname:${eglibname:%{ENV:eglibid}}] +# Library URL Lookup - Also with no default. +RewriteMap egliburl txt:/openils/conf/liburl.txt +RewriteRule . - [E=egliburl:${egliburl:%{ENV:eglibid}}] +---- + +.Contents of libname.txt File +[source,txt] +---- +# Note that we cannot have spaces in the "value", so &nbsp; is used instead. &#160; is also an option. +BRANCH1 Branch&nbsp;One +BRANCH2 Branch&nbsp;Two +BRANCH3 Branch&nbsp;Three +CONS Example&nbsp;Consortium&nbsp;Name +---- + +.Contents of liburl.txt File +[source,txt] +---- +BRANCH1 http://branch1.example.org +BRANCH2 http://branch2.example.org +BRANCH3 http://branch3.example.org +CONS http://example.org +---- + +Or, perhaps set the "physical location" variable for default search/display library: + +.Apache Config +[source,conf] +---- +# Lookup "physical location" IDs +RewriteMap eglibphysloc txt:/openils/conf/libphysloc.txt +# Note: physical_loc is a variable used in the TTOPAC and should not be re-named +RewriteRule . - [E=physical_loc:${eglibphysloc:%{ENV:eglibid}}] +---- + +.Contents of libphysloc.txt File +[source,txt] +---- +BRANCH1 4 +BRANCH2 5 +BRANCH3 6 +CONS 1 +---- + +Going further, you could also replace files to be downloaded, such as images or +stylesheets, on the fly: + +.Apache Config +[source,conf] +---- +# Check if a file exists based on eglibid and the requested file name +# Say, BRANCH1/opac/images/main_logo.png +RewriteCond %{DOCUMENT_ROOT}/%{ENV:eglibid}%{REQUEST_URI} -f +# Serve up the eglibid version of the file instead +RewriteRule (.*) /%{ENV:eglibid}$1 +---- + +Note that template files themselves cannot be replaced in that manner.
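To see how the domain-to-identifier lookup behaves, the table semantics can be emulated outside Apache. This sketch is illustrative only (it is not how mod_rewrite works internally): it reproduces the lowercase step, the `txt:` map lookup, and the `|CONS` default from the RewriteRule above.

```shell
#!/bin/sh
# Illustration only: emulate the semantics of
#   RewriteRule . - [E=eglibid:${eglibid:${lowercase:%{HTTP_HOST}}|CONS}]
# using the libid.txt table shown earlier.
cat > libid.txt <<'EOF'
# Comments can be included
branch1.example.com BRANCH1
branch1.example.net BRANCH1
branch2.example.com BRANCH2
branch3.example.com BRANCH3
EOF

lookup() {
    # int:tolower step: normalize the hostname's case
    host=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
    # txt: map lookup: first column is the key, second the value;
    # comment lines never match because "#" is not a hostname
    id=$(awk -v h="$host" '$1 == h { print $2; exit }' libid.txt)
    # "|CONS" default when the table has no entry
    printf '%s\n' "${id:-CONS}"
}

lookup BRANCH1.EXAMPLE.COM   # BRANCH1
lookup unknown.example.org   # CONS
```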
+ diff --git a/docs/modules/admin/pages/audio_alerts.adoc b/docs/modules/admin/pages/audio_alerts.adoc new file mode 100644 index 0000000000..1d02c9e410 --- /dev/null +++ b/docs/modules/admin/pages/audio_alerts.adoc @@ -0,0 +1,35 @@ +== Managing audio alerts == + +=== Globally silencing sounds === +indexterm:[audio alerts,silencing] +indexterm:[nosound.wav] + +The file `nosound.wav` can be used +to globally disable audio alerts for a specific event on an Evergreen system. + +For example, to silence the alert that sounds after a successful patron search: + +[source, bash] +------------------------------------------------------------------------------ +mkdir -p /openils/var/web/audio/notifications/success/patron/ +cd /openils/var/web/audio/notifications/success/patron/ +ln -s ../../nosound.wav by_search.wav +------------------------------------------------------------------------------ + + +=== Self-check interface === +indexterm:[audio alerts,self check interface] +indexterm:[self check interface,audio alerts] +indexterm:[audio_config.tt2] + +Sounds may play at certain events in the self check interface. These +events are defined in the `templates/circ/selfcheck/audio_config.tt2` +template. To use the default sounds, you could run the following command +from your Evergreen server as the *root* user (assuming that +`/openils/` is your install prefix): + +[source, bash] +------------------------------------------------------------------------------ +cp -r /openils/var/web/xul/server/skin/media/audio /openils/var/web/. 
+------------------------------------------------------------------------------ + diff --git a/docs/modules/admin/pages/authentication_proxy.adoc b/docs/modules/admin/pages/authentication_proxy.adoc new file mode 100644 index 0000000000..9cdaaeee7e --- /dev/null +++ b/docs/modules/admin/pages/authentication_proxy.adoc @@ -0,0 +1,97 @@ += Authentication Proxy = +:toc: + +indexterm:[authentication, proxy] + +indexterm:[authentication, LDAP] + +To support integration of Evergreen with organizational authentication systems, and to reduce the proliferation of user names and passwords, Evergreen offers a service called open-ils.auth_proxy. If you enable the service, open-ils.auth_proxy supports different authentication mechanisms that implement the authenticate method. You can define a chain of these authentication mechanisms to be tried in order within the *_<authenticators>_* element of the _opensrf.xml_ configuration file, with the option of falling back to the native mode that uses Evergreen’s internal method of password authentication. + +This service only provides authentication. There is no support for automatic provisioning of accounts. To authenticate using any authentication system, the user account must first be defined in the Evergreen database. The user will be authenticated based on the Evergreen username, which must match the user's ID on the authentication system. + +In order to activate Authentication Proxy, the Evergreen system administrator will need to complete the following steps: + +. Edit *_opensrf.xml_*. +.. Set the *_open-ils.auth_proxy_* app settings *_enabled_* tag to *_true_* +..
Add the *_authenticator_* to the list of authenticators or edit the existing example authenticator: ++ +[source,xml] +---- +<authenticator> + <name>ldap</name> + <module>OpenILS::Application::AuthProxy::LDAP_Auth</module> + <hostname>name.domain.com</hostname> + <basedn>ou=people,dc=domain,dc=com</basedn> + <authid>cn=username,ou=specials,dc=domain,dc=com</authid> + <id_attr>uid</id_attr> + <password>my_ldap_password_for_authid_user</password> + <login_types> + <type>staff</type> + <type>opac</type> + </login_types> + <org_units> + <unit>103</unit> + <unit>104</unit> + </org_units> +</authenticator> +---- ++ +* *_name_* : Used to identify each authenticator. +* *_module_* : Reference to the Perl module used by Evergreen to process the request. +* *_hostname_* : Hostname of the authentication server. +* *_basedn_* : Location of the data on your authentication server used to authenticate users. +* *_authid_* : Administrator ID information used to connect to the Authentication server. +* *_id_attr_* : Field name in the authenticator matching the username in the Evergreen database. +* *_password_* : Administrator password used to connect to the authentication server. Password for the *_authid_*. +* *_login_types_* : Specifies which types of logins will use this authenticator. This might be useful if staff use a different LDAP directory than general users. +* *_org_units_* : Specifies which org units will use the authenticator. This is useful in a consortium environment where libraries will use separate authentication systems. ++ +. Restart Evergreen and Apache to activate configuration changes. + +[TIP] +==================================================================== +If using proxy authentication with library employees who will click +the _Change Operator_ feature in the client software, then add +"Temporary" as a *_login_types_*. +==================================================================== + + +== Using arbitrary LDAP usernames == + +Authentication Proxy supports LDAP-based login with a username that is +different from your Evergreen username. + +.Use case +**** + +This feature may be useful for libraries that use an LDAP server for +single sign-on (SSO).
Let's say you are a post-secondary library using +student or employee numbers as Evergreen usernames, but you want people +to be able to log in to Evergreen with their SSO credentials, which may +be different from their student/employee number. To support this, +Authentication Proxy can be configured to accept your SSO username on login, +use it to look up your student/employee number on the LDAP server, and +log you in as the appropriate Evergreen user. + +**** + +To enable this feature, in the Authentication Proxy configuration for your LDAP server in +`opensrf.xml`, set `bind_attr` to the LDAP field containing your LDAP +username, and `id_attr` to the LDAP field containing your student or +employee number (or whatever other value is used as your Evergreen +username). If `bind_attr` is not set, Evergreen will assume that your +LDAP username and Evergreen username are the same. + +Now, let's say your LDAP server is only an authoritative auth provider +for Library A. Nothing prevents the server from reporting that your +student number is 000000, even if that Evergreen username is already in +use by another patron at Library B. We want to ensure that Authentication Proxy +does not use Library A's LDAP server to log you in as the Library B +patron. For this reason, a new `restrict_by_home_ou` setting has been +added to Authentication Proxy config. When enabled, this setting restricts LDAP +authentication to users belonging to a library served by that LDAP +server (i.e. the user's home library must match the LDAP server's +`org_units` setting in `opensrf.xml`). Use of this setting is strongly +recommended.
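Putting these settings together, an authenticator entry might look like the following sketch. The LDAP field names (`sAMAccountName`, `employeeNumber`) and hostnames are hypothetical examples, not Evergreen defaults; only the element names come from the settings described above.

```xml
<!-- Hypothetical sketch of an SSO-oriented authenticator entry -->
<authenticator>
  <name>ldap-sso</name>
  <module>OpenILS::Application::AuthProxy::LDAP_Auth</module>
  <hostname>ldap.example.edu</hostname>
  <basedn>ou=people,dc=example,dc=edu</basedn>
  <!-- LDAP field holding the SSO username people type at login -->
  <bind_attr>sAMAccountName</bind_attr>
  <!-- LDAP field holding the Evergreen username (student/employee number) -->
  <id_attr>employeeNumber</id_attr>
  <!-- only log in users whose home library is in org_units below -->
  <restrict_by_home_ou>true</restrict_by_home_ou>
  <org_units>
    <unit>103</unit>
  </org_units>
</authenticator>
```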
+ diff --git a/docs/modules/admin/pages/authorities.adoc b/docs/modules/admin/pages/authorities.adoc new file mode 100644 index 0000000000..ad32f08f1d --- /dev/null +++ b/docs/modules/admin/pages/authorities.adoc @@ -0,0 +1,146 @@ += Authorities = +:toc: + +== Authority Control Sets == + + +The tags and subfields that display in authority records in Evergreen are +prescribed by control sets. The Library of Congress control set is the default +control set in Evergreen. You can create customized +control sets for authority records. Also, you can define thesauri and authority +fields for these control sets. + +Patrons and staff will be able to browse authorities in the OPAC. The following +fields are browsable by default: author, series, subject, title, and topic. You +will be able to add custom browse axes in addition to these default fields. + +You can specify the MARC tags and subfields that an authority record should +contain. The Library of Congress control set exists in the staff client by +default. The control sets feature enables librarians to add or customize new +control sets. + +To access existing control sets, click *Administration* -> *Server Administration* -> +*Authority Control Sets*. + +image::media/Authority_Server_Admin_Menu.png[Server administration authority actions] + +=== Add a Control Set === + +. Click *Administration* -> *Server Administration* -> *Authority Control Sets*. +. Click *New Control Set*. +. Add a *Name* to the control set. Enter any number of characters. +. Add a *Description* of the control set. Enter any number of characters. +. Click *Save*. + +image::media/Authority_Control_Sets1.jpg[Authority_Control_Sets1] + +== Thesauri == + +A thesaurus describes the semantic rules that govern the meaning of words in a +MARC record. The thesaurus code, which indicates the specific thesaurus that +should control a MARC record, is encoded in a fixed field using the mnemonic +Subj in the authority record.
Eleven thesauri associated with the Library of +Congress control set exist by default in the staff client. + +To access an existing thesaurus, click *Administration* -> *Server Administration* -> +*Authority Control Sets*, and choose the hyperlinked thesaurus that you +want to access, or click *Administration* -> *Server Administration* -> *Authority Thesauri*. + + +=== Add a Thesaurus === + +. Click *Administration* -> *Server Administration* -> *Authority Control Sets*, +and choose the hyperlinked thesaurus that you want to access, or click *Admin* +-> *Server Administration* -> *Authority Thesauri*. +. Click *New Thesaurus*. +. Add a *Thesaurus Code*. Enter any single, upper case character. +This character will be entered in the fixed fields of the MARC record. +. Add a *Name* to the thesaurus. Enter any number of characters. +. Add a *Description* of the thesaurus. Enter any number of characters. + +image::media/Authority_Control_Sets2.jpg[Authority_Control_Sets2] + +== Authority Fields == + + +Authority fields indicate the tags and subfields that should be entered in the +authority record. Authority fields also enable you to specify the type of data +that should be entered in a tag. For example, in an authority record governed +by a Library of Congress control set, the 100 tag would contain a "Heading - +Personal Name." Authority fields also enable you to create the corresponding +tag in the bibliographic record that would contain the same data. + +=== Create an Authority Field === + +. Click *Administration* -> *Server Administration* -> *Authority Control Sets*. +. Click *Authority Fields*. The number in parentheses indicates the number of +authority fields that have been created for the control set. +. Click *New Authority Field*. +. Add a *Name* to the authority field. Enter any number of characters. +. Add a *Description* to describe the type of data that should be entered in +this tag. Enter any number of characters. +. 
Select a *Main Entry* if you are linking the tag(s) to another entry. +. Add a *Tag* in the authority record. +. Add a subfield in the authority record. Multiple subfields should be entered +without commas or spaces. +. Add a *Non-filing indicator* (either 1 or 2) to denote which indicator +contains non-filing information. Leave empty if not applicable. + +. Click *Save*. ++ +image::media/Authority_Control_Sets_Fields_Edit.png[Authority Fields edit form] ++ +. Create the corresponding tag in the bibliographic record that should contain +this information. Click the *None* link in the *Controlled Bib Fields* column. +. Click *New Control Set Bib Field*. +. Add the corresponding tag in the bibliographic record. +. Click *Save*. + +image::media/Authority_Control_Sets4.jpg[Authority_Control_Sets4] + + + +== Browse Axes == + +Authority records can be browsed, by default, along five axes: author, series, +subject, title, and topic. Use the *Browse Axes* feature to create additional +axes. + + +=== Create a new Browse Axis === + +. Click *Administration* -> *Server Administration* -> *Authority Browse Axes* +. Click *New Browse Axis*. +. Add a *code*. Do not enter any spaces. +. Add a *name* to the axis that will appear in the OPAC. Enter any number of +characters. +. Add a *description* of the axis. Enter any number of characters. +. Add a *sorter attribute*. The sorter attribute indicates the order in which +the results will be displayed. ++ +image::media/Authority_Control_Sets5.jpg[Authority_Control_Sets5] +. Assign the axis to an authority so that users can find the authority record +when browsing authorities. Click *Administration* -> *Server Administration* -> +*Authority Control Sets*. +. Choose the control set to which you will add the axis. Click *Authority +Fields*. ++ +image::media/Authority_Control_Sets_Fields.png[Authority fields link] + +. Click the link in the *Axes* column of the tag of your choice. +. Click *New Browse Axis-Authority Field Map*. +. 
Select an *Axis* from the drop down menu. +. Click *Save*. + +image::media/Authority_Control_Sets6.jpg[Authority_Control_Sets6] + + +*Permissions to use this Feature* + + +To use authority control sets, you will need the following permissions: + +* CREATE_AUTHORITY_CONTROL_SET +* UPDATE_AUTHORITY_CONTROL_SET +* DELETE_AUTHORITY_CONTROL_SET + diff --git a/docs/modules/admin/pages/auto_suggest_search.adoc b/docs/modules/admin/pages/auto_suggest_search.adoc new file mode 100644 index 0000000000..23d7bf58c1 --- /dev/null +++ b/docs/modules/admin/pages/auto_suggest_search.adoc @@ -0,0 +1,30 @@ += Auto Suggest in Catalog Search = +:toc: + +The auto suggest feature offers suggestions for completing search terms as the user enters their search query. Ten suggestions are the default, but the number of suggestions is configurable at +the database level. Scroll through suggestions with your mouse, or use the arrow keys. Select a suggestion to view records that are linked to +this suggestion. This feature is not turned on by default. You must turn it on in the Administration module. + + +== Enabling this Feature == + +. To enable this feature, click *Administration* -> *Server Administration* -> *Global Flags*. +. Scroll down to item 10, OPAC. +. Double click anywhere in the row to edit the fields. +. Check the box adjacent to *Enabled* to turn on the feature. +. The *Value* field is optional. If you checked *Enabled* in step 4, and you leave this field empty, then Evergreen will only suggest searches for which there are any corresponding MARC records. ++ +NOTE: If you checked *Enabled* in step 4, and you enter the string, *opac_visible*, into this field, then Evergreen will suggest searches for which +there are matching MARC records with copies within your search scope. For example, it will suggest MARC records with copies at your branch. ++ +.
Click *Save*.
+
+image::media/Auto_Suggest_in_Catalog_Search2.jpg[Auto_Suggest_in_Catalog_Search2]
+
+== Using this Feature ==
+
+. Enter search terms into the basic search field. Evergreen will automatically suggest search terms.
+. Select a suggestion to view records that are linked to this suggestion.
+
+image::media/Auto_Suggest_in_Catalog_Search1.jpg[Auto_Suggest_in_Catalog_Search1]
+
diff --git a/docs/modules/admin/pages/autorenewals.adoc b/docs/modules/admin/pages/autorenewals.adoc
new file mode 100644
index 0000000000..0222a7c25f
--- /dev/null
+++ b/docs/modules/admin/pages/autorenewals.adoc
@@ -0,0 +1,45 @@
+= Autorenewals in Evergreen =
+:toc:
+
+== Introduction ==
+
+Circulation policies in Evergreen can now be configured to automatically renew items checked out on patron accounts. Circulations will be renewed automatically and patrons will not need to log in to their OPAC accounts or ask library staff to renew materials.
+
+Autorenewals are set in the Circulation Duration Rules, which allows this feature to be applied to selected circulation policies. Effectively, this makes autorenewals configurable by patron group, organizational unit or library, and circulation modifier.
+
+== Configure Autorenewals ==
+
+Autorenewals are configured in *Administration -> Server Administration -> Circulation Duration Rules*.
+
+Enter the number of automatic renewals allowed in the new field called _max_auto_renewals_. The field called _max_renewals_ will still set the maximum number of manual renewals, whether staff- or patron-initiated. Typically, the _max_renewals_ value will be greater than _max_auto_renewals_, so that even if no more autorenewals are allowed, a patron may still renew via the OPAC. 
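+
+For sites that manage configuration from the command line, the same values can be set with `psql`. This is a sketch only: the rule name and renewal counts below are examples, although `config.rule_circ_duration` and its _max_renewals_/_max_auto_renewals_ columns are part of the stock Evergreen schema.
+
+[source,bash]
+----
+# allow 2 automatic renewals out of 4 total renewals on one duration rule;
+# "default" is an example rule name -- check config.rule_circ_duration first
+psql -U evergreen -d evergreen -c \
+  "UPDATE config.rule_circ_duration
+      SET max_auto_renewals = 2, max_renewals = 4
+    WHERE name = 'default';"
+----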
+
+image::media/autorenew_circdur.PNG[Autorenewals in Circulation Duration Rules]
+
+The Circulation Duration Rule can then be applied to specific circulation policies (*Administration -> Local Administration -> Circulation Policies*) to implement autorenewals in Evergreen.
+
+== Autorenewal Notices and Action Triggers ==
+
+Two new action triggers have been added to Evergreen for use with autorenewals. They can be found and configured in *Administration -> Local Administration -> Notifications/Action Triggers*.
+
+* Autorenew
+- Uses the checkout.due hook to automatically renew circulations before they are due.
+- Autorenewals will not occur if the item has holds, exceeds the maximum number of autorenewals allowed, or if the patron has been blocked from renewing items.
+
+* AutorenewNotify
+- Email notification to inform patrons when their materials are automatically renewed or when they are not automatically renewed because they meet one of the criteria listed above.
+- This notice can also be configured as an SMS notification.
+- This notice does not change or interact with the Courtesy Notice (Pre-due Notice) that is also available in Evergreen. Libraries should evaluate whether they want to use both Courtesy Notices and Autorenewal notices.
+
+Sample of successful autorenewal notification:
+
+image::media/autorenew_renewnotice.PNG[Notification of Successful Autorenewal]
+
+Sample of blocked autorenewal notification:
+
+image::media/autorenew_norenewnotice.PNG[Notification of Blocked Autorenewal]
+
+== Autorenewals in Patron Accounts ==
+
+A new column called _AutoRenewalsRemaining_ indicates how many autorenewals are available for a transaction. 
+ +image::media/autorenew_itemsout.PNG[Autorenewals Remaining in Patron Items Out] diff --git a/docs/modules/admin/pages/backups.adoc b/docs/modules/admin/pages/backups.adoc new file mode 100644 index 0000000000..6ab02a0e97 --- /dev/null +++ b/docs/modules/admin/pages/backups.adoc @@ -0,0 +1,202 @@ += Backing up your Evergreen System = +:toc: + +== Database backups == + +Although it might seem pessimistic, spending some of your limited time preparing for disaster is one of +the best investments you can make for the long-term health of your Evergreen system. If one of your +servers crashes and burns, you want to be confident that you can get a working system back in place -- +whether it is your database server that suffers, or an Evergreen application server. + +At a minimum, you need to be able to recover your system's data from your PostgreSQL database server: +patron information, circulation transactions, bibliographic records, and the like. If all else fails, +you can at least restore that data to a stock Evergreen system to enable your staff and patrons to find +and circulate materials while you work on restoring your local customizations such as branding, colors, +or additional functionality. This section describes how to back up your data so that you or a colleague +can help you recover from various disaster scenarios. + +=== Creating logical database backups === + +The simplest method to back up your PostgreSQL data is to use the `pg_dump` utility to create a logical +backup of your database. Logical backups have the advantage of taking up minimal space, as the indexes +derived from the data are not part of the backup. For example, an Evergreen database with 2.25 million +records and 3 years of transactions that takes over 120 GB on disk creates just a 7.0 GB compressed +backup file. 
The drawback to this method is that you can only recover the data at the exact point in time
+at which the backup began; any updates, additions, or deletions of your data since the backup began will
+not be captured. In addition, when you restore a logical backup, the database server has to recreate all
+of the indexes--so it can take several hours to restore a logical backup of that 2.25 million record
+Evergreen database.
+
+As the effort and server space required for logical database backups are minimal, your first step towards
+preparing for disaster should be to automate regular logical database backups. You should also ensure
+that the backups are stored in a different physical location, so that if a flood or other disaster strikes
+your primary server room, you will not lose your logical backup at the same time.
+
+To create a logical dump of your PostgreSQL database:
+
+. Issue the command to back up your database: `pg_dump -Fc <database-name> > <backup-filename>`. If you
+are not running the command as the postgres user on the database server itself, you may need to include
+options such as `-U <username>` and `-h <hostname>` to connect to the database server. You can use a
+newer version of the PostgreSQL client to run `pg_dump` against an older version of the PostgreSQL
+server if your client and server operating systems differ. The `-Fc` option specifies the "custom"
+format: a compressed format that gives you a great deal of flexibility at restore time (for example,
+restoring only one table from the database instead of the entire schema).
+. If you created the logical backup on the database server itself, copy it to a server located in a
+different physical location.
+
+You should establish a routine of nightly logical backups of your database, with older logical backups
+being automatically deleted after a given interval.
+
+=== Restoring from logical database backups ===
+
+To increase your confidence in the safety of your data, you should regularly test your ability to
+restore from a logical backup. 
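+
+The full restore procedure is described below; condensed into a transcript, a periodic restore test might look like the following sketch. The database name `restore_test`, the dump path, and the sample query are examples only, although `biblio.record_entry` is part of the stock Evergreen schema:
+
+[source,bash]
+----
+# as the postgres user on a scratch server: restore the latest dump into a
+# throwaway database, run a quick sanity check, then clean up
+createdb --template=template0 --lc-ctype=C --lc-collate=C restore_test
+pg_restore -j 2 -d restore_test /backups/evergreen_latest.dump
+psql -d restore_test -c "SELECT count(*) FROM biblio.record_entry;"
+dropdb restore_test
+----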
Restoring a logical backup that you created using the custom format
+requires the use of the `pg_restore` tool as follows:
+
+. On the server on which you plan to restore the logical backup, ensure that you have installed
+PostgreSQL and the corresponding server package prerequisites. The `Makefile.install` prerequisite
+installer that came with your version of Evergreen contains an installation target that should
+satisfy these requirements. Refer to the installation documentation for more details.
+. As the `postgres` user, create a new database using the `createdb` command into which you will
+restore the data. Base the new database on the _template0_ template database to enable the
+combination of UTF8 encoding and C locale options, and specify the character type and collation
+type as "C" using the `--lc-ctype` and `--lc-collate` parameters. For example, to create a new
+database called "testrestore": `createdb --template=template0 --lc-ctype=C --lc-collate=C testrestore`
+. As the `postgres` user, restore the logical backup into your newly created database using
+the `pg_restore` command. You can use the `-j` parameter to use more CPU cores at a time to make
+your recovery operation faster. If your target database is hosted on a different server, you can
+use the `-U <username>` and `-h <hostname>` options to connect to that server. For example,
+to restore the logical backup from a file named evergreen_20121212.dump into the "testrestore"
+database on a system with 2 CPU cores: `pg_restore -j 2 -d testrestore evergreen_20121212.dump`
+
+=== Creating physical database backups with support for point-in-time recovery ===
+
+While logical database backups require very little space, they also have the disadvantage of
+taking a great deal of time to restore for anything other than the smallest of Evergreen systems. 
+
+Physical database backups are little more than a copy of the database file system, meaning that
+the space required for each physical backup will match the space used by your production database.
+However, physical backups offer the great advantage of almost instantaneous recovery, because the
+indexes already exist and simply need to be validated when you begin database recovery. Your
+backup server should match the configuration of your master server as closely as possible, including
+the version of the operating system and PostgreSQL.
+
+Like logical backups, physical backups also represent a snapshot of the data at the point in time
+at which you began the backup. However, if you combine physical backups with write-ahead-log (WAL)
+segment archiving, you can restore a version of your database that represents any point in time
+between the time the backup began and the time at which the last WAL segment was archived, a
+feature referred to as point-in-time recovery (PITR). PITR enables you to undo the damage that an
+accidentally or deliberately harmful UPDATE or DELETE statement could inflict on your production
+data, so while the recovery process can be complex, it provides fine-grained insurance for the
+integrity of your data when you run upgrade scripts against your database, deploy new custom
+functionality, or make global changes to your data.
+
+To set up WAL archiving for your production Evergreen database, you need to modify your PostgreSQL
+configuration (typically located on Debian and Ubuntu servers in
+`/etc/postgresql/<version>/postgresql.conf`):
+
+. Change the value of `archive_mode` to `on`.
+. Set the value of `archive_command` to a command that accepts the parameters `%f` (representing the
+file name of the WAL segment) and `%p` (representing the complete path name for the WAL segment,
+including the file name). You should copy the WAL segments to a remote file system that can be read
+by the same server on which you plan to create your physical backups. 
For example, if `/data/wal`
+represents a remote file system to which your database server can write, a possible value of
+`archive_command` could be: `test ! -f /data/wal/%f && cp %p /data/wal/%f`, which effectively tests
+to see if the destination file already exists, and if it does not, copies the WAL segment to that
+location. This command can be and often is much more complex (for example, using `scp` or `rsync`
+to transfer the file to the remote destination rather than relying on a network share), but you
+can start with something simple.
+
+Once you have modified your PostgreSQL configuration, you need to restart the PostgreSQL server
+before the configuration changes will take hold:
+
+. Stop your OpenSRF services.
+. Restart your PostgreSQL server.
+. Start your OpenSRF services and restart your Apache HTTPD server.
+
+To create a physical backup of your production Evergreen database:
+
+. From your backup server, issue the
+`pg_basebackup -x -D <destination-directory> -U <username> -h <hostname>`
+command to create a physical backup of the database on your backup server.
+
+You should establish a process for creating regular physical backups at periodic intervals,
+bearing in mind that the longer the interval between physical backups, the more WAL segments
+the backup database will have to replay at recovery time to get back to the most recent changes
+to the database. For example, to be able to relatively quickly restore the state of your database
+to any point in time over the past four weeks, you might take physical backups at weekly intervals,
+keeping the last four physical backups and all of the corresponding WAL segments.
+
+=== Creating a replicated database ===
+
+If you have a separate server that you can use to run a replica of your database, consider
+replicating your database to that server. 
In the event that your primary database server suffers a
+hardware failure, having a database replica gives you the ability to fail over to your database
+replica with very little downtime and little or no data loss. You can also improve the performance of
+your overall system by directing some read-only operations, such as reporting, to the database replica.
+In this section, we describe how to replicate your database using PostgreSQL's streaming replication
+support.
+
+You need to prepare your master PostgreSQL database server to support streaming replicas with several
+configuration changes. The PostgreSQL configuration file is typically located on Debian and Ubuntu
+servers at `/etc/postgresql/<version>/postgresql.conf`. The PostgreSQL host-based authentication
+(`pg_hba.conf`) configuration file is typically located on Debian and Ubuntu servers at
+`/etc/postgresql/<version>/pg_hba.conf`. Perform the following steps on your master database server:
+
+. Turn on streaming replication support. In `postgresql.conf` on your master database server,
+change `max_wal_senders` from the default value of 0 to the number of streaming replicas that you need
+to support. Note that these connections count as physical connections for the sake of the
+`max_connections` parameter, so you might need to increase that value at the same time.
+. Enable your streaming replica to endure brief network outages without having to rely on the
+archived WAL segments to catch up to the master. In `postgresql.conf` on your production database server,
+change `wal_keep_segments` to a value such as 32 or 64.
+. Increase the maximum number of log file segments between automatic WAL checkpoints. In `postgresql.conf`
+on your production database server, change `checkpoint_segments` from its default of 3 to a value such as
+16 or 32. This improves the performance of your database at the cost of additional disk space.
+. Create a database user for the specific purpose of replication. 
As the postgres user on the production
+database server, issue the following commands, where replicant represents the name of the new user:
++
+[source,bash]
+createuser replicant
+psql -d <database> -c 'ALTER ROLE replicant WITH REPLICATION;'
++
+. Enable your replica database to connect to your master database server as a streaming replica. In
+`pg_hba.conf` on your master database server, add a line to enable the database user replicant to connect
+to the master database server from IP address 192.168.0.164:
++
+[source,perl]
+host replication replicant 192.168.0.164/32 md5
++
+. To enable the changes to take effect, restart your PostgreSQL database server.
+
+To avoid downtime, you can prepare your master database server for streaming replication at any maintenance
+interval; then weeks or months later, when your replica server environment is available, you can begin
+streaming replication. Once you are ready to set up the streaming replica, perform the following steps on
+your replica server:
+
+. Ensure that the version of PostgreSQL on your replica server matches the version running on your production
+server. A difference in the minor version (for example, 9.1.3 versus 9.1.5) will not prevent streaming
+replication from working, but an exact match is recommended.
+. Create a physical backup of the master database server.
+. Add a `recovery.conf` file to your replica database configuration directory. This file contains the
+information required to begin recovery once you start the replica database:
++
+[source,perl]
+# turn on standby mode, disabling writes to the database
+standby_mode = 'on'
+# assumes WAL segments are available at network share /data/wal
+restore_command = 'cp /data/wal/%f %p'
+# connect to the master database to begin streaming replication
+primary_conninfo = 'host=kochab.cs.uoguelph.ca user=replicant password=<password>'
++
+. Start the PostgreSQL database server on your replica server. It should connect to the master. 
If the
+physical backup did not take too long and you had a high enough value for `wal_keep_segments` set on your
+master server, the replica should begin streaming replication. Otherwise, it will replay WAL segments
+until it catches up enough to begin streaming replication.
+. Ensure that the streaming replication is working. Check the PostgreSQL logs on your replica server and
+master server for any errors. Connect to the replica database as a regular database user and check for
+recent changes that have been made to your master server.
+
+Congratulations, you now have a streaming replica database that reflects the latest changes to your Evergreen
+data! Combined with a routine of regular logical and physical database backups and WAL segment archiving
+stored on a remote server, you have a significant insurance policy for your system's data in the event that
+disaster does strike.
+
diff --git a/docs/modules/admin/pages/booking-admin.adoc b/docs/modules/admin/pages/booking-admin.adoc
new file mode 100644
index 0000000000..993bc9a5cf
--- /dev/null
+++ b/docs/modules/admin/pages/booking-admin.adoc
@@ -0,0 +1,190 @@
+= Booking Module Administration =
+:toc:
+
+== Creating Bookable Non-Bibliographic Resources ==
+
+Staff with the required permissions (Circulator and above) can create bookable non-bibliographic resources such as laptops, projectors, and meeting rooms.
+
+The following pieces make up a non-bibliographic resource:
+
+* Resource Type
+* Resource Attribute
+* Resource Attribute Values
+* Resource
+* Resource Attribute Map
+
+You need to create resource types and resource attributes (features of the resource types), and add booking items (resources) to individual resource types. Each resource attribute may have multiple values. You need to link the applicable features (resource attributes and values) to individual items (resources) through the Resource Attribute Map. 
Before you create resources (booking items) you need to have a resource type and associated resource attributes and values, if any, for them.
+
+=== Create New Resource Type ===
+
+1) Select Administration -> Booking Administration -> Resource Types.
+
+image::media/booking-create-resourcetype_webclient-1.png[]
+
+2) A list of current resource types will appear. Use the Back and Next buttons to browse the whole list.
+
+image::media/booking-create-resourcetype-2.png[]
+
+[NOTE]
+You may also see cataloged items in the list. Those items have been marked bookable or booked before.
+
+
+3) To create a new resource type, click New Resource Type in the top right corner.
+
+image::media/booking-create-resourcetype-3.png[]
+
+4) A box will appear in which you create your new type of resource.
+
+image::media/booking-create-bookable-1.png[]
+
+* Resource Type Name - Give your resource a name.
+* Fine Interval - How often will fines be charged? This period can be input in several ways:
+
+[NOTE]
+====================================================================
+** second(s), minute(s), hour(s), day(s), week(s), month(s), year(s)
+** sec(s), min(s)
+** s, m, h
+** 00:00:30, 00:01:00, 01:00:00
+====================================================================
+
+* Fine Amount - The amount that will be charged at each Fine Interval.
+* Owning Library - The home library of the resource.
+* Catalog Item - (Function not currently available.)
+* Transferable - This allows the item to be transferred between libraries.
+* Inter-booking and Inter-circulation Interval - The amount of time required by your library between the return of a resource and a new reservation for the resource. This interval uses the same input conventions as the Fine Interval.
+* Max Fine Amount - The amount at which fines will stop generating.
+
+5) Click Save when you have entered the needed information. 
+
+image::media/booking-create-resourcetype-4.png[]
+
+6) The new resource type will appear in the list.
+
+image::media/booking-create-resourcetype-5.png[]
+
+=== Create New Resource Attribute ===
+
+1) Select Administration -> Booking Administration -> Resource Attributes.
+
+2) Click New Resource Attribute in the top right corner.
+
+3) A box will appear in which you can add the attributes of the resource. Attributes are categories of descriptive information that are provided to the staff member when the booking request is made. For example, an attribute of a projector may be the type of projector. Other attributes might be the number of seats available in a room, or the computing platform of a laptop.
+
+image::media/booking-create-bookable-2.png[]
+
+* Resource Attribute Name - Give your attribute a name.
+* Owning Library - The home library of the resource.
+* Resource Type - Type the first letter of the resource type's name to display a list, then choose the Resource Type to which the attribute applies.
+* Is Required - (Function not currently available.)
+
+4) Click Save when the necessary information has been entered.
+
+5) The added attribute will appear in the list.
+
+[NOTE]
+One resource type may have multiple attributes. You may repeat the above procedure to add more.
+
+=== Create New Resource Attribute Value ===
+
+1) One resource attribute may have multiple values. To add a new attribute value, select Administration -> Booking Administration -> Resource Attribute Values.
+
+2) Click New Resource Attribute Value in the top right corner.
+
+3) A box will appear in which you assign a value to a particular attribute. Values can be numbers, words, or a combination of the two that describe the particular aspects of the resource that have been defined as Attributes. As all values appear on the same list for selection, values should be as unique as possible. For example, a laptop may have a computing platform that is either PC or Mac. 
+
+image::media/booking-create-bookable-3.png[]
+
+* Owning Library - The home library of the resource.
+* Resource Attribute - The attribute you wish to assign the value to.
+* Valid Value - Enter the value for your attribute.
+
+4) Click Save when the required information has been added.
+
+5) The attribute value will appear in the list. Each attribute should have at least two values attached to it; repeat this process for all applicable attribute values.
+
+=== Create New Resource ===
+
+1) Add items to a resource type. Click Administration -> Booking Administration -> Resources.
+
+2) Click New Resource in the top right corner.
+
+3) A box will appear. Add information for the resource.
+
+image::media/booking-create-bookable-4.png[]
+
+* Owning Library - The home library of the resource.
+* Resource Type - Type the first letter of the resource type's name to display a list, then select the resource type for your item.
+* Barcode - Barcode for the resource.
+* Overbook - This allows a single item to be reserved, picked up, and returned by multiple patrons during overlapping or identical time periods.
+* Is Deposit Required - (Function not currently available.)
+* Deposit Amount - (Function not currently available.)
+* User Fee - (Function not currently available.)
+
+4) Click Save when the required information has been added.
+
+5) The resource will appear in the list.
+
+[NOTE]
+One resource type may have multiple resources attached.
+
+=== Map Resource Attributes and Values to Resources ===
+
+1) Use Resource Attribute Maps to bring together the resources and their attributes and values. Select Administration -> Booking Administration -> Resource Attribute Maps.
+
+2) Click New Resource Attribute Map in the top right corner.
+
+3) A box will appear in which you will map your attributes and values to your resources.
+
+image::media/booking-create-bookable-5.png[]
+
+* Resource - Enter the barcode of your resource. 
+* Resource Attribute - Select an attribute that belongs to the Resource Type.
+* Attribute Value - Select a value that belongs to your chosen attribute and describes your resource. If your attribute and value do not belong together, you will be unable to save.
+
+4) Click Save once you have entered the required information.
+
+[NOTE]
+A resource may have multiple attributes and values. Repeat the above steps to map them all.
+
+5) The resource attribute map will appear in the list.
+
+Once all attributes have been mapped, your resource will be part of a hierarchy similar to the example below.
+
+image::media/booking-create-bookable-6.png[]
+
+
+== Editing Non-Bibliographic Resources ==
+
+Staff with the required permissions can edit aspects of existing non-bibliographic resources. For example, a resource type can be edited in the event that the fine amount for a laptop changes from $2.00 to $5.00.
+
+=== Editing Resource Types ===
+
+1) Bring up your list of resource types. Select Administration -> Booking Administration -> Resource Types.
+
+2) A list of current resource types will appear.
+
+3) Double click anywhere on the line of the resource type you would like to edit.
+
+4) The resource type box will appear. Make your changes and click Save.
+
+5) Following the same procedure, you may edit Resource Attributes, Attribute Values, Resources, and Attribute Maps by selecting them under Administration -> Booking Administration.
+
+
+
+
+== Deleting Non-bibliographic Resources ==
+
+1) To delete a booking resource, go to Administration -> Booking Administration -> Resources.
+
+2) Select the checkbox in front of the resource you want to delete. Click Delete Selected. The resource will disappear from the list.
+
+Following the same procedure, you may delete Resource Attribute Maps.
+
+You may also delete Resource Attribute Values, Resource Attributes, and Resource Types. 
But you have to delete them in the reverse order of creation, to make sure an entry is not in use when you try to delete it.
+
+This is the deletion order: Resource Attribute Map/Resources -> Resource Attribute Values -> Resource Attributes -> Resource Types.
+
+
+
+
diff --git a/docs/modules/admin/pages/circing_uncataloged_materials.adoc b/docs/modules/admin/pages/circing_uncataloged_materials.adoc
new file mode 100644
index 0000000000..8389a83876
--- /dev/null
+++ b/docs/modules/admin/pages/circing_uncataloged_materials.adoc
@@ -0,0 +1,73 @@
+== Circulating uncataloged materials ==
+
+=== Introduction ===
+
+This section discusses settings for circulating items that are not cataloged.
+Evergreen offers two ways to circulate an item that is not in the catalog:
+
+* Pre-cataloged items (also known as on-the-fly items) have a barcode, as
+well as some basic metadata which staff members enter at the time of checkout.
+These are represented in Evergreen with an item record which has to be manually
+deleted or transferred when it is no longer needed.
+
+* Non-cataloged items (also known as ephemeral items) do not have barcodes,
+have no metadata, and are not represented with an item record. No fines
+accrue on these materials, but Evergreen does collect statistics on these
+circulations.
+
+=== Pre-cataloged item settings ===
+
+indexterm:[on-the-fly circulation]
+indexterm:[pre-cataloged items,routing to a different library]
+
+By default, when a pre-cataloged item is created, Evergreen sets the _Circ Library_
+field to the library where it was checked out. You may change this so that the
+circ library is set to a different library. This can be helpful in cases where the
+cataloger who fixes pre-cataloged items is at another library, and you'd like all
+pre-cataloged items to be routed to that cataloger's library when they are returned.
+
+To change this setting:
+
+. Go to Administration > Local Administration > Library Settings Editor.
+. 
Choose _Pre-cat Item Circ Lib_. +. Click _Edit_. +. Select the appropriate context. For example, if all pre-cataloged items in your +system should have the same circ library, you should choose your system as the +context. +. Type in the shortname of the library that should be in the circ lib field. Make +sure to type this correctly, or Evergreen won't be able to create pre-cataloged +items. + +NOTE: Evergreen always sets the owning library of pre-cataloged items to be the +consortium. + +=== Non-cataloged item settings === + +indexterm:[ephemeral items] + +In Evergreen, libraries may elect to create their own local non-cataloged item +types. For example, you may choose to circulate non-cataloged paperbacks or magazine +back-issues, but not wish to catalog them. + +==== Adding a new non-cataloged type ==== + +. Go to Administration > Local Administration > Non-Cataloged Types Editor. +. Under _Create a new non-cataloged type_, start filling out the appropriate + information. +. Choose an appropriate duration. This period of time will be used to calculate + a due date that is displayed to the patron on the patron's receipt and _My Account_ + view in the public catalog. The item will be automatically removed from the + _My Account_ view the day after the due date. +. The _Circulate In-House?_ checkbox is only for your records. This checkbox does + not affect how these materials circulate. +. Click the _Create_ button when you are done. + +image::media/noncataloged_type_add.png[] + +==== Deleting a non-cataloged type ==== + +. Go to Administration > Local Administration > Non-Cataloged Types Editor. +. Click the _Delete_ button next to the type you wish to delete. Note that + if any non-cataloged items of this type have ever been entered, you will + not be able to delete it. 
+ diff --git a/docs/modules/admin/pages/circulation_limit_groups.adoc b/docs/modules/admin/pages/circulation_limit_groups.adoc new file mode 100644 index 0000000000..e3dda15318 --- /dev/null +++ b/docs/modules/admin/pages/circulation_limit_groups.adoc @@ -0,0 +1,46 @@ += Circulation Limit Sets = +:toc: + +== Maximum Checkout by Shelving Location == + +This feature enables you to specify the maximum number of checkouts of items by +shelving location and is an addition to the circulation limit sets. Circulation +limit sets refine circulation policies by limiting the number of items that +users can check out. Circulation limit sets are linked by name to circulation +policies. + +To limit checkouts by shelving location: + +. Click *Administration -> Local Administration -> Circulation Limit Sets*. +. Click *New* to create a new circulation limit set. +. In the *Owning Library* field, select the library that can create and edit +this limit set. +. Enter a *Name* for the circulation set. You will select the *Name* to link +the circulation limit set to a circulation policy. +. Enter the number of *Items Out* that a user can take from this shelving location. +. Enter the *Min Depth*, or the minimum depth in the org tree that Evergreen +will consider as valid circulation libraries for counting items out. The min +depth is based on org unit type depths. For example, if you want the items in +all of the circulating libraries in your consortium to be eligible for +restriction by this limit set when it is applied to a circulation policy, then +enter a zero (0) in this field. +. Check the box adjacent to *Global Flag* if you want all of the org units in +your consortium to be restricted by this limit set when it is applied to a +circulation policy. Otherwise, Evergreen will only apply the limit to the direct +ancestors and descendants of the owning library. +. Enter a brief *Description* of the circulation limit set. +. Click *Save*. 
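+
+The resulting rows can also be reviewed from the command line; this is a sketch assuming the stock `config.circ_limit_set` table (column names are worth verifying on your own system):
+
+[source,bash]
+----
+# list circulation limit sets and their key values
+psql -U evergreen -d evergreen -c \
+  "SELECT name, items_out, depth, global FROM config.circ_limit_set;"
+----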
+ +image::media/Maximum_Checkout_by_Copy_Location1.jpg[Maximum_Checkout_by_Copy_Location1] + +To link the circulation limit set to a circulation policy: + +. Click *Administration* -> *Local Administration* -> *Circulation Policies* +. Select an existing circulation policy, or create a new one. +. Scroll down to the *Linked Limit Sets*. +. Select the *Name* of the limit set that you want to add to the circulation +policy. +. Click *Add*. +. Click *Save*. + +image::media/Maximum_Checkout_by_Copy_Location2.jpg[Maximum_Checkout_by_Copy_Location2] diff --git a/docs/modules/admin/pages/closed_dates.adoc b/docs/modules/admin/pages/closed_dates.adoc new file mode 100644 index 0000000000..bceb70591a --- /dev/null +++ b/docs/modules/admin/pages/closed_dates.adoc @@ -0,0 +1,48 @@ += Set closed dates using the Closed Dates Editor = +:toc: + +indexterm:[Closed Dates] + +These dates are in addition to your regular weekly closed days. Both regular closed days and those entered in the Closed Dates Editor affect due dates and fines: + +* *Due dates.* Due dates that would fall on closed days are automatically pushed forward to the next open day. Likewise, if an item is checked out at 8pm, for example, and would normally be due on a day when the library closes before 8pm, Evergreen pushes the due date forward to the next open day. +* *Overdue fines.* Overdue fines may not be charged on days when the library is closed. This fine behavior depends on how the _Charge fines on overdue circulations when closed_ setting is configured in the Library Settings Editor. + +Closed dates do not affect the processing delays for Action/Triggers. For example, if your library has a trigger event that marks items as lost after 30 days, that 30 day period will include both open and closed dates. + +== Adding a closure == + +. Select _Administration > Local Administration_. +. Select _Closed Dates Editor_. +. Select type of closure: typically Single Day or Multiple Day. +. 
Click the Calendar gadget to select the All Day date or starting and ending + dates. +. Enter a Reason for closure (optional). +. Click *Apply to all of my libraries* if your organizational unit has children + units that will also be closed. This will add closed date entries to all of those + child libraries. ++ +[NOTE] +By default, creating a closed date in a parent organizational unit does _not_ also +close the child unit. For example, adding a system-level closure will not also +close all of that system's branches, unless you check the *Apply to all of my libraries* +box. ++ +. Click *Save*. + +image::media/closed_dates.png[] + +== Detailed closure == + +If your closed dates include a portion of a business day, you should create a detailed closing. + +. Select _Administration -> Local Administration_. +. Select _Closed Dates Editor_. +. Select _Add Detailed Closing_. +. Enter applicable dates, times, and a descriptive reason for the closing. +. Click Save. +. Check the Apply to all of my libraries box if your library is a multi-branch system and the closing applies to all of your branches. + diff --git a/docs/modules/admin/pages/cn_prefixes_and_suffixes.adoc b/docs/modules/admin/pages/cn_prefixes_and_suffixes.adoc new file mode 100644 index 0000000000..4d5ae6290a --- /dev/null +++ b/docs/modules/admin/pages/cn_prefixes_and_suffixes.adoc @@ -0,0 +1,43 @@ += Call Number Prefixes and Suffixes = +:toc: + +You can configure call number prefixes and suffixes in the Admin module. This feature ensures more precise cataloging because each cataloger will have access to an identical drop down menu of call number prefixes and suffixes that are used at their library. In addition, it may streamline cataloging workflow. Catalogers can use a drop down menu to enter call number prefixes and suffixes rather than entering them manually.

You can also run reports on call number prefixes and suffixes that would facilitate collection development and maintenance. + + +== Configure call number prefixes == + +Call number prefixes are codes that precede a call number. + +To configure call number prefixes: + +1. Select *Administration -> Server Administration -> Call Number Prefixes*. +2. Click *New Prefix*. +3. Enter the *call number label* that will appear on the item. +4. Select the *owning library* from the drop down menu. Staff at this library, and its descendant org units, with the appropriate permissions, will be able to apply this call number prefix. +5. Click *Save*. + + + +image::media/Call_Number_Prefixes_and_Suffixes_2_21.jpg[Call_Number_Prefixes_and_Suffixes_2_21] + + + +== Configure call number suffixes == + +Call number suffixes are codes that succeed a call number. + +To configure call number suffixes: + +1. Select *Administration -> Server Administration -> Call Number Suffixes*. +2. Click *New Suffix*. +3. Enter the *call number label* that will appear on the item. +4. Select the *owning library* from the drop down menu. Staff at this library, and its descendant org units, with the appropriate permissions, will be able to apply this call number suffix. +5. Click *Save*. + + +image::media/Call_Number_Prefixes_and_Suffixes_2_22.jpg[Call_Number_Prefixes_and_Suffixes_2_22] + + +== Apply Call Number Prefixes and Suffixes == + +You can apply call number prefixes and suffixes to items from a pre-configured list in the Holdings Editor. diff --git a/docs/modules/admin/pages/copy_locations.adoc b/docs/modules/admin/pages/copy_locations.adoc new file mode 100644 index 0000000000..bed58bb841 --- /dev/null +++ b/docs/modules/admin/pages/copy_locations.adoc @@ -0,0 +1,109 @@ += Administering shelving locations = +:toc: + +== Creating new shelving locations == + +. Click _Administration_. +. Click _Local Administration_. +. Click _Shelving Locations Editor_. +. Type the name of the shelving location. +. 
In _OPAC Visible_, choose whether you would like items in this shelving location + to appear in the catalog. +. In _Hold Verify_, +. In _Checkin Alert_, choose whether you would like a routing alert to appear + when an item in this location is checked in. This is intended for special + locations, such as 'Display', that may require special handling, or that + temporarily contain items that are not normally in that location. ++ +NOTE: By default, these alerts will only display when an item is checked in, _not_ +when it is used to record an in-house use. ++ +To also display these alerts when an item in your location is scanned for in-house +use, go to Administration > Local Administration > Library Settings Editor and +set _Display shelving location check in alert for in-house-use_ to True. ++ +. If you would like a prefix or suffix to be added to the call numbers of every + volume in this location, enter it. +. If you would like, add a URL to the _URL_ field. When a URL is entered in + this field, the associated shelving location will display as a link in the Public + Catalog summary display. This link can be useful for retrieving maps or other + directions to the shelving location to aid users in finding material. +. If you would like to override any item-level circulation/hold policies to + make sure that items in your new location can't circulate or be holdable, + choose _No_ in the appropriate field. If you choose _Yes_, Evergreen will + use the typical circulation and hold policies to determine circulation + abilities. + +== Deleting shelving locations == + +You may only delete a shelving location if: +. it doesn't contain any items, or +. it only contains deleted items. + +Evergreen preserves shelving locations in the database, so no statistical information +is lost when a shelving location is deleted. + +== Modifying shelving location order == + +. Go to _Administration_. +. Go to _Local Administration_. +. Click _Shelving Location Order_. +. 
Drag and drop the locations until you are satisfied with their order. +. Click _Apply changes_. + + +== Shelving location groups == + +.Use case +**** +Mayberry Public Library provides a scope allowing users to search for all +children's materials in their library. The library's children's scope +incorporates several shelving locations used at the library, including Picture +Books, Children's Fiction, Children's Non-Fiction, Easy Readers, and Children's +DVDs. The library also builds a similar scope for YA materials that incorporates +several shelving locations. +**** + +This feature allows staff to create and name sets of shelving locations to use as +a search filter in the catalog. OPAC-visible groups will display within the +library selector in the Public Catalog. When a user selects a group +and performs a search, the set of results will be limited to records that have +items in one of the shelving locations within the group. Groups can live at any +level of the library hierarchy and may include shelving locations from any parent +org unit or child org unit. + +NOTE: To work with Shelving Location Groups, you will need the ADMIN_COPY_LOCATION_GROUP +permission. + +=== Create a Shelving Location Group === + +. Click Administration -> Local Administration -> Shelving Location Groups. +. At the top of the screen is a drop down menu that displays the org unit tree. + Select the unit within the org tree to which you want to add a shelving location group. + The shelving locations associated with the org unit appear in the Shelving Locations column. +. In the column called _Location Groups_, click _New_. +. Choose how you want the shelving location group to display to patrons in the catalog's + org unit tree in the OPAC. By default, when you add a new shelving location group, the + group displays in the org unit tree beneath any branches or sub-libraries of its + parental org unit. 
If you check the box adjacent to Display above orgs, then the + group will appear above the branches or sub-libraries of its parental org unit. +. To make the shelving location group visible to users searching the public catalog, check + the box adjacent to Is OPAC visible? +. Enter a _Name_ for the shelving location group. +. Click Save. The name of the Shelving Location Group appears in the Location Groups. +. Select the shelving locations that you want to add to the group, and click Add. The shelving + locations will populate the middle column, Group Entries. +. The shelving location group is now visible in the org unit tree in the catalog. Search + the catalog to retrieve results from any of the shelving locations that you added to + the shelving location group. + +=== Order Shelving Location Groups === + +If you create more than one shelving location group, then you can order the groups in the +org unit tree. + +. Click Administration -> Local Administration -> Shelving Location Groups. +. Three icons appear next to each location group. Click on the icons to drag the shelving + location groups into the order in which you would like them to appear in the catalog. +. Search the catalog to view the reorder of the shelving location groups. + diff --git a/docs/modules/admin/pages/copy_statuses.adoc b/docs/modules/admin/pages/copy_statuses.adoc new file mode 100644 index 0000000000..915ea67926 --- /dev/null +++ b/docs/modules/admin/pages/copy_statuses.adoc @@ -0,0 +1,93 @@ += Item Status = +:toc: + +indexterm:[copy status] + +To navigate to the item status editor from the staff client menu, select +*Administration* -> *Server Administration* -> *Item Statuses*. + +The Item Status Editor is used to add, edit and delete statuses of items in +your system. + +For each status, you can set the following properties: + +* Holdable - If checked, users can place holds on items in this status, +provided there are no other flags or rules preventing holds. 
If unchecked, +users cannot place holds on items in this status. +* OPAC Visible - If checked, items in this status will be visible in the +public catalog. If unchecked, items in this status will not be visible in the +public catalog, but they will be visible when using the catalog in the staff +client. +* Sets item active - If checked, moving an item that does not yet have an +active date to this status will set the active date. If the item already has +an active date, then no changes will be made to the active date. If unchecked, +this status will never set the item's active date. +* Is Available - If checked, items with this status will appear in catalog +searches where "limit to available" is selected as a search filter. Also, +items with this status will check out without status warnings. +By default, the "Available" and "Reshelving" statuses have the "Is Available" +flag set. The flag may be applied to local/custom statuses via the item status +admin interface. + +Evergreen comes pre-loaded with a number of item statuses. + +.Stock item statuses and default settings +[options="header"] +|============================================== +|ID|Name|Holdable|OPAC Visible|Sets copy active +|0|Available|true|true|true +|1|Checked out|true|true|true +|2|Bindery|false|false|false +|3|Lost|false|false|false +|4|Missing|false|false|false +|5|In process|true|true|false +|6|In transit|true|true|false +|7|Reshelving|true|true|true +|8|On holds shelf|true|true|true +|9|On order|true|true|false +|10|ILL|false|false|true +|11|Cataloging|false|false|false +|12|Reserves|false|true|true +|13|Discard/Weed|false|false|false +|14|Damaged|false|false|false +|15|On reservation shelf|false|false|true +|16|Long Overdue|false|false|false +|17|Lost and Paid|false|false|false +|============================================== + +== Adding Item Statuses == + +. In the _New Status_ field, enter the name of the new status you wish to add. +. Click _Add_. +. 
Locate your new status and check the _Holdable_ check box if you wish to allow +users to place holds on items in this status. Check _OPAC Visible_ if you wish +for this status to appear in the public catalog. Check _Sets copy active_ if you +wish for this status to set the active date for new items. +. Click _Save Changes_ at the bottom of the screen to save changes to the new +status. + +image::media/copy_status_add.png[Adding item statuses] + +== Deleting Item Statuses == + +. Highlight the statuses you wish to delete. Ctrl-click to select more than one +status. +. Click _Delete Selected_. +. Click _OK_ to verify. + +image::media/copy_status_delete.png[Deleting item statuses] + +[NOTE] +You will not be able to delete statuses if items currently exist with that +status. + +== Editing Item Statuses == + +. Double click on a status name to change its name. Enter the new name. + +. To change whether a status is holdable, visible in the OPAC, or sets the +item's active date, check or uncheck the relevant checkbox. + +. Once you have finished editing the statuses, remember to click Save Changes. + +image::media/copy_status_edit.png[Editing item statuses] diff --git a/docs/modules/admin/pages/copy_tags_admin.adoc b/docs/modules/admin/pages/copy_tags_admin.adoc new file mode 100644 index 0000000000..79697b0a9e --- /dev/null +++ b/docs/modules/admin/pages/copy_tags_admin.adoc @@ -0,0 +1,70 @@ += Item Tags (Digital Bookplates) = +:toc: + +indexterm:[copy tags] + +Item Tags allow staff to apply custom, pre-defined labels or tags to items. Item tags are visible in the public catalog and are searchable in both the staff client and public catalog based on configuration. This feature was designed to be used for Digital Bookplates to attach donation or memorial information to items, but may be used for broader purposes to tag items. 
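The two-level structure described in the sections below — tag types that scope and group the individual tags staff can apply — can be sketched as plain records. The field names follow the editor labels documented here; the classes themselves are hypothetical illustrations, not Evergreen's data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemTagType:
    code: str    # short identifier for the tag type
    label: str   # shown in drop-down menus
    owner: str   # org unit that can see and use the type

@dataclass
class ItemTag:
    tag_type: ItemTagType
    label: str
    value: str                  # what displays in the catalog
    owner: str                  # org unit where the tag can be used
    staff_note: Optional[str] = None
    opac_visible: bool = True   # searchable/viewable in the OPAC?

bookplates = ItemTagType("bookplate", "Bookplates", "CONS")
tag = ItemTag(bookplates, "Smith Donation",
              "Donated by the Smith family", "BR1")
```

The point of the split is that a single tag type (e.g. a consortium-wide "Bookplates" type) can own many locally scoped tags.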
+ + +== Administration == + +New Permissions: + +* ADMIN_COPY_TAG_TYPES: required to create a new tag type under *Server Administration->Item Tag Types* +* ADMIN_COPY_TAG: required to create a new tag under *Local Administration->Item Tags* + +NOTE: The existing permission UPDATE_COPY is required to assign a tag to an item + + +New Library Settings: + +* OPAC: Enable Digital Bookplate Search: when set to _True_ for a given org unit, the digital bookplate search option will be available in the catalog. + + +== Creating Item Tags == +There are two components to this feature: Item Tag Types and Item Tags. + +Item Tag Types are used to define the type of tag, such as “Bookplates” or “Local History Notes”, as well as the organizational unit scope for use of the tag type. + +Item Tags are associated with an Item Tag Type and are used to configure the list of tags that can be applied to copies, such as a list of memorial or donation labels that are applicable to a particular organizational unit. + +=== Create Item Tag Types === + +. Go to *Administration->Server Administration->Item Tag Types*. +. In the upper left hand corner, click *New Record*. A dialog box will appear. Assign the following to create a new Item Tag Type: +.. *Code*: a code to identify the item tag type. +.. *Label*: a label that will appear in drop down menus to identify the item tag type. +.. *Owner*: the organizational unit that can see and use the item tag type. +. Click *Save* and the new Item Tag Type will appear in the list. Next create the associated Item Tags. + +image::media/copytags1.PNG[Create Item Tag Types] + +image::media/copytags2.PNG[Item Tag Types Grid View] + +=== Create Item Tags === + +. Go to *Administration->Local Administration->Item Tags*. +. In the upper left hand corner, click *New Record*. A dialog box will appear. Assign the following to create a new Item Tag: +.. *Item Tag Type*: select the Item Tag Type with which you want to associate the new Item Tag. +..
*Label*: assign a label to the new item tag. +.. *Value*: assign a value to the new item tag. This will display in the catalog. +.. *Staff Note*: a note may be added to guide staff on when to apply the item tag. +.. *Is OPAC Visible?*: If an item tag is OPAC Visible, it can be searched for and viewed in the OPAC and the staff catalog. If an item tag is not OPAC Visible, it can only be searched for and viewed in the staff catalog. +.. *Owner*: select the organization unit at which this tag can be seen and used. +. Click *Save* and the new Item Tag will appear in the list. + +image::media/copytags3.PNG[Create Item Tags] + +image::media/copytags4.PNG[Item Tags Grid View] + + +== Managing Item Tags == + +=== Editing Tags === + +Existing item tags can be edited by selecting a tag and clicking *Actions->Edit Record* or right-clicking on a tag and selecting *Edit Record*. The dialog box will appear and you can modify the item tag. Click *Save* to save any changes. Changes will be propagated to any items that the tag has been attached to. + +=== Deleting Tags === + +Existing item tags can be deleted by selecting a tag and clicking *Actions->Delete Record* or right-clicking on a tag and selecting *Delete Record*. Deleting a tag will delete the tag from any items it was attached to in the catalog. + diff --git a/docs/modules/admin/pages/desk_payments.adoc b/docs/modules/admin/pages/desk_payments.adoc new file mode 100644 index 0000000000..25b861af2d --- /dev/null +++ b/docs/modules/admin/pages/desk_payments.adoc @@ -0,0 +1,37 @@ += Cash Reports = +:toc: + +Cash reports are useful for quickly getting information about money that +your library has collected from patrons. This can be helpful in a few +different scenarios, such as: + +. Reconciling a cash drawer at the end of the day. +. Seeing how popular a specific payment type is (perhaps when evaluating +a food-for-fines program). + +To use the cash reports, + +. Under the _Administration_ menu, choose _Local Administration_.
+. Click _Cash reports_. +. Select the time period and library you are interested in. This +interface defaults to showing payments accepted during the current day. +. Click _Submit_. + +[TIP] +==== +You can click on the names of columns to sort the reports. +==== + +[TIP] +==== +You need the _VIEW_TRANSACTION_ permission to view these reports. +==== + +[NOTE] +==== +These payments are divided into two different types: _Desk payments_ -- +in which a staff member simply accepted a credit card, check, or cash +payment -- and _User payments_ -- in which a staff member had to make a +specific decision about whether to accept a payment of goods or work; or +forgave or granted credit to a particular patron. +==== diff --git a/docs/modules/admin/pages/ebook_api.adoc b/docs/modules/admin/pages/ebook_api.adoc new file mode 100644 index 0000000000..adf79e8cf6 --- /dev/null +++ b/docs/modules/admin/pages/ebook_api.adoc @@ -0,0 +1,123 @@ +== Ebook API integration == + +Evergreen supports integration with third-party APIs provided by OverDrive and +OneClickdigital. + +When ebook API integration is enabled, the following features are supported: + + * Bibliographic records from these vendors that appear in your +public catalog will include vendor holdings and availability information. + * Patrons can check out and place holds on OverDrive and OneClickdigital ebook +titles from within the public catalog. + * When a user is logged in, the public catalog dashboard and My Account +interface will include information about that user's checkouts and holds for +supported vendors. + +WARNING: The ability to check out and place holds on ebook titles is an experimental +feature in 3.0. It is not recommended for production use without careful +testing. + +For API integration to work, you need to request API access from the +vendor and configure your Evergreen system according to the instructions +below. You also need to configure the new `open-ils.ebook_api` service. 
+ +This feature assumes that you are importing MARC records supplied by the +vendor into your Evergreen system, using Vandelay or some other MARC +import method. This feature does not search the vendor's online +collections or automatically import vendor records into your system; it +merely augments records that are already in Evergreen. + +A future Evergreen release will add the ability for users to check out +titles, place holds, etc., directly via the public catalog. + +=== Ebook API service configuration === +This feature uses the new `open-ils.ebook_api` OpenSRF service. This +service must be configured in your `opensrf.xml` and `opensrf_core.xml` +config files for ebook API integration to work. See +`opensrf.xml.example` and `opensrf_core.xml.example` for guidance. + +=== OverDrive API integration === +Before enabling OverDrive API integration, you will need to request API +access from OverDrive. OverDrive will provide the values to be used for +the following new org unit settings: + + * *OverDrive Basic Token*: The basic token used for API client + authentication. To generate your basic token, combine your client + key and client secret provided by OverDrive into a single string + ("key:secret"), and then base64-encode that string. On Linux, you + can use the following command: `echo -n "key:secret" | base64 -` + * *OverDrive Account ID*: The account ID (a.k.a. library ID) for your + OverDrive API account. + * *OverDrive Website ID*: The website ID for your OverDrive API + account. + * *OverDrive Authorization Name*: The authorization name (a.k.a. + library name) designated by OverDrive for your library. If your + OverDrive subscription includes multiple Evergreen libraries, you + will need to add a separate value for this setting for each + participating library. + * *OverDrive Password Required*: If your library's OverDrive + subscription requires the patron's PIN (password) to be provided + during patron authentication, set this setting to "true." 
If you do + not require the patron's PIN for OverDrive authentication, set this + setting to "false." (If set to "true," the password entered by a + patron when logging into the public catalog will be cached in plain text in + memcached.) + * *OverDrive Discovery API Base URI* and *OverDrive Circulation API + Base URI*: By default, Evergreen uses OverDrive's production API, so + you should not need to set a value for these settings. If you want + to use OverDrive's integration environment, you will need to add the + appropriate base URIs for the discovery and circulation APIs. See + OverDrive's developer documentation for details. + * *OverDrive Granted Authorization Redirect URI*: Evergreen does not + currently support granted authorization with OverDrive, so this + setting is not currently in use. + +For more information, consult the +https://developer.overdrive.com/docs/getting-started[OverDrive API +documentation]. + +To enable OverDrive API integration, adjust the following public catalog settings +in `config.tt2`: + + * `ebook_api.enabled`: set to "true". + * `ebook_api.overdrive.enabled`: set to "true". + * `ebook_api.overdrive.base_uris`: list of regular expressions + matching OverDrive URLs found in the 856$9 field of older OverDrive + MARC records. As of fall 2016, OverDrive's URL format has changed, + and the record identifier is now found in the 037$a field of their + MARC records, with "OverDrive" in 037$b. Evergreen will check the + 037 field for OverDrive record identifiers; if your system includes + older-style OverDrive records with the record identifier embedded in + the 856 URL, you need to specify URL patterns with this setting. + +=== OneClickdigital API integration === +Before enabling OneClickdigital API integration, you will need to +request API access from OneClickdigital. 
OneClickdigital will provide +the values to be used for the following new org unit settings: + + * *OneClickdigital Library ID*: The identifier assigned to your + library by OneClickdigital. + * *OneClickdigital Basic Token*: Your client authentication token, + supplied by OneClickdigital when you request access to their API. + +For more information, consult the +http://developer.oneclickdigital.us/[OneClickdigital API documentation]. + +To enable OneClickdigital API integration, adjust the following public catalog +settings in `config.tt2`: + + * `ebook_api.enabled`: set to "true". + * `ebook_api.oneclickdigital.enabled`: set to "true". + * `ebook_api.oneclickdigital.base_uris`: list of regular expressions + matching OneClickdigital URLs found in the 859$9 field of your MARC + records. Evergreen uses the patterns specified here to extract + record identifiers for OneClickdigital titles. + +=== Additional configuration === +Evergreen communicates with third-party vendor APIs using the new +`OpenILS::Utils::HTTPClient` module. This module is configured using +settings in `opensrf.xml`. The default settings should work for most +environments by default, but you may need to specify a custom location +for the CA certificates installed on your server. You can also disable +SSL certificate verification on HTTPClient requests altogether, but +doing so is emphatically discouraged. diff --git a/docs/modules/admin/pages/ebook_api_service.adoc b/docs/modules/admin/pages/ebook_api_service.adoc new file mode 100644 index 0000000000..6b5546f613 --- /dev/null +++ b/docs/modules/admin/pages/ebook_api_service.adoc @@ -0,0 +1,11 @@ += ebook_api service = + +The `open-ils.ebook_api` service looks up title and +patron information from specified ebook vendor APIs. + +The Evergreen catalog accesses data from this service +through OpenSRF JS bindings. + +The `OpenILS::Utils::HTTPClient` module is required +for this service. 
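As an aside on the OverDrive *Basic Token* setting described earlier: the same base64 encoding produced by the `echo -n "key:secret" | base64 -` one-liner can be generated in Python, for example from a configuration script. A small sketch (the function name is ours, not an Evergreen or OverDrive API):

```python
import base64

def overdrive_basic_token(client_key: str, client_secret: str) -> str:
    """Base64-encode "key:secret", as OverDrive's basic auth expects.
    Equivalent to: echo -n "key:secret" | base64 -"""
    raw = f"{client_key}:{client_secret}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

# With the placeholder credentials from the documentation example:
print(overdrive_basic_token("key", "secret"))  # a2V5OnNlY3JldA==
```

Either method yields the same token; use whichever fits your tooling.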
+ diff --git a/docs/modules/admin/pages/emergency_closing_handler.adoc b/docs/modules/admin/pages/emergency_closing_handler.adoc new file mode 100644 index 0000000000..7901f1eea2 --- /dev/null +++ b/docs/modules/admin/pages/emergency_closing_handler.adoc @@ -0,0 +1,82 @@ += Emergency Closing Handler = +:toc: + +== Introduction == + +The *Closed Dates Editor* now includes an Emergency Closing feature that allows libraries to shift due dates and expiry dates to the next open day. Overdue fines will be automatically voided for the day(s) the library is marked closed. Once an Emergency Closing is processed, it is permanent and cannot be rolled back. + +== Administration == + +=== Permissions === + +To create an Emergency Closing, the EMERGENCY_CLOSING permission needs to be granted to the user for all locations to be affected by an emergency closing. + +== Create an emergency closing == + +The Emergency Closing feature is located within the *Closed Dates Editor* screen, which can be accessed via *Administration -> Local Administration -> Closed Dates Editor*. + +Within the closed dates editor screen, scheduled closed dates are listed and can be scoped by specific org unit and date. The date filter in the upper right-hand corner will show upcoming library closings on or after the selected date in the filter. + +image::media/ECHClosedDatesEditorAddClosing.png[Add Closing] + +Select *Add closing* to begin the emergency closing process. A pop-up will appear with fields to fill out. + +image::media/ECHLibraryClosingConstruction.png[Create Closing for One Full Day] + +*Library* - Using the dropdown window, select the org unit which will be closing. + +*Apply to all of my libraries* - When selected, this checkbox will apply the emergency closing date to the selected org unit and any associated child org unit(s). 
+ +*Closing Type* - The following Closing Type options are available in a drop down window: +* One full day +* Multiple days +* Detailed closing + +The _Multiple days_ and _Detailed closing_ options will display different date options (e.g. start and end dates) in the next field if selected. + +image::media/ECHLibraryClosingMultipleDays.png[Create Closing for Multiple Days] + +image::media/ECHLibraryClosingDetailed.png[Create Detailed Closing] + +*Date* - Select which day or days the library will be closed. + +[NOTE] +======================== +*NOTE* The Closed Dates editor is now date-aware. If a selected closed date is either in the past, or nearer in time than the end of the longest configured circulation period, staff will see a notification that says "Possible Emergency Closing" in both the dialog box and in the bottom right-hand corner. +======================== + +*Reason* - Label the reason for library closing accordingly, e.g. 3/15 Snow Day + +=== Emergency Closing Handler === + +When a date is chosen that is nearer in time than the end of the longest configured circulation period or in the past, then a *Possible Emergency Closing* message will appear in the pop-up and in the bottom right-hand corner of the screen. Below the Possible Emergency Closing message, two checkboxes appear: *Emergency* and *Process Immediately*. + +[NOTE] +========================= +*NOTE* The *Emergency* checkbox must still be manually selected in order to actually set the closing as an Emergency Closing. 
+========================= + +By selecting the *Emergency* checkbox, the system will void any overdue fines incurred for that emergency closed day or days and push back any of the following dates to the next open day as determined by the library’s settings: +* item due dates +* shelf expire times +* booking start times + +image::media/ECHClosingSnowDay.png[Create Emergency Closing] + +When selecting the *Process Immediately* checkbox, Evergreen will enact the Emergency Closing changes immediately once the Emergency Closed Date information is saved. If Process Immediately is not selected at the time of creation, staff will need to go back and edit the closing later, or the Emergency processing will not occur. + +Upon clicking *OK*, a progress bar will appear on-screen. After completion, the Closed Dates Editor screen will update, and under the Emergency Closing Processing Summary column, the number of affected/processed Circulations, Holds, and Reservations will be listed. + +image::media/ECHLibraryClosingDone.png[Emergency Closing Processing Complete] + +=== Editing Closing to process Emergency Closing === + +If *Process immediately* is not selected during an Emergency Closing event creation, staff will need to edit the existing Emergency Closing event and process the affected items. + +In the Closed Dates Editor screen, select the existing Emergency Closing event listed. Then, go to *Actions -> Edit closing*. + +image::media/ECHEditClosing.png[Edit Closing] + +A pop-up display will appear with the same format as creating a Closed Dates event with the Emergency checkbox checked and the Process Immediately un-checked at the bottom. Select the *Process immediately* checkbox, and then *OK*. A progress bar will appear on-screen, the Emergency Closing processing will take occur, and the Closed Dates Editor display will update. 
+ +image::media/ECHEditClosingModal.png[Edit Closing Pop-Up] diff --git a/docs/modules/admin/pages/floating_groups.adoc b/docs/modules/admin/pages/floating_groups.adoc new file mode 100644 index 0000000000..6072fb7d4c --- /dev/null +++ b/docs/modules/admin/pages/floating_groups.adoc @@ -0,0 +1,120 @@ += Floating Groups = +:toc: + +Before floating groups, items either floated or they did not; if they floated, they floated everywhere, with no restrictions. + +With floating groups, where an item will float is defined by the group to which it has been assigned. + +== Floating Groups == + +Each floating group has a name and a manual flag, plus zero or more group members. The name is used solely for selection and display purposes. + +The manual flag dictates whether or not the "Manual Floating Active" checkin modifier needs to be active for an item to float. This allows for greater control over when items float. It also prevents automated checkins via SIP2 from triggering floats. + +=== Floating Group Members === + +Each member of a floating group references an org unit and has a stop depth, an optional max depth, and an exclude flag. + +=== Org Unit === + +The org unit and all of its descendants are included, unless max depth is set, in which case the tree is cut off at the max depth. + +=== Stop Depth === + +The stop depth is the highest point in the org tree, between the item's current circulation library and the checkin library, that will be traversed. If the item would have to travel higher than the stop depth in the tree, the member rule in question is ignored. + +=== Max Depth === + +As with the org unit, the max depth is the furthest point down the tree from the org unit that is included. The depth is measured against the entire tree, not relative to the org unit, so in the default tree a max depth of 1 will stop at the system level whether the org unit is set to CONS or SYS1. + +=== Exclude === + +The exclude flag, if set, prevents floating for the member.
Excludes always take priority, so you can remove an org unit from floating without having to worry about other rules overriding it. + +== Examples == + +=== Float Everywhere === + +This is a default floating rule to emulate the previous floating behavior for new installs and upgrades. + +One member: + +* Org Unit: CONS +* Stop Depth: 0 +* Max Depth: Unset +* Exclude: Off + +=== Float Within System === + +This would permit an item to float anywhere within a system, but would return to the system if it was returned elsewhere. + +One member: + +* Org Unit: CONS +* Stop Depth: 1 +* Max Depth: Unset +* Exclude: Off + +=== Float To All Branches === + +This would permit an item to float to any branch, but not to sublibraries or bookmobiles. + +One member: + +* Org Unit: CONS +* Stop Depth: 0 +* Max Depth: 2 +* Exclude: Off + +=== Float To All Branches Within System === + +This would permit an item to float to any branch in a system, but not to sublibraries or bookmobiles, and returning to the system if returned elsewhere. + +One member: + +* Org Unit: CONS +* Stop Depth: 1 +* Max Depth: 2 +* Exclude: Off + +=== Float Between BR1 and BR3 === + +This would permit an item to float between BR1 and BR3 specifically, excluding sublibraries and bookmobiles. + +It would consist of two members, identical other than the org unit: + +* Org Unit: BR1 / BR3 +* Stop Depth: 0 +* Max Depth: 2 +* Exclude: Off + +=== Float Everywhere Except BM1 === + +This would allow an item to float anywhere except for BM1. It accomplishes this with two members. + +The first includes all org units, just like Float Everywhere: + +* Org Unit: CONS +* Stop Depth: 0 +* Max Depth: Unset +* Exclude: Off + +The second excludes BM1: + +* Org Unit: BM1 +* Stop Depth: 0 +* Max Depth: Unset +* Exclude: On + +That works because excludes are applied first. + +=== Float into, but not out of, BR2 === + +This would allow an item to float into BR2, but once there it would never leave. 
There are few obvious use cases for allowing items to float to, but not from, a single library, but the configuration is possible. It takes advantage of the fact that the rules specify where an item can float *to* but, apart from the stop depth, do not care where the item is floating *from*. + +One member: + +* Org Unit: BR2 +* Stop Depth: 0 +* Max Depth: Unset +* Exclude: Off diff --git a/docs/modules/admin/pages/hold_driven_recalls.adoc b/docs/modules/admin/pages/hold_driven_recalls.adoc new file mode 100644 index 0000000000..7de6254d92 --- /dev/null +++ b/docs/modules/admin/pages/hold_driven_recalls.adoc @@ -0,0 +1,50 @@ += Hold-driven recalls = +:toc: + +indexterm:[hold-driven recalls] +indexterm:[circulation, recalls, hold-driven] + +In academic libraries, it is common for groups like faculty and graduate +students to have extended loan periods (for example, 120 days), while +others have more common loan periods such as 3 weeks. In these environments, +it is desirable for a hold placed on an item that has been loaned out +for an extended period to trigger a 'recall', which: + + . Truncates the loan period + . Sets the remaining available renewals to 0 + . 'Optionally': Changes the fines associated with overdues for the new due + date + . 'Optionally': Notifies the current patron of the recall, including the + new due date and fine level + +== Enabling hold-driven recalls == + +By default, holds do not trigger recalls. To enable hold-driven recalls +of circulating items, library settings must be changed as follows: + + . Click *Administration* -> *Local Administration* -> *Library Settings Editor.* + . Set the *Recalls: Circulation duration that triggers a recall + (recall threshold)* setting. The recall threshold is specified as an + interval (for example, "21 days"); any items with a loan duration of + less than this interval are not considered for a recall. + . Set the *Recalls: Truncated loan period (return interval)* setting. + The return interval is specified as an interval (for example, "7 days").
+ The due date on the recalled item is changed to be the greater of either + the recall threshold or the return interval. + . 'Optionally': Set the *Recalls: An array of fine amount, fine interval, + and maximum fine* setting. If set, this applies the specified fine rules + to the current circulation period for the recalled item. + +When a hold is placed and no available items are found by the hold targeter, +the recall logic checks whether the recall threshold and return interval +settings are set; if so, the hold targeter checks whether any currently +circulating items at the designated pickup library have a loan duration +longer than the recall threshold. If so, the eligible item with the due +date nearest to the current date is recalled. + +== Editing the item recall notification email template == + +The template for the item recall notification email is contained in the +'Item Recall Email Notice' template, found under *Administration* -> *Local +Administration* -> *Notifications / Action Triggers*. diff --git a/docs/modules/admin/pages/hold_targeter_service.adoc b/docs/modules/admin/pages/hold_targeter_service.adoc new file mode 100644 index 0000000000..783375401f --- /dev/null +++ b/docs/modules/admin/pages/hold_targeter_service.adoc @@ -0,0 +1,4 @@ += hold-targeter service = + +The `open-ils.hold-targeter` service is used to target holds. + diff --git a/docs/modules/admin/pages/hours.adoc b/docs/modules/admin/pages/hours.adoc new file mode 100644 index 0000000000..5aefbf27bc --- /dev/null +++ b/docs/modules/admin/pages/hours.adoc @@ -0,0 +1,9 @@ +=== Setting regular library hours === + +You may do this in _Administration_ > _Server Administration_ > _Organizational +Units_. + +The *Hours of Operation* tab is where you enter regular, weekly hours. Holiday +and other closures are set in the *Closed Dates Editor*. Hours of operation and +closed dates impact due dates and fine accrual.
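The effect of closed dates on due dates can be illustrated with a short sketch. This is an illustrative model only, not Evergreen's implementation: Evergreen computes due-date adjustments server-side from the org unit's hours of operation and Closed Dates Editor entries, and the function name here is hypothetical.

```python
from datetime import date, timedelta

def adjust_due_date(due: date, closed_dates: set[date]) -> date:
    """Roll a nominal due date forward to the next open day.

    Illustrative model only: Evergreen applies this kind of adjustment
    internally using the org unit's weekly hours of operation and the
    entries in the Closed Dates Editor.
    """
    while due in closed_dates:
        due += timedelta(days=1)
    return due

# e.g. a two-day closing: a due date falling on the 15th moves to the 17th
closed = {date(2020, 3, 15), date(2020, 3, 16)}
print(adjust_due_date(date(2020, 3, 15), closed))  # 2020-03-17
```

The same forward-rolling idea underlies the Emergency Closing handler described earlier, which pushes due dates, shelf expire times, and booking start times to the next open day.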
+ + diff --git a/docs/modules/admin/pages/infrastructure_auth_browse.adoc b/docs/modules/admin/pages/infrastructure_auth_browse.adoc new file mode 100644 index 0000000000..b89eed9b92 --- /dev/null +++ b/docs/modules/admin/pages/infrastructure_auth_browse.adoc @@ -0,0 +1,37 @@ += Infrastructure Changes to Authority Browse = +:toc: + +As part of a larger development and consulting project to improve how authority records are used in public catalog browse, improvements have been made to how authority records are indexed in Evergreen. This will not result in any direct changes to the public catalog, but will create infrastructure for improvements to the browse list. Specifically, a configuration table will be used to specify how browse entries from authority records should be generated. This new table will supplement the existing authority control set configuration tables but will not replace them. + +== Backend functionality == + +The new configuration table, authority.heading_field, specifies how headings can be extracted from MARC21 authority records. The general mechanism is similar to how config.metabib_field specifies how bibliographic records should be indexed: the XML representation of the MARC21 authority record is first passed through a stylesheet specified by the authority.heading_field definition, then XPath expressions are used to extract the heading for generating browse entries for the authority.simple_heading and metabib.browse_entry tables. + +The initial set of definitions supplied for authority.heading_field use the MARCXML to MADS 2.1 stylesheet; this helps ensure that heading strings extracted from authority records will match headings extracted from bibliographic records using the MODS stylesheet. + +== Staff User Interface == + +An interface for configuring authority headings is available in Server Administration in the web-based staff client, under the name "Authority Headings Fields".
+ +When navigated to, the interface looks like this: + +image::media/auth_browse_infra1.png[] + +Individual heading field definitions can be edited like this: + +image::media/auth_browse_infra2.png[] + +The available fields are: + +* Heading type: this can be personal_name, corporate_name, meeting_name, uniform_title, named_event, chronology_term, topical_term, geographic_name, genre_form_term, or medium_of_performance_term. +* Heading purpose: this can be main, related, or variant, corresponding to authority record 1XX, 5XX, or 4XX fields respectively. +* Heading field label: Label for use by administrators. +* Heading XSLT Format: The XSLT format used to transform the authority record before the XPath expressions are applied; the initial definitions use the MARCXML to MADS 2.1 stylesheet. +* Heading XPath: Main XPath expression for selecting a part of the authority record to extract a heading from. +* Heading Component XPath: XPath expression for selecting parts of a heading string from the elements selected by Heading XPath. +* Related/Variant Type XPath: Expression used, for variant and related headings, for identifying the specific purpose of the heading (e.g., broader term, narrower term, etc.). +* Thesaurus XPath: Expression used for extracting the thesaurus that controls the heading. +* Thesaurus Override XPath: Expression used for identifying the thesaurus that controls a related heading. +* Joiner string: String used to stitch together components of the heading into a single display string. If not set, " -- " is used. + +Unless you have non-standard authority records, it is recommended that changes to the authority heading field definitions be minimized.
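As a small illustration of the joiner behavior described above, heading components selected by the Heading Component XPath are stitched into one display string. This Python sketch is illustrative only; the helper name is hypothetical, and the real extraction is performed by Evergreen's indexing code driven by the authority.heading_field definitions.

```python
def assemble_heading(components, joiner=None):
    """Stitch heading components into a single display string.

    Illustrative sketch only: models the "Joiner string" behavior
    described above, where " -- " is used when no joiner is set.
    """
    return (joiner or " -- ").join(c.strip() for c in components)

print(assemble_heading(["United States", "History", "Civil War, 1861-1865"]))
# United States -- History -- Civil War, 1861-1865
```

Supplying a custom joiner in the configuration would simply substitute that string for the default " -- " separator.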
diff --git a/docs/modules/admin/pages/librarysettings.adoc b/docs/modules/admin/pages/librarysettings.adoc new file mode 100644 index 0000000000..8a1bb2e5d6 --- /dev/null +++ b/docs/modules/admin/pages/librarysettings.adoc @@ -0,0 +1,512 @@ += Library Settings Editor = +:toc: + +== Introduction == +(((Library Settings Editor))) + +With the *Library Settings Editor* you can optionally customize +Evergreen's behavior for a particular library or library system. For +descriptions of the available settings, see the xref:#settings_overview[Settings Overview] table below. + +== Editing Library Settings == + +1. To open the *Library Settings Editor* select *Admin* -> *Local +Administration* -> *Library Settings Editor*. +2. Settings that affect the same function or module are grouped +together. You may browse the list or search for the entry you want to +edit. Type your search term in the filter box. You may clear or +re-apply the filter by clicking *Clear Filter* or *Filter*. ++ +image::media/lse-1.png[Filtering the Library Settings Editor List] ++ +3. To edit an entry click *Edit* in its line. +4. Read the instructions in the pop-up window and make your change. Click +*Update Setting* to save the change, or click *Delete Setting* if you wish +to delete it. ++ +image::media/lse-2.png[Editing a Library Setting] ++ +5. Click *History* to view the previous values, if any, of a setting. +You can revert to an old value by clicking *revert*. ++ +image::media/lse-3.png[Library Setting History] + +NOTE: Different settings may require different data +formats, which are listed in the xref:#settings_overview[Settings Overview] table. Refer to the xref:#data_types[Data Types] table at the +bottom of this page for more information. + +== Exporting/Importing Library Settings == +((("Exporting", "Library Settings Editor"))) +((("Importing", "Library Settings Editor"))) + +1.
To export library settings, click the *Export* button on the +*Library Settings Editor* screen. Click *Copy* in the pop-up window. +The settings displayed on the screen are copied to the clipboard. +Paste the contents into a text editor, such as Notepad, and save the file on +your computer. ++ +image::media/lse-4.png[Exporting Library Settings] ++ +2. To import library settings, click the *Import* button on the *Library +Settings Editor* screen. Open your previously saved file and copy the +contents. Click *Paste* in the pop-up window. Click *Submit*. ++ +image::media/lse-5.png[Importing Library Settings] + +[#settings_overview] +== Settings Overview == + +The settings are grouped in separate tables based on the functions +and modules they affect. They appear in the same +sequence as in the staff client. Each table describes the +available settings in the group and shows which can be changed on a +per-library basis. At the bottom is a table listing the + xref:#data_types[data types] with details about acceptable setting +values. + +((("Acquisitions", "Library Settings Editor"))) + +[[lse-acq]] +.Acquisitions +[options="header"] +|======== +|Setting|Description|Data type|Notes +|Allow funds to be rolled over without bringing money along|Allow funds to be rolled over without bringing the money along.
This makes money left in the old fund disappear, modeling its return to some outside entity.|True/False| +|Allows patrons to create automatic holds from purchase requests.|Allows patrons to create automatic holds from purchase requests.|True/False| +|Default circulation modifier|Sets the default circulation modifier for use in acquisitions.|Text| +|Default copy location|Sets the default item location(shelving location) for use in acquisitions.|Selection list| +|Fund Spending Limit for Block|When the amount remaining in the fund, including spent money and encumbrances, goes below this percentage, attempts to spend from the fund will be blocked.|Number| +|Fund Spending Limit for Warning|When the amount remaining in the fund, including spent money and encumbrances, goes below this percentage, attempts to spend from the fund will result in a warning to the staff.|Number| +|Rollover Distribution Formulae Funds|During fiscal rollover, update distribution formulae to use new funds|True/False| +|Set copy creator as receiver|When receiving an item in acquisitions, set the item "creator" to be the staff that received the item|True/False| +|Temporary barcode prefix|Temporary barcode prefix added to temporary item records.|Text| +|Temporary call number prefix|Temporary call number prefix|Text| +|Upload Activate PO|Activate the purchase order by default during ACQ file upload|True/False| +|Upload Create PO|Create a purchase order by default during ACQ file upload|True/False| +|Upload Default Insufficient Quality Fall-Thru Profile|Default low-quality fall through profile used during ACQ file upload|Selection List|Match Only Merge and Full Overlay are the selections. +|Upload Default Match Set|Default match set to use during ACQ file upload|Selection List|Can be set to authority test or biblio +|Upload Default Merge Profile|Default merge profile to use during ACQ file upload|Selection List|Match Only Merge and Full Overlay are the selections. +|Upload Default Min. 
Quality Ratio|Default minimum quality ratio used during ACQ file upload|Number| +|Upload Default Provider|Default provider to use during ACQ file upload|Selection List|This list is populated by your Providers. +|Upload Import Non Matching by Default|Import non-matching records by default during ACQ file upload|True/False| +|Upload Load Items for Imported Records by Default|Load items for imported records by default during ACQ file upload|True/False| +|Upload Merge on Best Match by Default|Merge records on best match by default during ACQ file upload|True/False| +|Upload Merge on Exact Match by Default|Merge records on exact match by default during ACQ file upload|True/False| +|Upload Merge on Single Match by Default|Merge records on single match by default during ACQ file upload|True/False| +|======== + +((("Booking", "Library Settings Editor"))) +((("Cataloging", "Library Settings Editor"))) + +[[lse-cataloging]] +.Booking and Cataloging +[options="header"] +|====================== +|Setting|Description|Data type|Notes +|Allow email notify|Permit email notification when a reservation is ready for pick-up.|True/false| +|Elbow room|Elbow room specifies how far in the future you must make a reservation on an item if that item will have to transit to reach its pick-up location. It secondarily defines how soon a reservation on a given item must start before the check-in process will opportunistically capture it for the reservation shelf.|Duration| +|Default Classification Scheme|Defines the default classification scheme for new call numbers: 1 = Generic; 2 = Dewey; 3 = LC|Number|It has effect on call number sorting. 
+|Default copy status (fast add)|Default status when an item is created using the "Fast Item Add" interface.|Selection list|Default: In process +|Default copy status (normal)|Default status when an item is created using the normal volume/copy creator interface.|Selection list| +|Defines the control number identifier used in 003 and 035 fields||Text| +|Delete bib if all items are deleted via Acquisitions line item cancellation.||True/False| +|Delete volume with last copy|Automatically delete a volume when the last linked item is deleted.|True/False|Default TRUE +|Maximum Parallel Z39.50 Batch Searches|The maximum number of Z39.50 searches that can be in-flight at any given time when performing batch Z39.50 searches|Number| +|Maximum Z39.50 Batch Search Results|The maximum number of search results to retrieve and queue for each record + Z39 source during batch Z39.50 searches|Number| +|Spine and pocket label font family|Set the preferred font family for spine and pocket labels. You can specify a list of fonts, separated by commas, in order of preference; the system will use the first font it finds with a matching name. For example, "Arial, Helvetica, serif".|Text| +|Spine and pocket label font size|Set the default font size for spine and pocket labels|Number| +|Spine and pocket label font weight|Set the preferred font weight for spine and pocket labels. You can specify "normal", "bold", "bolder", or "lighter".|Text| +|Spine label left margin|Set the left margin for spine labels in number of characters.|Number| +|Spine label line width|Set the default line width for spine labels in number of characters. 
This specifies the boundary at which lines must be wrapped.|Number| +|Spine label maximum lines|Set the default maximum number of lines for spine labels.|Number| +|====================== + +((("Circulation", "Library Settings Editor"))) + +[[lse-circulation]] +.Circulation +[options="header"] +|=========== +|Setting|Description|Data type|Notes +|Allow others to use patron account (privacy waiver)|Add a note to a user account indicating that specified people are allowed to place holds, pick up holds, check out items, or view borrowing history for that user account.|True/False| +|Auto-extend grace periods|When enabled, grace periods will auto-extend. By default this happens only when they are a full day or more and end on a closed date, though other options can alter this.|True/False| +|Auto-extending grace periods extend for all closed dates|Takes effect only when "Auto-extend grace periods" is set to TRUE. If enabled, when the grace period falls on a closed date(s), it will be extended past all intersecting closed dates, within the hard-coded limits (your library's grace period).|True/False| +|Auto-extending grace periods include trailing closed dates|Takes effect only when "Auto-extend grace periods" is set to TRUE. If enabled, grace periods will include closed dates that directly follow the last day of the grace period. A backdated check-in with an effective date on the closed date(s) will assume the item was returned after hours on the last day of the grace period.|True/False|Useful when a library's book drop is equipped with automated materials handling (AMH).
+|Block hold request if hold recipient privileges have expired||True/False| +|Cap max fine at item price|This prevents the system from charging more than the item price in overdue fines|True/False| +|Charge fines on overdue circulations when closed|When set to True, fines will be charged during scheduled closings and normal weekly closed days.|True/False| +|Checkout fills related hold|When a patron checks out an item and they have no holds that directly target the item, the system will attempt to find a hold for the patron that could be fulfilled by the checked out item and fulfills it. On the Staff Client you may notice that when a patron checks out an item under a title on which he/she has a hold, the hold will be treated as filled though the item has not been assigned to the patron's hold.|True/false| +|Checkout fills related hold on valid copy only|When filling related holds on checkout only match on items that are valid for opportunistic capture for the hold. Without this set a Title or Volume hold could match when the item is not holdable. With this set only holdable items will match.|True/False| +|Checkout auto renew age|When an item has been checked out for at least this amount of time, an attempt to check out the item to the patron that it is already checked out to will simply renew the circulation. 
If the checkout attempt is done within this time frame, Evergreen will prompt for choosing Renewing or Check-in then Checkout the item.|Duration| +|Display copy alert for in-house-use|Setting to true for an organization will cause an alert to appear with the copy's alert message, if it has one, when recording in-house-use for the copy.|True/False| +|Display copy location check in alert for in-house-use|Setting to true for an organization will cause an alert to display a message indicating that the item needs to be routed to its location if the location has check in alert set to true.|True/False| +|Do not change fines/fees on zero-balance LOST transaction|When an item has been marked lost and all fines/fees have been completely paid on the transaction, do not void or reinstate any fines/fees EVEN IF "Void lost item billing when returned" and/or "Void processing fee on lost item return" are enabled|True/False| +|Do not include outstanding Claims Returned circulations in lump sum tallies in Patron Display.|In the Patron Display interface, the number of total active circulations for a given patron is presented in the Summary sidebar and underneath the Items Out navigation button. This setting will prevent Claims Returned circulations from counting toward these tallies.|True/False| +|Hold shelf status delay|The purpose is to provide an interval of time after an item goes into the on-holds-shelf status before it appears to patrons that it is actually on the holds shelf. This gives staff time to process the item before it shows as ready-for-pick-up.|Duration| +|Include Lost circulations in lump sum tallies in Patron Display.|In the Patron Display interface, the number of total active circulations for a given patron is presented in the Summary sidebar and underneath the Items Out navigation button. 
This setting will include Lost circulations as counting toward these tallies.|True/False| +|Invalid patron address penalty|When set, if a patron address is set to invalid, a penalty is applied.|True/False| +|Item status for missing pieces|This is the Item Status to use for items that have been marked or scanned as having Missing Pieces. In the absence of this setting, the Damaged status is used.|Selection list| +|Load patron from Checkout|When scanning barcodes into Checkout auto-detect if a new patron barcode is scanned and auto-load the new patron.|True/False| +|Long-Overdue Check-In Interval Uses Last Activity Date|Use the long-overdue last-activity date instead of the due_date to determine whether the item has been checked out too long to perform long-overdue check-in processing. If set, the system will first check the last payment time, followed by the last billing time, followed by the due date. See also "Long-Overdue Max Return Interval"|True/False| +|Long-Overdue Items Usable on Checkin|Long-overdue items are usable on checkin instead of going "home" first|True/False| +|Long-Overdue Max Return Interval|Long-overdue check-in processing (voiding fees, re-instating overdues, etc.) will not take place for items that have been overdue for (or have last activity older than) this amount of time|Duration| +|Lost check-in generates new overdues|Enabling this setting causes retroactive creation of not-yet-existing overdue fines on lost item check-in, up to the point of check-in time (or max fines is reached). This is different than "restore overdue on lost", because it only creates new overdue fines. 
Use both settings together to get the full complement of overdue fines for a lost item|True/False| +|Lost items usable on checkin|Lost items are usable on checkin instead of going 'home' first|True/false| +|Max patron claims returned count|When this count is exceeded, a staff override is required to mark the item as claims returned.|Number| +|Maximum visible age of User Trigger Events in Staff Interfaces|If this is unset, staff can view User Trigger Events regardless of age. When this is set to an interval, it represents the age of the oldest possible User Trigger Event that can be viewed.|Duration| +|Minimum transit checkin interval|In-Transit items checked in this close to the transit start time will be prevented from checking in|Duration| +|Number of Retrievable Recent Patrons|Number of most recently accessed patrons that can be re-retrieved in the staff client. A value of 0 or less disables the feature. Defaults to 1.|Number| +|Patron merge address delete|Delete address(es) of subordinate user(s) in a patron merge.|True/False| +|Patron merge barcode delete|Delete barcode(s) of subordinate user(s) in a patron merge|True/False| +|Patron merge deactivate card|Mark barcode(s) of subordinate user(s) in a patron merge as inactive.|True/False| +|Patron Registration: Cloned patrons get address copy|If True, in the Patron editor, addresses are copied from the cloned user. If False, addresses are linked from the cloned user which can only be edited from the cloned user record.|True/False| +|Printing: custom JavaScript file|Full URL path to a JavaScript File to be loaded when printing. Should implement a print_custom function for DOM manipulation. 
Can change the value of the do_print variable to false to cancel printing.|Text| +|Require matching email address for password reset requests||True/False| +|Restore Overdues on Long-Overdue Item Return||True/False| +|Restore overdues on lost item return|If true when a lost item is checked in overdue fines are charged (up to the maximum fines amount)|True/False| +|Specify search depth for the duplicate patron check in the patron editor|When using the patron registration page, the duplicate patron check will use the configured depth to scope the search for duplicate patrons.|Number| +|Suppress hold transits group|To create a group of libraries to suppress Hold Transits among them. All libraries in the group should use the same unique value. Leave it empty if transits should not be suppressed.|Text| +|Suppress non-hold transits group|To create a group of libraries to suppress Non-Hold Transits among them. All libraries in the group should use the same unique value. Leave it empty if Non-Hold Transits should not be suppressed.|Text| +|Suppress popup-dialogs during check-in.|When set to True, no pop-up window for exceptions on check-in. But the accompanying sound will be played.|True/False| +|Target copies for a hold even if copy's circ lib is closed|If this setting is true at a given org unit or one of its ancestors, the hold targeter will target items from this org unit even if the org unit is closed (according to the Org Unit's closed dates.).|True/False|Set the value to True if you want to target items for holds at closed circulating libraries. Set the value to False, or leave it unset, if you do not want to enable this feature. 
+|Target copies for a hold even if copy's circ lib is closed IF the circ lib is the hold's pickup lib|If this setting is true at a given org unit or one of its ancestors, the hold targeter will target items from this org unit even if the org unit is closed (according to the Org Unit's closed dates) IF AND ONLY IF the item's circ lib is the same as the hold's pickup lib.|True/False|Set the value to True if you want to target items for holds at closed circulating libraries when the circulating library of the item and the pickup library of the hold are the same. Set the value to False, or leave it unset, if you do not want to enable this feature. +|Truncate fines to max fine amount||True/False|Default: TRUE +|Use Lost and Paid copy status|Use the Lost and Paid copy status when lost or long overdue billing is paid|True/False| +|Void Long-Overdue Item Billing When Returned||True/False| +|Void Processing Fee on Long-Overdue Item Return||True/False| +|Void longoverdue item billing when claims returned||True/False| +|Void longoverdue item processing fee when claims returned||True/False| +|Void lost item billing when claims returned||True/False| +|Void lost item billing when returned|If true, when a lost item is checked in, the item replacement bill (item price) is voided.|True/False| +|Void lost item processing fee when claims returned|When an item that was marked Lost is marked claims returned, the item processing fee will be voided.|True/False| +|Void lost max interval|Items that have been overdue this long will not have lost charges voided when returned, and the overdue fines will not be restored, either. Only applies if *Circ: Void lost item billing* or *Circ: Void processing fee on lost item* are true.|Duration| +|Void processing fee on lost item return|Void the processing fee when a lost item is returned|True/False| +|Warn when patron account is about to expire|If set, the staff client displays a warning this number of days before the expiry of a patron account.
Value is in number of days.|Duration| +|=========== + +((("Credit Card Processing", "Library Settings Editor"))) + +[[lse-credit-cards]] +.Credit Card Processing +[options="header"] +|====================== +|Setting|Description|Data type|Notes +|AuthorizeNet login|Authorize.Net Username|Text|Obtain from Authorize.Net at http://www.authorize.net +|AuthorizeNet password|Authorize.Net Password|Text|Obtain from Authorize.Net +|AuthorizeNet server|Required if using a developer/test account with Authorize.Net.|Text|Enter the server name from Authorize.Net. This is for use on test or developer account. If using live, leave blank. +|AuthorizeNet test mode|Places Authorize.Net transactions in Test Mode|True/False| +|Enable AuthorizeNet payments|This actually enables use of Authorize.Net|True/False| +|Enable PayPal payments|This will enable use of PayPal payments through the staff client.|True/False| +|Enable PayflowPro payments|This will enable the use of PayPal's Payflow Pro. This is not the same as PayPal.|True/False| +|Enable Stripe payments|This will enable the use of the stripe credit card processing.|True/False|https://stripe.com +|Name default credit processor|This might be "AuthorizeNet", "PayPal", "PayflowPro", or "Stripe".|Text|This sets the company that you will use to process the credit cards. +|PayPal login|Enter the PayPal login Username|Text|Obtain from PayPal +|PayPal password|Enter the PayPal password.|Text|Obtain from PayPal. +|PayPal signature|HASH Signature for PayPal|Text|Enter the HASH obtained from PayPal. +|PayPal test mode|Places the PayPal credit card payments in test mode.|True/False|This sends the transactions to PayPal's development.paypal.com server for testing only. +|PayflowPro login/merchant ID|Enter the PayflowPro Merchant ID|Text|Obtain from Payflow Pro Partner. +|PayflowPro partner|Enter the Partner ID from your Payflow Partner|Text|This will obtained from your Payflow Pro partner. This can be "PayPal" or "VeriSign", sometimes others. 
+|PayflowPro password|Password for PayflowPro|Text|Obtain from Payflow Pro Partner +|PayflowPro test mode|Place Payflow Pro in test mode.|True/False|Do not really process transactions, but stay in test mode - uses pilot-payflowpro.paypal.com instead of the usual host. +|PayflowPro vendor|Currently the same as the Payflow Pro login.|Text|Obtain from Payflow Pro partner. +|Stripe publishable key|Publishable API key from Stripe.|Text| +|Stripe secret key|Secret API key from Stripe.|Text| +|====================== + +((("Finances", "Library Settings Editor"))) + +[[lse-finances]] +.Finances +[options="header"] +|======== +|Setting|Description|Data type|Notes +|Allow credit card payments|If enabled, patrons will be able to pay fines accrued at this location via credit card.|True/False| +|Charge item price when marked damaged|If true, Evergreen bills the item price to the last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing.|True/False| +|Charge lost on zero|If set to True, the default item price will be charged when an item is marked lost even though the price in the item record is 0.00 (same as no price). If False, only the processing fee, if used, will be charged.|True/False| +|Charge processing fee for damaged items|Optional processing fee billed to the last patron who checked out the damaged item. Staff receive an alert with patron information and must confirm the billing.|Number (dollars)|Disabled when set to 0 +|Default item price|Replacement charge for lost items if price is unset in the *Copy Editor*. Does not apply if item price is set to $0|Number (dollars)| +|Disable Patron Credit|Do not allow patrons to accrue credit or pay fines/fees with accrued credit|True/False| +|Leave transaction open when long overdue balance equals zero|Leave transaction open when long-overdue balance equals zero.
This leaves the long-overdue copy on the patron record when it is paid|True/False| +|Leave transaction open when lost balance equals zero|Leave transaction open when lost balance equals zero. This leaves the lost item on the patron record when it is paid|True/False| +|Long-Overdue Materials Processing Fee|The amount charged in addition to item price when an item is marked Long-Overdue|Number|Currency +|Lost materials processing fee|The amount charged in addition to item price when an item is marked lost.| Number|Currency +|Maximum Item Price|When charging for lost items, limit the charge to this as a maximum.|Number|Currency +|Minimum Item Price|When charging for lost items, charge this amount as a minimum.|Number|Currency +|Negative Balance Interval (DEFAULT)|Amount of time after which no negative balances (refunds) are allowed on circulation bills. The "Prohibit negative balance on bills" setting must also be set to "true".|Duration| +|Negative Balance Interval for Lost|Amount of time after which no negative balances (refunds) are allowed on bills for lost/long overdue materials. The "Prohibit negative balance on bills for lost materials" setting must also be set to "true".|Duration| +|Negative Balance Interval for Overdues|Amount of time after which no negative balances (refunds) are allowed on bills for overdue materials. The "Prohibit negative balance on bills for overdue materials" setting must also be set to "true".|Duration| +|Prohibit negative balance on bills (Default)|Default setting to prevent negative balances (refunds) on circulation related bills. Set to "true" to prohibit negative balances at all times or, when used in conjunction with an interval setting, to prohibit negative balances after a set period of time.|True/False| +|Prohibit negative balance on bills for lost materials|Prevent negative balances (refunds) on bills for lost/long overdue materials. 
Set to "true" to prohibit negative balances at all times or, when used in conjunction with an interval setting, to prohibit negative balances after an interval of time.|True/False| +|Prohibit negative balance on bills for overdue materials|Prevent negative balances (refunds) on bills for overdue materials. Set to "true" to prohibit negative balances at all times or, when used in conjunction with an interval setting, to prohibit negative balances after an interval of time.|True/False| +|Void Overdue Fines When Items are Marked Long-Overdue|If true, overdue fines are voided when an item is marked Long-Overdue|True/False| +|Void overdue fines when items are marked lost|If true, overdue fines are voided when an item is marked lost|True/False| +|======== + +((("GUI", "Library Settings Editor"))) +((("Graphic User Interface", "Library Settings Editor"))) +((("Patron Registration Settings", "Library Settings Editor"))) + +[[lse-gui]] +.GUI: Graphic User Interface +[options="header",separator="!"] +!=========================== +!Setting!Description!Data type!Notes +!Alert on empty bib records!Alert staff when the last item for a record is being deleted.!True/False! +!Button bar!If TRUE, the staff client button bar appears by default on all workstations registered to your library; staff can override this setting at each login.!True/False! +!Cap results in Patron Search at this number.!The maximum number of results returned per search. If 100 is set up here, any search will return 100 records at most.!Number! +!Default Country for New Addresses in Patron Editor!This is the default Country for new addresses in the patron editor.!Text! +!Default hotkeyset!Default Hotkeyset for clients (filename without the .keyset). Examples: Default, Minimal, and None!Text!Individual workstations' default overrides this setting. +!Default ident type for patron registration!This is the default Ident Type for new users in the patron editor.!Selection list!
+!Default showing suggested patron registration fields!Instead of All fields, show just suggested fields in patron registration by default.!True/False! +!Disable the ability to save list column configurations locally.!GUI: Disable the ability to save list column configurations locally. If set, columns may still be manipulated; however, the changes do not persist. Also, existing local configurations are ignored if this setting is true.!True/False! +!Enable Experimental Angular Staff Catalog!Adds an entry to the Web client's search menu so that staff can experiment with the new Angular Staff Catalog.!True/False! +!Example for Day_phone field on patron registration!The example for validation on the Day_phone field in patron registration.!Text! +!Example for Email field on patron registration!The example for validation on the Email field in patron registration.!Text! +!Example for Evening-phone on patron registration!The example for validation on the Evening-phone field in patron registration.!Text! +!Example for Other-phone on patron registration!The example for validation on the Other-phone field in patron registration.!Text! +!Example for phone fields on patron registration!The example for validation on phone fields in patron registration. Applies to all phone fields without their own setting.!Text! +!Example for Postal Code field on patron registration!The example for validation on the Postal Code field in patron registration.!Text! +!Format Dates with this pattern.!Format Dates with this pattern (examples: "yyyy-MM-dd" for "2010-04-26", "MMM d, yyyy" for "Apr 26, 2010"). Formats are effective in the display (not editing) area.!Text! +!Format Times with this pattern.!Format Times with this pattern (examples: "h:m:s.SSS a z" for "2:07:20.666 PM Eastern Daylight Time", "HH:mm" for "14:07"). Formats are effective in the display (not editing) area.!Text!
+!GUI: Hide these fields within the Item Attribute Editor.!Sets which fields in the Item Attribute Editor to hide in the staff client.!Text!This is useful to hide attributes that are not used. +!Horizontal layout for Volume/Copy Creator/Editor.!The main entry point for this interface is in Holdings Maintenance, Actions for Selected Rows, Edit Item Attributes / Call Numbers / Replace Barcodes. This setting changes the top and bottom panes (if FALSE) for that interface into left and right panes (if TRUE).!True/False! +!Idle timeout!If you want staff client windows to be minimized after a certain amount of system idle time, set this to the number of seconds of idle time that you want to allow before minimizing (requires staff client restart).!Number! +!Items Out Claims Returned display setting!Value is a numeric code, describing which list the circulation should appear while checked out and whether the circulation should continue to appear in the bottom list, when checked in with outstanding fines. 1 = top list, bottom list. 2 = bottom list, bottom list. 5 = top list, do not display. 6 = bottom list, do not display.!Number! +!Items Out Long-Overdue display setting!Value is a numeric code, describing which list the circulation should appear while checked out and whether the circulation should continue to appear in the bottom list, when checked in with outstanding fines. 1 = top list, bottom list. 2 = bottom list, bottom list. 5 = top list, do not display. 6 = bottom list, do not display.!Number! +!Items Out Lost display setting!Value is a numeric code, describing which list the circulation should appear while checked out and whether the circulation should continue to appear in the bottom list, when checked in with outstanding fines. 1 = top list, bottom list. 2 = bottom list, bottom list. 5 = top list, do not display. 6 = bottom list, do not display.!Number! 
+!Max user activity entries to retrieve (staff client)!Sets the maximum number of recent user activity entries to retrieve for display in the staff client.!Number! +!Maximum previous checkouts displayed!The maximum number of previous circulations the staff client will display when investigating item details!Number! +!Patron circulation summary is horizontal!!True/False! +!Record in-house use: # of uses threshold for Are You Sure? dialog.!In the Record In-House Use interface, a submission attempt will warn if the # of uses field exceeds the value of this setting.!Number! +!Record In-House Use: Maximum # of uses allowed per entry.!The # of uses entry in the Record In-House Use interface may not exceed the value of this setting.!Number! +!Regex for barcodes on patron registration!The Regular Expression for validation on barcodes in patron registration.!Regular Expression! +!Regex for Day_phone field on patron registration!The Regular Expression for validation on the Day_phone field in patron registration. Note: The first capture group will be used for the "last 4 digits of phone number" as patron password feature, if enabled. Ex: "[2-9]\d{2}-\d{3}-(\d{4})( x\d+)?" will ignore the extension on a NANP number.!Regular expression! +!Regex for Email field on patron registration!The Regular Expression for validation on the Email field in patron registration.!Regular expression! +!Regex for Evening-phone on patron registration!The Regular Expression for validation on the Evening-phone field in patron registration.!Regular expression! +!Regex for Other-phone on patron registration!The Regular Expression for validation on the Other-phone field in patron registration.!Regular expression! +!Regex for phone fields on patron registration!The Regular Expression for validation on phone fields in patron registration.
Applies to all phone fields without their own setting.!Regular expression!`^(?:(?:\+?1\s*(?:[.-]\s*)?)?(?:\(\s*([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9])\s*\)|([2-9]1[02-9]|[2-9][02-8]1|[2-9][02-8][02-9]))\s*(?:[.-]\s*)?)?([2-9]1[02-9]|[2-9][02-9]1|[2-9][02-9]{2})\s*(?:[.-]\s*)?([0-9]{4})(?:\s*(?:#|x\.?|ext\.?|extension)\s*(\d+))?$` is a US phone number +!Regex for Postal Code field on patron registration!The Regular Expression for validation on the Postal Code field in patron registration.!Regular expression! +!Require at least one address for Patron Registration!Enforces a requirement for having at least one address for a patron during registration. If set to False, you need to delete the empty address before saving the record. If set to True, deletion is not allowed.!True/False! +!Require XXXXX field on patron registration!The XXXXX field will be required on the patron registration screen.!True/False!XXXXX can be Country, State, Day-phone, Evening-phone, Other-phone, DOB, Email, or Prefix. +!Require staff initials for entry/edit of patron standing penalties and messages.!Appends staff initials and edit date into patron standing penalties and messages.!True/False! +!Require staff initials for entry/edit of patron notes.!Appends staff initials and edit date into patron note content.!True/False! +!Require staff initials for entry/edit of copy notes.!Appends staff initials and edit date into copy note content.!True/False! +!Show billing tab first when bills are present!If true, accounts for patrons with bills will open to the billing tab instead of check out!True/False! +!Show XXXXX field on patron registration!The XXXXX field will be shown on the patron registration screen. Showing a field makes it appear with required fields even when not required. If the field is required, this setting is ignored.!True/False! +!Suggest XXXXX field on patron registration!The XXXXX field will be suggested on the patron registration screen.
Suggesting a field makes it appear when suggested fields are shown. If the field is shown or required this setting is ignored.!True/False! +!Juvenile account requires parent/guardian!When this setting is set to true, a value will be required in the patron editor when the juvenile flag is active.!True/False! +!Toggle off the patron summary sidebar after first view.!When true, the patron summary sidebar will collapse after a new patron sub-interface is selected.!True/False! +!URL for remote directory containing list column settings.!The format and naming convention for the files found in this directory match those in the local settings directory for a given workstation. An administrator could create the desired settings locally and then copy all the tree_columns_for_* files to the remote directory.!Text! +!Uncheck bills by default in the patron billing interface!Uncheck bills by default in the patron billing interface, and focus on the Uncheck All button instead of the Payment Received field.!True/False! +!Unified Volume/Item Creator/Editor!If True, combines the Volume/Copy Creator and Item Attribute Editor in some instances.!True/False! +!Work Log: maximum actions logged!Maximum entries for "Most Recent Staff Actions" section of the Work Log interface.!Number! +!Work Log: maximum patrons logged!Maximum entries for "Most Recently Affected Patrons..." section of the Work Log interface.!Number! +!=========================== + +((("Global", "Library Settings Editor"))) + +[[lse-global]] +.Global +[options="header"] +|====== +|Setting|Description|Data type|Notes +|Allow multiple username changes|If enabled (and Lock Usernames is not set) patrons will be allowed to change their username when it does not look like a barcode. Otherwise username changing in the OPAC will only be allowed when the patron's username looks like a barcode.|True/False|Default TRUE. 
+|Global default locale||Number| +|Lock Usernames|If enabled, username changing via the OPAC will be disabled.|True/False|Default FALSE +|Password format|Defines acceptable format for OPAC account passwords|Regular expression|Default requires that passwords "be at least 7 characters in length, contain at least one letter (a-z/A-Z), and contain at least one number." +|Patron barcode format|Defines acceptable format for patron barcodes|Regular expression| +|Patron username format|Regular expression defining the patron username format, used for patron registration and self-service username changing only|Regular expression| +|====== + +((("Holds", "Library Settings Editor"))) + +[[lse-holds]] +.Holds +[options="header"] +|===== +|Setting|Description|Data type|Notes +|Behind desk pickup supported|If a branch supports both a public holds shelf and behind-the-desk pickups, set this value to true. This gives the patron the option to enable behind-the-desk pickups for their holds by selecting the Hold is behind Circ Desk flag in the patron record.|True/False| +|Best-hold selection sort order|Defines the sort order of holds when selecting a hold to fill using a given copy at capture time|Selection list| +|Block renewal of items needed for holds|When an item could fulfill a hold, do not allow the current patron to renew|True/False| +|Cancelled holds display age|Show all cancelled holds that were cancelled within this amount of time|Duration| +|Cancelled holds display count|How many cancelled holds to show in patron holds interfaces|Number| +|Clear shelf copy status|Any copies that have not been put into reshelving, in-transit, or on-holds-shelf (for a new hold) during the clear shelf process will be put into this status.
This is basically a purgatory status for copies waiting to be pulled from the shelf and processed by hand|Selection list| +|Default estimated wait|When predicting the amount of time a patron will be waiting for a hold to be fulfilled, this is the default estimated length of time to assume an item will be checked out.|Duration| +|Default hold shelf expire interval|Hold Shelf Expiry Time is calculated and inserted into the hold record based on this interval when capturing a hold.|Duration| +|Expire alert interval|Time before a hold expires at which to send an email notifying the patron|Duration| +|Expire interval|Amount of time until an unfulfilled hold expires|Duration| +|FIFO|Force holds to a more strict First-In, First-Out capture. Default is SAVE-GAS, which gives priority to holds with pickup location the same as the checkin library.|True/False|Applies only to multi-branch libraries. Default is SAVE-GAS. +|Hard boundary||Number| +|Hard stalling interval||Duration| +|Has local copy alert|If there is an available item at the requesting library that could fulfill a hold during hold placement time, alert the patron.|True/False| +|Has local copy block|If there is an available item at the requesting library that could fulfill a hold during hold placement time, do not allow the hold to be placed.|True/False| +|Max foreign-circulation time|Time an item can spend circulating away from its circ lib before returning there to fill a hold|Duration|For multi-branch libraries. +|Maximum library target attempts|When this value is set and greater than 0, the system will only attempt to find an item at each possible branch the configured number of times|Number|For multi-branch libraries. +|Minimum estimated wait|When predicting the amount of time a patron will be waiting for a hold to be fulfilled, this is the minimum estimated length of time to assume an item will be checked out.|Duration| +|Org unit target weight|Org Units can be organized into hold target groups based on a weight.
Potential items from org units with the same weight are chosen at random.|Number| +|Reset request time on un-cancel|When a hold is uncancelled, reset the request time to push it to the end of the queue|True/False| +|Skip for hold targeting|When true, don't target any items at this org unit for holds|True/False| +|Soft boundary|Holds will not be filled by items outside this boundary if there are holdable items within it.|Number | +|Soft stalling interval|For this amount of time, holds will not be opportunistically captured at non-pickup branches.|Duration| +For multiple branch libraries +|Use Active Date for age protection|When calculating age protection rules use the Active date instead of the Creation Date.|True/False|Default TRUE +|Use weight-based hold targeting|Use library weight based hold targeting|True/False| +|===== + +((("Library", "Library Settings Editor"))) + +[[lse-library]] +.Library +[options="header"] +|======= +|Setting|Description|Data type|Notes +|Change reshelving status interval|Amount of time to wait before changing an item from “Reshelving” status to “available”|Duration| +The default is at midnight each night for items with "Reshelving" status for over 24 hours. +|Claim never checked out: mark copy as missing|When a circ is marked as claims-never-checked-out, mark the item as missing|True/False| +|Claim return copy status|Claims returned copies are put into this status. Default is to leave the copy in the Checked Out status|Selection list| +|Courier code|Courier Code for the library. Available in transit slip templates as the %courier_code% macro.|Text| +|Juvenile age threshold|Upper cut-off age for patrons to be considered juvenile, calculated from date of birth in patron accounts|Duration (years)| +|Library information URL (such as "http://example.com/about.html")|URL for information on this library, such as contact information, hours of operation, and directions. 
Use a complete URL, such as "http://example.com/hours.html".|Text| +|Mark item damaged voids overdues|When an item is marked damaged, overdue fines on the most recent circulation are voided.|True/False| +|Pre-cat item circ lib|Override the default circ lib of "here" with a pre-configured circ lib for pre-cat items. The value should be the "shortname" (aka policy name) of the org unit|Text| +|Telephony: Arbitrary line(s) to include in each notice callfile|This overrides lines from opensrf.xml. Line(s) must be valid for your target server and platform (e.g. Asterisk 1.4).|Text| +|======= + +((("OPAC", "Library Settings Editor"))) + +[[lse-opac]] +.OPAC +[options="header"] +|==== +|Setting|Description|Data type|Notes +|Allow Patron Self-Registration|Allow patrons to self-register, creating pending user accounts|True/False| +|Allow pending addresses|If true, patrons can edit their addresses in the OPAC. Changes must be approved by staff|True/False| +|Auto-Override Permitted Hold Blocks (Patrons)|This will allow patrons with the permission "HOLD_ITEM_CHECKED_OUT.override" to automatically override permitted holds.|True/False|When a patron places a hold in the OPAC that fails, and the patron has the permission to override the failed hold, this automatically overrides the failed hold rather than requiring the patron to manually override the hold. Default is False. +|Jump to details on 1 hit (OPAC)|When a search yields only 1 result, jump directly to the record details page. This setting only affects the public OPAC|True/False| +|Jump to details on 1 hit (staff client)|When a search yields only 1 result, jump directly to the record details page. This setting only affects the PAC within the staff client|True/False| +|OPAC: Number of staff client saved searches to display on left side of results and record details pages|If unset, the OPAC (only when wrapped in the staff client!)
will default to showing you your ten most recent searches on the left side of the results and record details pages. If you actually don't want to see this feature at all, set this value to zero at the top of your organizational tree.|Number| +|OPAC: Org Unit is not a hold pickup library|If set, this org unit will not be offered to the patron as an option for a hold pickup location. This setting has no effect on searching or hold targeting.|True/False| +|Org unit hiding depth|This will hide certain org units in the public OPAC if the Original Location (url param "ol") for the OPAC inherits this setting. This setting specifies an org unit depth that, together with the OPAC Original Location, determines which section of the Org Hierarchy should be visible in the OPAC. For example, a stock Evergreen installation will have a 3-tier hierarchy (Consortium/System/Branch), where System has a depth of 1 and Branch has a depth of 2. If this setting contains a depth of 1 in such an installation, then every library in the System in which the Original Location belongs will be visible, and everything else will be hidden. A depth of 0 will effectively make every org visible. The embedded OPAC in the staff client ignores this setting.|Number| +|Paging shortcut links for OPAC Browse|The characters in this string, in order, will be used as shortcut links for quick paging in the OPAC browse interface. Any sequence surrounded by asterisks will be taken as a whole label, not split into individual labels at the character level, but only the first character will serve as the basis of the search.|Text| +|Patron Self-Reg. Display Timeout|Number of seconds to wait before reloading the patron self-registration interface to clear sensitive data|Duration| +|Patron Self-Reg. Expire Interval|If set, this is the amount of time a pending user account will be allowed to sit in the database.
After this time, the pending user information will be purged|Duration| +|Payment history age limit|The OPAC should not display payments by patrons that are older than any interval defined here.|Duration| +|Tag Circulated Items in Results|When a user is both logged in and has opted in to circulation history tracking, turning on this setting will cause previously (or currently) circulated items to be highlighted in search results.|True/False|Default TRUE +|Use fully compressed serial holdings|Show fully compressed serial holdings for all libraries at and below the current context unit|True/False| +|Warn patrons when adding to a temporary book list|Present a warning dialogue when a patron adds a book to the temporary book list.|True/False| +|==== + +((("Offline", "Library Settings Editor"))) +((("Program", "Library Settings Editor"))) + +[[lse-offline]] +.Offline and Program +[options="header"] +|=================== +|Setting|Description|Data type|Notes +|Skip offline checkin if newer item Status Changed Time.|Skip offline checkin transaction (raise exception when processing) if item Status Changed Time is newer than the recorded transaction time. WARNING: The Reshelving to Available status rollover will trigger this.|True/False| +|Skip offline checkout if newer item Status Changed Time.|Skip offline checkout transaction (raise exception when processing) if item Status Changed Time is newer than the recorded transaction time. WARNING: The Reshelving to Available status rollover will trigger this.|True/False| +|Skip offline renewal if newer item Status Changed Time.|Skip offline renewal transaction (raise exception when processing) if item Status Changed Time is newer than the recorded transaction time.
WARNING: The Reshelving to Available status rollover will trigger this.|True/False| +|Disable automatic print attempt type list|Disable automatic print attempts from staff client interfaces for the receipt types in this list. Possible values: "Checkout", "Bill Pay", "Hold Slip", "Transit Slip", and "Hold/Transit Slip". This is different from the Auto-Print checkbox in the pertinent interfaces in that it disables automatic print attempts altogether, rather than encouraging silent printing by suppressing the print dialogue. The Auto-Print checkbox in these interfaces has no effect on the behavior for this setting. In the case of the Hold, Transit, and Hold/Transit slips, this also suppresses the alert dialogues that precede the print dialogue (the ones that offer Print and Do Not Print as options).|Text| +|Retain empty bib records|Retain a bib record even when all attached copies are deleted|True/False| +|Sending email address for patron notices|This email address is for automatically generated patron notices (e.g. email overdues, email holds notification).
It is good practice to set up a generic account, like info@nameofyourlibrary.org, so that one person’s individual email inbox doesn’t get cluttered with emails that were not delivered.|Text| +|=================== + +((("Receipt Templates", "Library Settings Editor"))) +((("SMS Settings", "Library Settings Editor"))) +((("Text Messaging", "Library Settings Editor"))) + +[[lse-receipt]] +.Receipt Templates and SMS Text Message +[options="header"] +|====================================== +|Setting|Description|Data type|Notes +|Content of alert_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(alert_text)%|Text| +|Content of event_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(event_text)%|Text| +|Content of footer_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(footer_text)%|Text| +|Content of header_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(header_text)%|Text| +|Content of notice_text include|Text/HTML/Macros to be inserted into receipt templates in place of %INCLUDE(notice_text)%|Text| +|Disable auth requirement for texting call numbers.|Disable authentication requirement for sending call number information via SMS from the OPAC.|True/False| +|Enable features that send SMS text messages.|Current features that use SMS include hold-ready-for-pickup notifications and a "Send Text" action for call numbers in the OPAC. If this setting is not enabled, the SMS options will not be offered to the user. 
Unless you are carefully silo-ing patrons and their use of the OPAC, the context org for this setting should be the top org in the org hierarchy, otherwise patrons can trample their user settings when jumping between orgs.|True/False| +|====================================== + +((("Security", "Library Settings Editor"))) + +[[lse-security]] +.Security +[options="header"] +|======== +|Setting|Description|Data type|Notes +|Default level of patrons' internet access|Enter numbers 1 (Filtered), 2 (Unfiltered), or 3 (No Access)|Number| +|Maximum concurrently active self-serve password reset requests|Prevent the creation of new self-serve password reset requests until the number of active requests drops back below this number.|Number| +|Maximum concurrently active self-serve password reset requests per user|When a user has more than this number of concurrently active self-serve password reset requests for their account, prevent the user from creating any new self-serve password reset requests until the number of active requests for the user drops back below this number.|Number| +|OPAC Inactivity Timeout (in seconds)|Number of seconds of inactivity before OPAC accounts are automatically logged out.|Number| +|Obscure the Date of Birth field|When true, the Date of Birth column in patron lists will default to Not Visible, and in the Patron Summary sidebar the value will be hidden unless the field label is clicked.|True/False| +|Offline: Patron usernames allowed|During offline circulations, allow patrons to identify themselves with +usernames in addition to barcode.
For this setting to work, a barcode format must also be defined|True/False| +|Patron opt-in boundary|This determines the depth above which patrons must be opted in, and below which patrons are assumed to be opted in.|Text| +|Patron opt-in default|This is the default depth at which a patron is opted in; it is calculated as an org unit relative to the current workstation.|Text| +|Patron: password from phone #|If true, the last 4 digits of the patron's phone number are used as the password for new accounts (the password must still be changed at first OPAC login)|True/False| +|Persistent login duration|How long a persistent login lasts, e.g. '2 weeks'|Duration| +|Self-serve password reset request time-to-live|Length of time (in seconds) a self-serve password reset request should remain active.|Duration| +|Staff login inactivity timeout (in seconds)|Number of seconds of inactivity before the staff client prompts for login and password.|Number| +|======== + +((("Self Check", "Library Settings Editor"))) + +[[lse-selfcheck]] +.Self Check and Others +[options="header"] +|===================== +|Setting|Description|Data type|Notes +|Audio Alerts|Use audio alerts for selfcheck events.|True/False| +|Block copy checkout status|List of copy status IDs that will block checkout even if the generic COPY_NOT_AVAILABLE event is overridden.|Number|Look up copy status ID from Server Admin. +|Patron login timeout (in seconds)|Number of seconds of inactivity before the patron is logged out of the selfcheck interface.|Duration| +|Pop-up alert for errors|If true, checkout/renewal errors will cause a pop-up window in addition to the on-screen message.|True/False| +|Require Patron Password|If true, patrons will be required to enter their password in addition to their username/barcode to log into the selfcheck interface.|True/False|This replaced "Require patron password" +|Require patron password||True/False|This was replaced by "Require Patron Password" and is currently invalid.
+|Selfcheck override events list|List of checkout/renewal events that the selfcheck interface should automatically override instead of alerting and stopping the transaction.|Text| +|Workstation Required|All selfcheck stations must use a workstation.|True/False| +|Default display grouping for serials distributions presented in the OPAC.|Default display grouping for serials distributions presented in the OPAC. This can be "enum" or "chron".|Text| +|Previous issuance copy location|When a serial issuance is received, copies (units) of the previous issuance will be automatically moved into the configured shelving location.|Selection List| +|Maximum redirect lookups|For URLs returning 3XX redirects, this is the maximum number of redirects we will follow before giving up.|Number| +|Maximum wait time (in seconds) for a URL to lookup|If we exceed the wait time, the URL is marked as a "timeout" and the system moves on to the next URL.|Duration| +|Number of URLs to test in parallel|URLs are tested in batches. This number defines the size of each batch, and it directly relates to the number of back-end processes performing URL verification.|Number| +|Number of seconds to wait between URL test attempts|Throttling mechanism for batch URL verification runs.
Each running process will wait this number of seconds after a URL test before performing the next.|Duration| +|===================== + +((("Vandelay", "Library Settings Editor"))) + +[[lse-vandelay]] +.Vandelay +[options="header"] +|======== +|Setting|Description|Data type|Notes +|Default Record Match Set|Sets the Default Record Match set |Selection List|Populated by the Vandelay Record Match Sets +|Vandelay Default Barcode Prefix|Apply this prefix to any auto-generated item barcode|Text| +|Vandelay Default Call Number Prefix|Apply this prefix to any auto-generated item call numbers.|Text| +|Vandelay Default Circulation Modifier|Default circulation modifier value for imported items|Selection List|Populated by your Circulation Modifiers. +|Vandelay Default Copy Location|Default copy location value for imported items|Selection List|Populated from Shelving Locations +|Vandelay Generate Default Barcodes|Auto-generate default item barcodes when no item barcode is present|True/False| +|Vandelay Generate Default Call Numbers|Auto-generate default item call numbers when no item call number is present|True/False|These are pulled from the MARC Record. +|======== + +[#data_types] +=== Data Types === + +((("Data Types", "Library Settings Editor"))) + +Acceptable formats for each setting type are listed below. Quotation +marks are never required when updating settings in the staff client. + +.Data Types in the Library Settings Editor +[options="header"] +|============= +|Data type|Formatting +|True/False|Boolean True/False drop down +|Number|Enter a numerical value (decimals allowed in price settings) +|Duration|Enter a number followed by a space and any of the following units: minutes, hours, days, months (30 minutes, 2 days, etc) +|Selection list|Choose from a drop-down list of options (e.g. 
copy status, copy location) +|Text|Free text +|============= diff --git a/docs/modules/admin/pages/lsa-address_alert.adoc b/docs/modules/admin/pages/lsa-address_alert.adoc new file mode 100644 index 0000000000..c6e8d9e84c --- /dev/null +++ b/docs/modules/admin/pages/lsa-address_alert.adoc @@ -0,0 +1,129 @@ += Address Alert = +:toc: + +indexterm:[address alerts] + +The Address Alert module gives administrators the ability to notify staff with a custom message when +addresses with certain patterns are entered in patron records. + +This feature only serves to provide pertinent information to your library system's circulation staff during the registration process. An alert will not prevent the new patron account from being registered and the information will not be permanently associated with the patron account. + +To access the Address Alert module, select *Administration* -> *Local Administration* -> *Address Alerts*. + +[NOTE] +========== +You must have Local Administrator permissions or ADMIN_ADDRESS_ALERT permission to access the Address Alert module. +========== + +== General Usage Examples == + +- Alert staff when an address for a large apartment is entered to prompt them to ask for unit number. +- Alert staff when the address of a hotel or other temporary housing is entered. +- Alert staff when an address for a different country is entered. +- Alert staff when a specific city or zip code is entered if that city or zip code needs to be handled in a special way. If you have a neighboring city that you don't have a reciprocal relationship with, you could notify staff that a fee card is required for this customer. + +== Access Control and Scoping == + +Each address alert is tied to an Org Unit and will only be matched against staff client instances of that Org Unit and its children. + +When viewing the address alerts you will only see the alerts associated with the specific org unit selected in the *"Context Org Unit"* selection box. 
You won't see alerts associated with parent org units, so the list of alerts isn't a list of all alerts that may affect your org unit, only of the ones that you can edit. + +The specific permission that controls access to configuring this feature is ADMIN_ADDRESS_ALERT. Local Administrator level users will already have this permission. It is possible for the Local Administrator to grant this permission to other staff. + +== Adding a new Address Alert == + +How to add an address to the alert list: + +. Log into the Evergreen Staff Client using a Local Administrator account or another account that has been granted the proper permission. +. Click on Administration -> Local Administration -> Address Alerts. +. Click "New Address Alert." +. A form will open with the following fields to fill out: ++ +.New Address Alert Fields +|=== +|*Field* |*Description* +| Owner |Which Org Unit owns this alert. Set this to your system or branch. +| Active |Check-box that controls if the alert is active or not. Inactive alerts are not processed. +| Match All Fields |Check-box that controls whether all the fields need to match to trigger the alert (checked), or only at least one field needs to match (unchecked). +| Alert Message |Message that will be displayed to staff when this alert is triggered. +| Street (1) |Street 1 field regular expression. +| Street (2) |Street 2 field regular expression. +| City |City regular expression. +| County |County regular expression. +| State |State regular expression. +| Country |Country regular expression. +| Postal Code |Postal Code regular expression. +| Address Alert ID |Displays the internal database id for the alert after the alert has been saved. +| Billing Address |Check-box that specifies that the alert will only match a billing address if checked. +| Mailing Address |Check-box that specifies that the alert will only match a mailing address if checked. +|=== ++ +. Click Save once you have finished.
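Each pattern field above is matched against the entered address as a regular expression. Evergreen evaluates the patterns inside PostgreSQL, but as a rough sanity check you can try a pattern with any regex engine. A minimal Python sketch (the `alert_matches` helper is hypothetical, and Python's regex dialect differs slightly from PostgreSQL's):

```python
import re

# Evergreen matches each address field against its pattern as a
# case-insensitive, unanchored regular expression (like a search).
# Hypothetical helper for trying out patterns before saving an alert.
def alert_matches(pattern: str, field_value: str) -> bool:
    return re.search(pattern, field_value, re.IGNORECASE) is not None

print(alert_matches(r"1212 Evergreen Lane.*", "1212 evergreen lane Apt 5"))  # True
print(alert_matches(r"(Emeryville|San Jose|San Francisco)", "Oakland"))      # False
```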
+ +== Editing an Address Alert == + +To make changes to an existing alert, double click on the alert in the list. The editing form will appear; make your changes, then click Save or Cancel when you are done. + +If you don't see your alerts, make sure the *"Context Org Unit"* selection box has the correct Org Unit selected. + +== Deleting an Address Alert == + +To delete one or more alerts, click the selection check-box for all alerts you would like to delete. Then click the "Delete Selected" button at the top of the screen. + +== Staff View of Address Alerts == + +When an Address Alert is triggered by a matching address, staff will see the address block highlighted with a red dashed line, along with an *"Address Alert"* block which contains the alert message. + +Here is an example of what staff would see. + +image::media/lsa-address_alert_staff_view.png[Address Alert Staff View] + +== Regular Expressions / Wildcards == + +All of the patterns entered to match the various address fields are evaluated as case-insensitive regular expressions by default. + +[NOTE] +========== +Address Alerts use the POSIX regular expressions included in the PostgreSQL database engine. See the PostgreSQL documentation for full details. +========== + +If you want to do a case-sensitive match, prepend the pattern with "(?c)". + +The simplest regular expression that acts as a wildcard is ".*", which matches any character zero or more times. + +== Examples == + +.Apartment address +Match an apartment address to prompt for a unit number. + +. Choose *Owner* Org Unit. +. Active = Checked +. Match All Fields = Checked +. Alert Message = "This is a large apartment building. Please ask the customer for the unit number." +. Street (1) = "1212 Evergreen Lane.*" +. City = "mytown" + +.All addresses on street +Match all addresses on a certain street. Matches both "Ave" and "Avenue" because of the ending wildcard. + +. Choose *Owner* Org Unit. +. Active = Checked +. Match All Fields = Checked +. 
Alert Message = "This street is in a different county, please set up a reciprocal card." +. Street (1) = ".* Evergreen Ave.*" +. City = "mytown" + +.Match list of cities +Match several different cities with one alert. This could be used if certain cities don't have reciprocal agreements. Note the use of parentheses and the | character to separate the different options. + +. Choose *Owner* Org Unit. +. Active = Checked +. Match All Fields = Checked +. Alert Message = "Customer must purchase a Fee card." +. City = "(Emeryville|San Jose|San Francisco)" + +== Development == + +Links to resources with more information on how and why this feature was developed and where the various source files are located. + +- Launchpad ticket for the feature request and development of address alerts - https://bugs.launchpad.net/evergreen/+bug/898248 diff --git a/docs/modules/admin/pages/lsa-barcode_completion.adoc b/docs/modules/admin/pages/lsa-barcode_completion.adoc new file mode 100644 index 0000000000..2f0e32c635 --- /dev/null +++ b/docs/modules/admin/pages/lsa-barcode_completion.adoc @@ -0,0 +1,248 @@ += Barcode Completion = +:toc: + +indexterm:[Barcode Completion,Lazy Circ] + +The Barcode Completion feature gives users the ability to enter only the +unique part of patron and item barcodes. This can significantly reduce the +amount of typing required for manual barcode input. + +This feature can also be used if there is a difference between what the +barcode scanner outputs and what is stored in the database, as long as the +barcode that is stored has more characters than what the scanner is +outputting. Barcode Completion is additive only; you cannot use it to match a +stored barcode that has fewer characters than what is entered. For example, if +your barcode scanners previously output *a123123b* and now exclude the prefix +and suffix, you could match both formats using Barcode Completion rules.
+ +Because this feature adds an extra database search for each enabled rule to +the process of looking up a barcode, it can add extra delays to the check-out +process. Please test in your environment before using in production. + +== Scoping and Permissions == + +*Local Administrator* permission is needed to access the admin interface of the +Barcode Completion feature. + +Each rule requires an owner org unit, which is how scoping of the rules is +handled. Rules are applied for staff users with the same org unit or +descendants of that org unit. + + +== Access Points == + +The admin interface for Barcode Completion is located under *Administration* +-> *Local Administration* -> *Barcode Completion*. + +image::media/lsa-barcode_completion_admin.png[Barcode Completion Admin List] + +The barcode completion functionality is available at the following interfaces. + +=== Check Out Step 1: Lookup Patron by Barcode === + +image::media/Barcode_Checkout_Patron_Barcode.png[Patron Barcode Lookup for Checking Out] + +=== Check Out Step 2: Scanning Item Barcodes === + +image::media/Barcode_Checkout_Item_Barcode.png[Item Barcode at Check Out] + +=== Staff Client Place Hold from Catalog === + +image::media/Barcode_OPAC_Staff_Place_Hold.png[Patron Barcode Lookup for Staff Placing Hold] + +=== Check In === + +image::media/Barcode_Check_In.png[Item Barcode at Check In] + +=== Item Status === + +image::media/Barcode_Item_Status.png[Item Barcode at Item Status screen] + + +NOTE: Barcode completion is also available during check out if library +setting "Load patron from Checkout" is set. +(Automatically detects if an actor/user barcode is scanned during +check out, and starts a new check out session using that user.) + +NOTE: Barcode Completion does not work in the + *Search for Patron [by Name]* interface. 
+ + +== Multiple Matches == + +If multiple barcodes are matched, say if you have both "123" and "00000123" +as valid barcodes, you will receive a list of all the barcodes that match all +the rules that you have configured. It doesn't stop after the first rule +that matches, or after the first valid barcode is found. + +image::media/lsa-barcode_completion_multiple.png[Barcode Completion Multiple Matches] + +== Barcode Completion Data Fields == + +The following data fields can be set for each Barcode Completion rule. + +.Barcode Completion Fields +|======= +|*Active* | Check to indicate entry is active. *Required* +|*Owner* | Setting applies to this Org Unit and to all children. *Required* +|*Prefix* | Sequence that appears at the beginning of the barcode. +|*Suffix* | Sequence that appears at the end of the barcode. +|*Length* | Total length of the barcode. +|*Padding* | Character that pads out non-unique characters in the barcode. +|*Padding At End* | Check if the padding starts at the end of the barcode. +|*Applies to Items*| Check if entry applies to item barcodes. +|*Applies to Users*| Check if entry applies to user barcodes. +|======= + + +.Length and Padding + +Length and Padding are related; you cannot use one without the other. If a barcode +has to be a certain length, then it needs to be able to be padded out to that length. +If a barcode has padding, then we need to know the maximum length that we need to pad out +to. If Length is set to blank or zero, or Padding is left blank, then both are +ignored. + + +.Applies to Items/Users +One or both of these options must be checked for the rule to have any effect.
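The Length/Padding relationship described above can be sketched in a few lines of code (an illustrative model only, not Evergreen's server-side implementation):

```python
def pad_barcode(partial: str, length: int, padding: str, pad_at_end: bool = False) -> str:
    """Pad a partial barcode out to `length` using the `padding` character.

    A blank/zero Length or a blank Padding disables both settings,
    mirroring the rule described in the text above.
    """
    if not length or not padding:
        return partial  # Length and Padding are ignored unless both are set
    fill = padding * (length - len(partial))
    return partial + fill if pad_at_end else fill + partial

print(pad_barcode("123", 10, "0"))        # 0000000123
print(pad_barcode("123", 10, "0", True))  # 1230000000
print(pad_barcode("123", 0, "0"))         # 123 (Length unset: both ignored)
```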
+ +image::media/lsa-barcode_completion_fields.png[Barcode Completion Data Fields] + +== Create, Update, Filter, Delete/Disable Rules == + +image::media/lsa-barcode_completion_admin.png[Barcode Completion Admin] + +In the Barcode Completion admin interface at *Administration* -> *Local Administration* +-> *Barcode Completion* you can create, update and disable rules. + +=== Create Rules === +To create a new rule, click on the *New* button in the upper right corner. +When you are done with editing the new rule, click the *Save* button. If +you want to cancel the new rule creation, click the *Cancel* button. + +=== Update Rules === +To edit a rule, double click on the rule in the main list. + +=== Filter Rules === +It may be useful to filter the rules list if there are a large number of +rules. Click on the *filter* link to bring up the *Filter Results* dialog +box. You can filter on any of the data fields, and you can set up multiple +filter rules. Click *Apply* to enable the filter rules; only the rows that match +will now be displayed. + +To clear out the filter rules, delete all of the filter rules by clicking the +*X* next to each rule, and then click *Apply*. + +=== Delete/Disable Rules === +It isn't possible to delete a rule from the database from the admin interface. +If a rule is no longer needed, set *Active* to "False" to disable it. To keep +the number of rules down, reuse inactive rules when creating new rules. + +== Examples == + +In all these examples, the unique part of the barcode is *123*. So that is +all that users will need to type to match the full barcode. + +=== Barcode With Prefix and Padding === + +Barcode: *4545000123* + +To match this 10-character barcode by typing only *123*, we need the +following settings. + + * *Active* - Checked + * *Owner* - Set to your org unit. + * *Prefix* - 4545 - This is the prefix that the barcode starts with. + * *Length* - 10 - Total length of the barcode.
+ * *Padding* - 0 - Zeros will be used to pad out non-significant parts of the barcode. + * *Applies to Items* and/or *Applies to Users* - Checked + +The system takes the *123* that you entered and adds the prefix to the beginning +of it, then adds zeros between the prefix and your number to pad it out to +10 characters. Then it searches the database for that barcode. + +=== Barcode With Suffix === + +Barcode: *123000book* + +To match this 10-character barcode by typing only *123*, we need the +following settings. + + * *Active* - Checked + * *Owner* - Set to your org unit. + * *Suffix* - book - This is the suffix that the barcode ends with. + * *Length* - 10 - Total length of the barcode. + * *Padding* - 0 - Zeros will be used to pad out non-significant parts of the barcode. + * *Padding at End* - Checked + * *Applies to Items* and/or *Applies to Users* - Checked + +The system takes the *123* that you entered and adds the suffix to the end of it, +then adds zeros between your number and the suffix to pad it out to 10 +characters. Then it searches the database for that barcode. + +=== Barcode With Left Padding === + +Barcode: *0000000123* + +To match this 10-character barcode by typing only *123*, we need the +following settings. + + * *Active* - Checked + * *Owner* - Set to your org unit. + * *Length* - 10 - Total length of the barcode. + * *Padding* - 0 - Zeros will be used to pad out non-significant parts of the barcode. + * *Applies to Items* and/or *Applies to Users* - Checked + +The system takes the *123* that you entered, then adds zeros to the left of your +number to pad it out to 10 characters. Then it searches the +database for that barcode. + +=== Barcode With Right Padding === + +Barcode: *1230000000* + +To match this 10-character barcode by typing only *123*, we need the +following settings. + + * *Active* - Checked + * *Owner* - Set to your org unit. + * *Length* - 10 - Total length of the barcode.
+ * *Padding* - 0 - Zeros will be used to pad out non-significant parts of the barcode. + * *Padding at End* - Checked + * *Applies to Items* and/or *Applies to Users* - Checked + +The system takes the *123* that you entered, then adds zeros to the right of your +number to pad it out to 10 characters. Then it searches the +database for that barcode. + +=== Barcode of any Length with Prefix and Suffix === + +Barcode: *a123b* + +To match this 5-character barcode by typing only *123*, we need the +following settings. This use of Barcode Completion doesn't save many +keystrokes, but it does allow you to handle the case where your barcode +scanners at one point were set to output a prefix and suffix which was stored +in the database. Now your barcode scanners no longer include the prefix and suffix. +These settings will simply add the prefix and suffix to any barcode entered and +search for that. + + * *Active* - Checked + * *Owner* - Set to your org unit. + * *Length/Padding* - 0/null - Set the length to 0 and/or leave the padding blank. + * *Prefix* - a - This is the prefix that the barcode starts with. + * *Suffix* - b - This is the suffix that the barcode ends with. + * *Applies to Items* and/or *Applies to Users* - Checked + +The system takes the *123* that you entered, then adds the prefix and suffix +specified. Then it searches the database for that barcode. Because no length +or padding was entered, this rule will add the prefix and suffix to any +barcode that is entered and then search for that valid barcode. + + +== Testing == + +To test this feature, set up the rules that you want, then set up items/users +with barcodes that should match. Then try scanning the short version of +those barcodes in the various supported access points.
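Putting the examples above together, the way a rule expands a short barcode into a candidate for the database lookup can be modeled like this (an illustrative sketch; the `expand` helper is hypothetical and not part of Evergreen):

```python
def expand(partial, prefix="", suffix="", length=0, padding="", pad_at_end=False):
    # A rule adds its prefix/suffix, then pads the remainder out to
    # Length with the Padding character, before or after the partial
    # barcode depending on the "Padding At End" setting.
    body = partial
    if length and padding:
        room = length - len(prefix) - len(suffix) - len(body)
        fill = padding * max(room, 0)
        body = body + fill if pad_at_end else fill + body
    # The resulting candidate is what gets searched in the database.
    return prefix + body + suffix

print(expand("123", prefix="4545", length=10, padding="0"))                  # 4545000123
print(expand("123", suffix="book", length=10, padding="0", pad_at_end=True)) # 123000book
print(expand("123", prefix="a", suffix="b"))                                 # a123b
```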
diff --git a/docs/modules/admin/pages/lsa-standing_penalties.adoc b/docs/modules/admin/pages/lsa-standing_penalties.adoc new file mode 100644 index 0000000000..59eb0b8acd --- /dev/null +++ b/docs/modules/admin/pages/lsa-standing_penalties.adoc @@ -0,0 +1,25 @@ += Standing Penalties = +:toc: + +In versions of Evergreen prior to 2.3, the following penalty types were +available by default. When applied to user accounts, these penalties prevented +users from completing the following actions: + +* *CIRC* - Users cannot check out items +* *HOLD* - Users cannot place holds on items +* *RENEW* - Users cannot renew items + +In version 2.3, two new penalty types are available in Evergreen: + +* *CAPTURE* - This penalty prevents a user's holds from being captured. If the +_HOLD_ penalty has not been applied to a user's account, then the patron can place a +hold, but the targeted item will not appear on a pull list and will not be +captured for a hold if it is checked in. +* *FULFILL* - This penalty prevents a user from checking out an item that is on +hold. If the _HOLD_ and _CAPTURE_ penalties have not been applied to a user's +account, then the user can place a hold on an item, and the item can be captured +for a hold. However, when they try to check out the item, the circulator will +see a pop-up box with the name of the penalty type, _FULFILL_. The circulator +must correct the problem with the account or must override the penalty to check +out the item. + diff --git a/docs/modules/admin/pages/lsa-statcat.adoc b/docs/modules/admin/pages/lsa-statcat.adoc new file mode 100644 index 0000000000..eb7f3a8632 --- /dev/null +++ b/docs/modules/admin/pages/lsa-statcat.adoc @@ -0,0 +1,88 @@ += Statistical Categories Editor = +:toc: + +This is where you configure your statistical categories (stat cats). Stat cats are a way to save and report on additional information that doesn't fit elsewhere in Evergreen's default records. It is possible to have stat cats for copies or patrons.
+ +1. Click *Administration -> Local Administration -> Statistical Categories Editor.* + +2. To create a new stat cat, enter the name of the category and select either _patron_ or _copy_ from the *Type* dropdown menu. Each category type has a number of options you may set. + +*Copy Statistical Categories* + +Copy stat cats appear in the _Holdings Editor_. You might use copy stat cats to track books you have bought from a specific vendor, or donations. + +An example of the _Create a new statistical category_ controls for copies: + +image::media/lsa-statcat-1.png[Create copy stat cat] + +* _OPAC Visibility_: Should the category be displayed in the OPAC? +* _Required_: Must the category be assigned a value when editing the item attributes? +* _Archive with Circs_: Should the category and its values for the copy be archived with aged circulation data? +* _SIP Field_: Select the SIP field identifier that will contain the category and its value +* _SIP Format_: Specify the SIP format string + +Some sample copy stat cats: + +image::media/lsa-statcat-2.png[Sample copy stat cats] + +To add an entry, select _Add_. Due to a known bug, individual entries for stat cats cannot be edited in the web client. + +Stat cats can be edited or deleted by clicking on _Edit_. + +This is how the copy stat cats appear in the _Holdings Editor_: + +image::media/lsa-statcat-3.png[Stat cats in Holdings Editor] + +You can use the _Filter by Library_ selector to display copy stat cats owned by a particular library: + +image::media/lsa-statcat-3a.png[Stat cat library selector] + +*Patron Statistical Categories* + +Patron stat cats can be used to keep track of information such as a patron's school affiliation, membership in a group like the Friends of the Library, or patron preferences. They appear in the fourth section of the _Patron Registration_ or _Edit Patron_ screen, under the label _Statistical Categories_. 
+ +An example of the _Create a new statistical category_ controls for patrons: + +image::media/lsa-statcat-4.png[Create patron stat cat] + +* _OPAC Visibility_: Should the category be displayed in the OPAC? +* _Required_: Must the category be assigned a value when registering a new patron or editing an existing one? +* _Archive with Circs_: Should the category and its values for the patron be archived with aged circulation data? +* _Allow Free Text_: May the person registering/editing the patron information supply their own value for the category? +* _Show in Summary_: Display the category and its value in the patron summary view? +* _SIP Field_: Select the SIP field identifier that will contain the category and its value +* _SIP Format_: Specify the SIP format string + +[WARNING] +.WARNING +===================================== +If you make a category *required* and also *disallow free text*, make sure that you populate an entry list for the category so that the user may select a value. Failure to do so will result in an unsubmittable patron registration/edit form! +===================================== + +Some sample patron stat cats: + +image::media/lsa-statcat-5.png[Sample patron stat cats] + +To add an entry, click on _Add_ in the category row under the _Add Entry_ column: + +image::media/lsa-statcat-6.png[Add patron category entry] + +Stat cats can be edited or deleted by clicking on _Edit_. + +Due to a known bug, individual entries for stat cats cannot be edited in the web client. + +An *organizational unit* (consortium, library system, branch library, sub-library, etc.) may create their own categories and entries, or supplement categories defined by a higher-level org unit with their own entries. + +An entry can be set as the *default* entry for a category and for an org unit. If an entry is set as the default, it will be automatically selected in the patron edit screen, provided no other value has been previously set for the patron.
Only one default may be set per category for any given org unit. + +Lower-level org unit defaults override defaults set for higher-level org units; but in the absence of a default set for a given org unit, the nearest parent org unit default will be selected. + +Default entries for the focus location org unit are marked with an asterisk in the entry dropdowns. + +This is how patron stat cats appear in the patron registration/edit screen: + +image::media/lsa-statcat-8.png[Patron stat cats in registration screen] + +The yellow highlight denotes a stat cat that is required, and you will not be allowed to save or create a patron unless a value is entered. + +To remove a stat cat value, select the text in the right-hand box and use your keyboard's backspace or delete key. diff --git a/docs/modules/admin/pages/lsa-work_log.adoc b/docs/modules/admin/pages/lsa-work_log.adoc new file mode 100644 index 0000000000..42e179d97f --- /dev/null +++ b/docs/modules/admin/pages/lsa-work_log.adoc @@ -0,0 +1,20 @@ += Work Log = +:toc: + +indexterm:[Work Log] +indexterm:[staff client, Work Log] +indexterm:[workstation, Work Log] + + +== Expanding the Work Log == + +The Work Log records checkins, checkouts, patron registration, patron editing, renewals, payments and holds placed from within the patron record for a given login. + +To access the Work Log go to *Administration* -> *Local Administration* -> *Work Log*. + +There are two separate logs, *Most Recently Logged Staff Actions* and *Most Recently Affected Patrons*. The *Most Recently Logged Staff Actions* log records transactions in the order they occurred on the workstation. The *Most Recently Affected Patrons* log lists the patrons most recently affected by those transactions. + +The Work Log can contain a maximum number of transactions; this number is set via the xref:admin:librarysettings.adoc[Library Settings Editor]. These settings are in the GUI group.
*Work Log: Maximum Actions Logged* affects the number of transactions listed under the *Most Recently Logged Staff Actions* and *Work Log: Maximum Patrons Logged* limits the number of patrons that are listed in the log. + +image::worklog.png[Work Log] + diff --git a/docs/modules/admin/pages/marc_templates.adoc b/docs/modules/admin/pages/marc_templates.adoc new file mode 100644 index 0000000000..cac1fb9210 --- /dev/null +++ b/docs/modules/admin/pages/marc_templates.adoc @@ -0,0 +1,63 @@ += MARC Templates = +:toc: + +MARC Templates make the cataloging process more efficient for catalogers. At this time, MARC Templates have to be +created on the server, rather than in the Web client. + +== Adding MARC Templates == + +. Create a MARC template in the directory _/openils/var/templates/marc/_. It should be in XML format. Here is an + example file `k_book.xml`: ++ +[source,xml] +--------------------------------------------------------------------- +<record xmlns="http://www.loc.gov/MARC21/slim"> + <leader>00620cam a2200205Ka 4500</leader> + <controlfield tag="008">070101s eng d</controlfield> + <!-- datafield entries omitted --> +</record> +--------------------------------------------------------------------- ++ +. Add the template to the marctemplates list in the _open-ils.cat_ section of the Evergreen configuration + file `opensrf.xml`. +. Restart perl services for changes to take effect with the command + `/openils/bin/osrf_control -l --restart --service=open-ils.cat` diff --git a/docs/modules/admin/pages/multilingual_search.adoc b/docs/modules/admin/pages/multilingual_search.adoc new file mode 100644 index 0000000000..6dea7d67d9 --- /dev/null +++ b/docs/modules/admin/pages/multilingual_search.adoc @@ -0,0 +1,67 @@ += Multilingual Search in Evergreen = +:toc: + +It is now possible to search for items that contain multiple languages in the Evergreen catalog.
This will help facilitate searching for bilingual and multilingual materials, including specific translations, alternative languages, and to exclude specific translations from a search. + +To identify the language of materials, Evergreen looks at two different fields in the MARC bibliographic record: + +* 008/35-37: the language code located in characters 35-37 of the 008 tag +* 041$abdefgm: the 041 tag, subfields $abdefgm, which contain additional language codes + +Multilingual searches can be conducted by constructing searches using specific language codes as a filter. To search using specific language codes, use the Record Attribute Definition name _item_lang_ followed by the appropriate MARC Code for Languages. For example, _item_lang(spa)_ will search only for Spanish language materials. + +The language filter can be appended to any search. For example, a title search for _pippi longstocking item_lang(eng,swe)_ will search for English or Swedish language publications of the title. + +image::media/multilingual_search1.png[] + +== Search Syntax == + +To search for materials that contain multiple languages (Boolean AND), the search filters can be constructed in the following ways: + +. Implicit Boolean filtering: _item_lang(eng) item_lang(spa)_ +.. Evergreen assumes a Boolean AND between the search filters +. Explicit Boolean filtering: _item_lang(eng) && item_lang(spa)_ +.. The double ampersands (&&) explicitly tell Evergreen to apply a Boolean AND to the search filters + +To search for materials that contain at least one of the searched languages (Boolean OR), the search filters can be constructed in the following ways: + +. List filtering: _item_lang(eng,spa)_ +.. Listing the language codes, separated by a comma, within the search filter, tells Evergreen to apply a Boolean OR to the search filters +. Explicit Boolean filtering: _item_lang(eng) || item_lang(spa)_ +.. 
The double pipes (||) explicitly tell Evergreen to apply a Boolean OR to the search filters. + +To search for materials that contain a specific language and exclude another language from the search results (Boolean NOT), the search filters can be constructed as follows: + +. Boolean filtering: _item_lang(spa) -item_lang(eng)_ +.. The dash (-) explicitly tells Evergreen to apply a Boolean NOT to the English language search filter. Evergreen assumes a Boolean AND between the search filters. + +To exclude multiple languages from search results (Boolean NOT), the search filters can be constructed as follows: + +. Boolean filtering: _-item_lang(eng) -item_lang(spa)_ +.. The dash (-) explicitly tells Evergreen to apply a Boolean NOT to both search filters. Evergreen assumes a Boolean AND between the search filters. + +To conduct a search for materials that do not contain at least one of the languages searched (Boolean “NOT” and “OR”), the search filters can be constructed in the following ways: + +. List filtering: _-item_lang(eng,spa)_ +. Explicit Boolean filtering: _-item_lang(eng) || -item_lang(spa)_ + + +== Advanced Search == + +Within the Advanced Search interface, multiple languages can be selected from the Language filter by holding down the Ctrl key on the keyboard and selecting the desired languages. This will apply a Boolean OR operator to the language filters. + +image::media/multilingual_search2.PNG[] + + +== Adding Subfields to the Index == + +Additional subfields for the 041 tag, such as h, j, k, and n, can be added to the index through the Record Attribute Definitions interface. Any records containing the additional subfields will need to be reingested into the database after making changes to the Record Attribute Definition. + +. Go to *Administration>Server Administration>Record Attribute Definitions*. +. Click *Next* to locate the _item_lang_ record attribute definition. +. 
To edit the definition, double click on the item_lang row and the configuration window will appear. +. In the _MARC Subfields_ field, add the subfields you want included in the index. +. Click *Save*. + +image::media/multilingual_search3.PNG[] + diff --git a/docs/modules/admin/pages/patron_address_by_zip_code.adoc b/docs/modules/admin/pages/patron_address_by_zip_code.adoc new file mode 100644 index 0000000000..da53c8e79a --- /dev/null +++ b/docs/modules/admin/pages/patron_address_by_zip_code.adoc @@ -0,0 +1,158 @@ += Patron Address City/State/County Pre-Populate by ZIP Code = +:toc: + +indexterm:[zips.txt, Populate Address by ZIP Code, ZIP code] + +This feature saves staff time and increases accuracy when entering patron address information by +automatically filling in the City, State and County information based on the +ZIP code entered by the staff member. + +*Released:* Evergreen 0.1, available in all versions. + +Please be aware of the following when using this feature. + +* ZIP codes do not always match 1 to 1 with City, State and County. ZIP codes were designed for postal delivery and represent postal delivery zones that may cover more than one city, state or county. +** It is currently only possible to have one match per ZIP code, but you can add an alert to those entries to prompt staff to double check the entered data. +* Only the first 5 digits of the ZIP are used. ZIP+4 is not currently supported. +* The zips.txt data is loaded once at service startup and stored in memory, so changes to the zips.txt data file require that Evergreen be restarted. Specifically, you need to restart the "open-ils.search" OpenSRF service. + + +== Scoping and Permissions == + +There are no staff client permissions associated with this feature since there is no staff client interface. + +This feature affects all users of the system; there is no way to have separate settings per Org Unit. 
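Since the zips.txt data is read only at service startup, a malformed row will not be noticed until after a restart. A quick sanity check can be run on the file first; this is only a sketch, assuming the nine-field pipe-delimited layout described under Setup Steps below, with the default file path:

```shell
# Flag rows that do not have 9 pipe-delimited fields, or whose
# ZIP field (4th column) is not exactly 5 digits.
awk -F'|' '
    NF != 9 { printf "line %d: expected 9 fields, found %d\n", NR, NF }
    $4 !~ /^[0-9][0-9][0-9][0-9][0-9]$/ { printf "line %d: ZIP \"%s\" is not 5 digits\n", NR, $4 }
' /openils/var/data/zips.txt
```

No output means every row passed both checks; any output pinpoints a line to fix before restarting the service.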
+ +== Setup Steps == + +=== Step 1 - Setup Data File === + +The default location and name of the data file is /openils/var/data/zips.txt on your Evergreen server. You can choose a different location if needed. + +The file format of your zips.txt will look like this (delimited by the | character): + +ID|*StateAbb*|*City*|*ZIP*|*IsDefault*|StateID|*County*|AreaCode|*AlertMesg* + +The only fields that are used are *StateAbb*, *City*, *ZIP*, *IsDefault*, *County* and *AlertMesg*. + +Most fields can be left blank if the information is not available; that data simply will not be entered. + +.Data Field Descriptions +. ID - ID field to uniquely identify this row. Not required, can be left blank. +. *StateAbb* - State abbreviation like "MN" or "ND". +. *City* - Name of city. +. *ZIP* - ZIP code, only first 5 digits used. +. *IsDefault* - Must be set to 1 for the row to be used. Easy way to disable/enable a row. +. StateID - Unknown and unused. +. *County* - County name. +. AreaCode - Phone number area code, unused. +. *AlertMesg* - Message to display to staff to alert them of any special circumstances. + +TIP: The Address Alerts feature -- described in the Staff Client Sysadmin manual -- can also be used to alert staff about certain addresses. + +Here is an example of what the data file should look like. + +.Example zips.txt +---- +|MN|Moorhead|56561|1||Clay|| +|MN|Moorhead|56562|1||Clay|| +|MN|Moorhead|56563|1||Clay|| +|MN|Sabin|56580|1||Clay|| +|MN|Ulen|56585|1||Clay|| +|MN|Lake Itasca|56460|1||Clearwater County|| +|MN|Bagley|56621|1||Clearwater|| +|MN|Clearbrook|56634|1||Clearwater|| +|MN|Gonvick|56644|1||Clearwater|| +---- + +=== Step 2 - Enable Feature === + +The next step is to tell the system to use the zips.txt file that you created. This is done by editing /openils/conf/opensrf.xml. Look about halfway into the file and you may very well see a commented section in the file that looks similar to this: + +---- +<!-- +<zips_file>/openils/var/data/zips.txt</zips_file> +--> +---- + +Uncomment the area by removing the `<!--` and `-->` marks. 
Change the file path if you placed your file in a different location. The file should look like this after you are done. + +---- +<zips_file>/openils/var/data/zips.txt</zips_file> +---- + +.Save and Restart +Save your changes to the opensrf.xml file, restart Evergreen and restart Apache. + +NOTE: The specific OpenSRF services you need to restart are "opensrf.settings" and "open-ils.search". + +=== Step 3 - Test === + +Open up the staff client and try to register a new patron. When you get to the address section, enter a ZIP code that you know is in your zips.txt file. The data from the file that matches your ZIP will auto-fill the city, state and county fields. + +== ZIP Code Data == + +There are several methods you can use to populate your zips.txt with data. + +=== Manual Entry === + +If you only serve a few communities, entering data manually may be the simplest approach. + +=== Geonames.org Data === + +Geonames.org provides free ZIP code to city, state and county information licensed under the Creative Commons Attribution 3.0 License, which means you need to put a link to them on your website. Their data includes primary city, state and county information only. It doesn't include info about which other cities are included in a ZIP code. Visit http://www.geonames.org for more info. + +The following code example shows you how to download and reformat the data into the zips.txt format. You also have the option to filter the data to include only certain states. + +[source,bash] +---- +## How to get a generic Evergreen zips.txt for free +wget http://download.geonames.org/export/zip/US.zip +unzip US.zip +cut -f2,3,5,6 US.txt \ +| perl -ne 'chomp; @f=split(/\t/); print "|" . 
join("|", (@f[2,1,0], "1", "", $f[3], "")), "|\n";' \ +> zips.txt + +##Optionally filter the data to only include certain states +egrep "^\|(ND|MN|WI|SD)\|" zips.txt > zips-mn.txt +---- + +=== Commercial Data === + +There are many vendors that sell databases that include ZIP code to city, state and county information. A web search will easily find them. Many of the commercial vendors will include more information on which ZIP codes cover multiple cities, counties and states, which you could use to populate the alert field. + +=== Existing Patron Database === + +Another possibility is to use your current patron database to build your zips.txt. Pull out the current ZIP, city, state, county unique rows and use them to form your zips.txt. + +.Small Sites + +For sites that serve a small geographic area (less than 30 ZIP codes), an sql query like the following will create a zips.txt for you. It outputs the number of matches as the first field and sorts by ZIP code and number of matches. You would need to go through the resulting file and deal with duplicates manually. + +[source,bash] +---- +psql egdb26 -A -t -F $'|' \ + -c "SELECT count(substring(post_code from 1 for 5)) as zipcount, state, \ + city, substring(post_code from 1 for 5) as pc, \ + '1', '', county, '', '' FROM actor.usr_address \ + group by pc, city, state, county \ + order by pc, zipcount DESC" > zips.txt +---- + +.Larger Sites +For larger sites Ben Ostrowsky at ESI created a pair of scripts that handles deduplicating the results and adding in county information. Instructions for use are included in the files. + +* http://git.esilibrary.com/?p=migration-tools.git;a=blob;f=elect_ZIPs +* http://git.esilibrary.com/?p=migration-tools.git;a=blob;f=enrich_ZIPs + + +== Development == + +If you need to make changes to how this feature works, such as to add support for other postal code formats, here is a list of the files that you need to look at. + +. 
*Zips.pm* - contains code for loading the zips.txt file into memory and replying to search queries. Open-ILS / src / perlmods / lib / OpenILS / Application / Search / Zips.pm +. *register.js* - This is where patron registration logic is located. The code that queries the ZIP search service and fills the address is located here. Open-ILS / web / js / ui / default / actor / user / register.js diff --git a/docs/modules/admin/pages/patron_registration.adoc b/docs/modules/admin/pages/patron_registration.adoc new file mode 100644 index 0000000000..974271e33d --- /dev/null +++ b/docs/modules/admin/pages/patron_registration.adoc @@ -0,0 +1,63 @@ +== Patron registration administration == + +indexterm:[new patron form] +indexterm:[edit patron form] +indexterm:[patron registration form] +indexterm:[forms,new patron] +indexterm:[forms,edit patron] +indexterm:[forms,patron registration] + +=== Email addresses === + +indexterm:[patrons,email addresses] +indexterm:[email] + + +It's possible to set up the patron registration form to +either allow or disallow users to enter multiple email +addresses for a single patron, separated by a comma. + +To do this, go to Administration -> Local Administration +-> Library Settings Editor. Search for the setting called +`ui.patron.edit.au.email.regex`. + +If you'd like to allow multiple email addresses, set this +value to `^(?:(?:\b[^@,\s]+@[^@,\s]+\.[^@.,\s]+\b)(?:,\s?(?!$)|$))*$` + +If you'd like to disallow multiple email addresses, set +this value to `^(?:\b[^@,\s]+@[^@,\s]+\.[^@.,\s]+\b)$` + +=== Parent/guardian field === + +indexterm:[patrons,parent/guardian field] +indexterm:[parent] +indexterm:[guardian] +indexterm:[juvenile] + + +In addition to the standard "show" and "suggest" visibility settings, +the guardian field has a library setting called +'ui.patron.edit.guardian_required_for_juv' ("GUI: Juvenile account +requires parent/guardian"). 
When this setting is set to true, a value +will be required in the patron editor when the juvenile flag is active. + +=== Privacy waiver === + +indexterm:[Allow others to use my account] +indexterm:[checking out,materials on another patron's account] +indexterm:[holds,picking up another patron's] +indexterm:[privacy waiver] + +Patrons who wish to authorize other people to use their account may +now do so via the OPAC. In the Search and History Preferences tab +under Account Preferences, a section labeled "Allow others to use +my account" allows patrons to enter a name and indicate that the +specified person is allowed to place holds, pickup holds, view +borrowing history, or check out items on their account. This +information is displayed to circulation staff in the patron account +summary in the web client. (Staff may also add, edit, and remove +entries via the patron editor.) + +You can use the library setting called "Allow others to use patron account (privacy +waiver)," to enable or disable this feature. + diff --git a/docs/modules/admin/pages/patron_self_registration.adoc b/docs/modules/admin/pages/patron_self_registration.adoc new file mode 100644 index 0000000000..96dc1e3ac5 --- /dev/null +++ b/docs/modules/admin/pages/patron_self_registration.adoc @@ -0,0 +1,51 @@ += Patron self-registration administration = +:toc: + +== Library Settings == + +Three Library Settings are specific to patron self-registration: + + * OPAC: Allow Patron Self-Registration must be set to `True` to enable use of this feature. + + * OPAC: Patron Self-Reg. Expire Interval allows each library to set the amount of time after which pending patron accounts should be deleted. + + * OPAC: Patron Self-Reg. Display Timeout allows each library to set the amount of time after which the patron self-registration screen will timeout in the OPAC. The default is 5 minutes. 
+ +Several more Library Settings can be used to determine if a field should be required or hidden in the self-registration form: + + * GUI: Require day_phone field on patron registration + + * GUI: Show day_phone on patron registration + + * GUI: Require dob (date of birth) field on patron registration + + * GUI: Show dob field on patron registration + + * GUI: Require email field on patron registration + + * GUI: Show email field on patron registration + + * GUI: Require State field on patron registration + + * GUI: Show State field on patron registration + + * GUI: Require county field on patron registration + + * GUI: Show county field on patron registration + +Several more Library Settings can be used to verify values in certain fields and provide examples for data format on the registration form: + + * Global: Patron username format + + * GUI: Regex for phone fields on patron registration OR GUI: Regex for day_phone field on patron registration + + * GUI: Regex for email field on patron registration + + * GUI: Regex for post_code field on patron registration + + * GUI: Example for email field on patron registration + + * GUI: Example for post_code field on patron registration + + * GUI: Example for day_phone field on patron registration OR GUI: Example for phone fields on patron registration + diff --git a/docs/modules/admin/pages/permissions.adoc b/docs/modules/admin/pages/permissions.adoc new file mode 100644 index 0000000000..aff5dc8bdb --- /dev/null +++ b/docs/modules/admin/pages/permissions.adoc @@ -0,0 +1,87 @@ += User and Group Permissions = +:toc: + +It is essential to understand how user and group permissions can be used to allow +staff to fulfill their roles while ensuring that they have access only at the +appropriate level. + +Permissions in Evergreen are applied to a specific location and system depth +based on the home library of the user. 
The user will only have that permission +within the scope provided by the Depth field in relation to his/her working +locations. + +Evergreen provides group application permissions in order to restrict which +staff members have the ability to assign elevated permissions to a user, and +which staff members have the ability to edit users in particular groups. + +== Staff Accounts == + +New staff accounts are created in much the same way as patron accounts, using +_Circulation -> Register Patron_ or *Shift+F1*. Select one of the staff +profiles from the _Profile Group_ drop-down menu. + +image::media/permissions_1a.png[Permission Group dropdown in patron account] + +Each new staff account must be assigned a _Working Location_ which determines +its access level in staff client interfaces. + +. To assign a working location, open the newly created staff account using *F1* +(retrieve patron) or *F4* (patron search). +. Select _Other -> User Permission Editor_ ++ +image::media/permissions_1.png[Click User Permission Editor in the Patron's Other menu] ++ +. Place a check in the box next to the desired working location, then scroll to +the bottom of the display and click _Save_. ++ +NOTE: In multi-branch libraries it is possible to assign more than one working +location + +=== Staff Account Permissions === + +To view a detailed list of permissions for a particular Evergreen account go to +_Administration -> User Permission Editor_ in the staff client. + +=== Granting Additional Permissions === + +A _Local System Administrator (LSA)_ may selectively grant _LSA_ permissions to +other staff accounts. In the example below a _Circ +Full Cat_ account is granted +permission to process offline transactions, a function which otherwise requires +an _LSA_ login. + +. Log in as a Local System Administrator. +. 
Select _Administration -> User Permission Editor_ and enter the staff account +barcode when prompted ++ +OR ++ +Retrieve the staff account first, then select _Other -> User Permission +Editor_ ++ +. The User Permission Editor will load (this may take a few seconds). Greyed-out +permissions cannot be edited because they are either a) already granted to the +account, or b) not available to any staff account, including LSAs. ++ +image::media/profile-5.png[profile-5] ++ +1) List of permission names. ++ +2) If checked the permission is granted to this account. ++ +3) Depth limits application to the staff member's library and should be left at +the default. ++ +4) If checked this staff account will be able to grant the new privilege to +other accounts (not recommended). ++ +. To allow processing of offline transactions check the Applied column next to +_OFFLINE_EXECUTE_. ++ +image::media/profile-6.png[profile-6] ++ +. Scroll down and click Save to apply the changes. ++ +image::media/profile-7.png[profile-7] + + + diff --git a/docs/modules/admin/pages/phonelist.adoc b/docs/modules/admin/pages/phonelist.adoc new file mode 100644 index 0000000000..3969d41176 --- /dev/null +++ b/docs/modules/admin/pages/phonelist.adoc @@ -0,0 +1,186 @@ += Phonelist.pm Module = +:toc: + +== Introduction == + +PhoneList.pm is a mod_perl module for Apache that works with Evergreen +to generate calling lists for patron holds or overdues. It outputs a CSV file +that can be fed into an auto-dialer script to call patrons with little +or no staff intervention. It is accessed and configured via a special +URL, with any parameters passed as a `Query String` on the URL. The +parameters are listed in the table below. + +.Parameters for the phonelist program: +|===================================== +| user | Your Evergreen login. Typically your library's circ account. If you leave this off, you will be prompted to login. +| passwd | The password for your Evergreen login. 
If you leave this off you will be prompted to login. +| ws_ou | The ID of the system or branch you want to generate the list for (optional). If your account does not have the appropriate permissions for the location whose ID number you have entered, you will get an error. +| skipemail | If present, skip patrons with email notification (optional). +| addcount | Add a count of items on hold (optional). Only makes sense for holds. +| overdue | Makes a list of patrons with overdues instead of holds. If an additional, numeric parameter is supplied, it will be used as the number of days overdue. If no such extra parameter is supplied, then the default of 14 days is used. +|===================================== + +The URL is + +`https://your.evergreen-server.tld/phonelist` + +A couple of examples follow: + +`https://your.evergreen-server.tld/phonelist?user=circuser&passwd=password&skipemail` + +The above example would sign in as user circuser with password of +`password` and get a list of patrons with holds to call who do not +have email notification turned on. It would run at whatever branch is +normally associated with circuser. + +`https://your.evergreen-server.tld/phonelist?skipemail` + +The above example would do more or less the same, but you would be +prompted by your browser for the user name and password. + +If your browser or download script support it, you may also use +conventional HTTP authentication parameters. + +`https://user:password@your.evergreen-server.tld/phonelist?overdue&ws_ou=2` + +The above logs in as `user` with `password` and runs overdues for location ID 2. + +The following sections provide more information on getting what you want in your output. + +== Adding Parameters == + +If you are not familiar with HTTP/URL query strings, the format is +quite simple. + +You add parameters to the end of the URL, the first parameter is +separated from the URL page with a question mark (`?`) character. 
If +the parameter is to be given an extra value, then that value follows +the parameter name after an equals sign (`=`). Subsequent parameters +are separated from the previous parameter by an ampersand (`&`). + +Here is an example with 1 parameter that has no value: + +`https://your.evergreen-server.tld/phonelist?skipemail` + +An example of 1 argument with a value: + +`https://your.evergreen-server.tld/phonelist?overdue=21` + +An example of 2 arguments, 1 with a value and 1 without: + +`https://your.evergreen-server.tld/phonelist?overdue=21&skipemail` + +Any misspelled parameters, or parameters not listed in the table above, will be +ignored by the program. + +== Output == + +On a successful run, the program will return a CSV file named +phone.csv. Depending on your browser and settings, you will either +be prompted to open or save the file, or your browser may +automatically save the file in your Downloads or other designated +folder. You should be able to open this CSV file in Excel, LibreOffice +Calc, any other spreadsheet program, or a text editor. + +If you have made a mistake and have mistyped your user name or +password, or if you supply a ws_ou parameter with an ID where your +user name does not have permission to look up holds or overdue +information, then you will get an error returned in your browser. + +Should your browser appear to do absolutely nothing at all, this is +normal. When there is no information for you to download, the server +will return a 204 No Content message to your browser. Most browsers +respond to this message by doing nothing at all. It is possible for +there to be no information for you to retrieve if you added the +`skipemail` option and all of your notices for that day were sent via +email, or if you ran this in the morning and then again in the +afternoon and there was no new information to gather. + +The program does keep track of any particular +hold or overdue it has already looked at, and will skip it on later runs. 
This prevents +duplicates to the same patron in the same run. It will, however, +create a `duplicate` for the same patron if a different item is put +on hold for that patron in between two runs. + +The specific content of the CSV file will vary if you are looking at +holds or overdues. The specific contents are described in the +appropriate sections below. + +== Holds == + +The `phonelist` program will return a list of patrons with items on +hold by default, so long as you do not use the `overdue` +parameter. You may optionally get a number of items that patron +currently has on hold by adding the `addcount` parameter. + +As always, you can add the skipemail parameter to skip patrons with +email notifications of their overdues, see xref:#skipping_patrons_with_email_notification_of_holds[Skipping patrons with email notification of holds] as described below. + + +.Columns in the holds CSV file: +|===================================== +| Name | Patron's name first and last. +| Phone | Patron's phone number. +| Barcode | Patron's barcode. +| Count | Number of items on hold, if `addcount` parameter is used, otherwise this column is not present in the file. +|===================================== + +== Overdues == + +If you add the `overdue` parameter, you can get a list of patrons with +overdue items instead of a list of patrons with items on the hold +shelf. By default, this will give you a list of patrons with items +that are 14 days overdue. If you'd like to specify a different number +of days you can add the number after the parameter with an equals +sign: + +`https://your.evergreen-server.tld/phonelist?overdue=21&ws_ou=2` + +The above will retrieve a list of patrons who have items that are 21 +days overdue at the location with ID of 2. + +The number of days is an exact lookup. This means that the program +will look only at patrons who have items exactly 14 days or exactly +the number of days specified overdue. 
It does not pull up any that are +less than or greater than the number of days specified. + +As always, you can add the skipemail parameter to skip patrons with +email notifications of their overdues; see xref:#skipping_patrons_with_email_notification_of_holds[Skipping patrons with email notification of holds] as described below. + +.Columns in the overdues CSV file: +|================================= +| Name | Patron's name, first and last. +| Phone | Patron's phone number. +| Barcode | Patron's barcode. +| Titles | A colon-separated list of titles that the patron has overdue. +|================================= + +[#skipping_patrons_with_email_notification_of_holds] +== Skipping patrons with email notification of holds == + +Skipping patrons who have email notification for their holds or +overdues is very simple. You just need to add the `skipemail` +parameter on the URL query string. Doing so will produce the list +without the patrons who have email notification for overdues, or for +all of their holds. Please note that if a patron has multiple holds +available, and even one of these holds requests a phone-only +notification, then that patron will still show on the list. For this +option to exclude a patron from the holds list, the patron must +request email notification on all of their current holds. In practice, +we find that this is usually the case. + +== Using the ws_ou parameter == + +Generally, you will not need to use the ws_ou parameter when using the +phonelist program. The phonelist will look up the branch where your +login account works and use that location when generating the list. +However, if you are part of a multi-branch system in a consortium, +then the ws_ou parameter will be of interest to you. You can use it +to specify which branch, or the whole system, you wish to search when +running the program. 
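Once a phone.csv has been downloaded, standard shell tools are enough to prepare input for a dialer. A minimal sketch, assuming the comma-separated Name, Phone, Barcode column layout described above (file names are examples):

```shell
# Pull the Phone column (2nd field) out of a downloaded phone.csv and
# de-duplicate it, so a household sharing one number is only called once.
cut -d',' -f2 phone.csv | sort -u > dialer-numbers.txt
```

Note that a name containing a comma would throw the column positions off; check your exported file before relying on a plain cut.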
+ +== Automating the download == + +If you'd like to automate the download of these files, you should be +able to do so using any HTTP programming toolkit. Your client must +accept cookies and follow any redirects in order to function. diff --git a/docs/modules/admin/pages/physical_char_wizard_db.adoc b/docs/modules/admin/pages/physical_char_wizard_db.adoc new file mode 100644 index 0000000000..c84ddea9e0 --- /dev/null +++ b/docs/modules/admin/pages/physical_char_wizard_db.adoc @@ -0,0 +1,21 @@ += Administering the Physical Characteristics Wizard = +:toc: + +indexterm:[Physical characteristics wizard] +indexterm:[MARC editor,configuring] + +The MARC 007 Field Physical Characteristics Wizard enables catalogers to interact with a +database wizard that leads the user step-by-step through the MARC 007 field positions. +The wizard displays the significance of the current position and provides dropdown lists +of possible values for the various components of the MARC 007 field in a more +user-friendly way. + +The information driving the MARC 007 Field Physical Characteristics Wizard is already a +part of the Evergreen database. This data can be customized by individual sites and / or +updated when the Library of Congress dictates new values or positions in the 007 field. +There are three relevant tables where the information that drives the wizard is stored: + +. *config.marc21_physical_characteristic_type_map* contains the list of materials, or values, for the positions of the 007 field. +. *config.marc21_physical_characteristic_subfield_map* contains rows that list the meaning of the various positions in the 007 field for each Category of Material. +. *config.marc21_physical_characteristic_value_map* lists all of the values possible for all of the positions in the config.marc21_physical_characteristic_subfield_map table. 
+ diff --git a/docs/modules/admin/pages/popularity_badges_web_client.adoc b/docs/modules/admin/pages/popularity_badges_web_client.adoc new file mode 100644 index 0000000000..4d0174eb27 --- /dev/null +++ b/docs/modules/admin/pages/popularity_badges_web_client.adoc @@ -0,0 +1,120 @@ += Statistical Popularity Badges = +:toc: + +Statistical Popularity Badges allow libraries to set popularity parameters that define popularity badges, which bibliographic records can earn if they meet the set criteria. Popularity badges can be based on factors such as circulation and hold activity, bibliographic record age, or material type. The popularity badges that a record earns are used to adjust catalog search results to display more popular titles (as defined by the badges) first. Within the OPAC there are two new sort options called "Most Popular" and "Popularity Adjusted Relevance" which will allow users to sort records based on the popularity assigned by the popularity badges. + +== Popularity Rating and Calculation == + +Popularity badge parameters define the criteria a bibliographic record must meet to earn the badge, as well as which bibliographic records are eligible to earn the badge. For example, the popularity parameter "Circulations Over Time" can be configured to create a badge that is applied to bibliographic records for DVDs. The badge can be configured to look at circulations within the last 2 years, but assign more weight or popularity to circulations from the last 6 months. + +Multiple popularity badges may be applied to a bibliographic record. For each applicable popularity badge, the record will be rated on a scale of 1-5, where a 5 indicates the most popular. Evergreen will then assign an overall popularity rating to each bibliographic record by averaging all of the popularity badge points earned by the record. 
The popularity rating is stored with the record and will be used to rank the record within search results when the popularity badge is within the scope of the search. The popularity badges are recalculated on a regular and configurable basis by a cron job. Popularity badges can also be recalculated by an administrator directly on the server. + +== Creating Popularity Badges == + +There are two main types of popularity badges: point-in-time popularity (PIT), which looks at the popularity of a record at a specific point in time—such as the number of current circulations or the number of open hold requests; and temporal popularity (TP), which looks at the popularity of a record over a period of time—such as the number of circulations in the past year or the number of hold requests placed in the last six months. + +The following popularity badge parameters are available for configuration: + +* Holds Filled Over Time (TP) +* Holds Requested Over Time (TP) +* Current Hold Count (PIT) +* Circulations Over Time (TP) +* Current Circulation Count (PIT) +* Out/Total Ratio (PIT) +* Holds/Total Ratio (PIT) +* Holds/Holdable Ratio (PIT) +* Percent of Time Circulating (Takes into account all circulations, not specific period of time) +* Bibliographic Record Age (days, newer is better) (TP) +* Publication Age (days, newer is better) (TP) +* On-line Bib has attributes (PIT) +* Bib has attributes and copies (PIT) +* Bib has attributes and copies or URIs (PIT) +* Bib has attributes (PIT) + +To create a new Statistical Popularity Badge: + +. Go to *Administration->Local Administration->Statistical Popularity Badges*. +. Click on *Actions->Add badge*. +. Fill out the following fields as needed to create the badge: ++ +NOTE: only Name, Scope, Weight, Recalculation Interval, Importance Interval, and Discard Value Count are required + + * *Name:* Library assigned name for badge. Each name must be unique. The name will show up in the OPAC record display. 
For example: Most Requested Holds for Books-Last 6 Months. Required field. + + * *Description*: Further information to provide context to staff about the badge. + + * *Scope:* Defines the owning organization unit of the badge. Badges will be applied to search result sorting when the Scope is equal to, or an ancestor, of the search location. For example, a branch specific search will include badges where the Scope is the branch, the system, and the consortium. A consortium level search, will include only badges where the Scope is set to the consortium. Item specific badges will apply only to records that have items owned at or below the Scope. Required field. + + * *Weight:* Can be used to indicate that a particular badge is more important than the other badges that the record might earn. The weight value serves as a multiplier of the badge rating. Required field with a default value of 1. + + * *Age Horizon:* Indicates the time frame during which events should be included for calculating the badge. For example, a popularity badge for Most Circulated Items in the Past Two Years would have an Age Horizon of '2 years'. The Age Horizon should be entered as a number followed by 'day(s)', 'month(s)', 'year(s)', such as '6 months' or '2 years'. Use with temporal popularity (TP) badges only. + + * *Importance Horizon:* Used in conjunction with Age Horizon, this allows more recent events to be considered more important than older events. A value of zero means that all events included by the Age Horizon will be considered of equal importance. With an Age Horizon of 2 years, an Importance Horizon of '6 months' means that events, such as checkouts, that occurred within the past 6 months will be considered more important than the circulations that occurred earlier within the Age Horizon. + + * *Importance Interval:* Can be used to further divide up the timeframe defined by the Importance Horizon. 
For example, if the Importance Interval is '1 month', Evergreen will combine all of the events within that month for adjustment by the Importance Scale (see below). The Importance Interval should be entered as a number followed by 'day(s)', 'week(s)', 'month(s)', 'year(s)', such as '6 months' or '2 years'. Required field. + + * *Importance Scale:* The Importance Scale can be used to assign additional importance to events that occurred within the most recent Importance Interval. For example, if the Importance Horizon is '6 months' and the Importance Interval is '1 month', the Importance Scale can be set to '6' to indicate that events that happened within the last month will count 6 times, events that happened 2 months ago will count 5 times, etc. The Importance Scale should be entered as a number, such as '6'. + + * *Percentile:* Can be used to assign a badge to only the records that score above a certain percentile. For example, it can be used to indicate that you only want to assign the badge to records in the top 5% of results by setting the field to '95'. To optimize the popularity badges, the percentile should be set between 95 and 99 to assign a badge to the top 5%-1% of records. + + * *Attribute Filter:* Can be used to assign a badge to records that contain a specific Record Attribute. Currently this field can be configured by running a report (see note below) to obtain the JSON data that identifies the Record Attribute. The JSON data from the report output can be copied and pasted into this field. A new interface for creating Composite Record Attributes will be implemented with future development of the web client. + ** To run a report to obtain JSON data for the Attribute Filter, use SVF Record Attribute Coded Value Map as the template Source. For Displayed Fields, add Code, ID, and/or Description from the Source; also display the Definition field from the Composite Definition linked table.
This field will display the JSON data in the report output. Filter on the Definition from the Composite Definition linked table and set the Operator to 'Is not NULL'. + + * *Circ Mod Filter:* Apply the badge only to items with a specific circulation modifier. Applies only to item-related badges as opposed to "bib record age" badges, for example. + + * *Bib Source Filter:* Apply the badge only to bibliographic records with a specific source. + + * *Location Group Filter:* Apply the badge only to items that are part of the specified Shelving Location Group. Applies only to item-related badges. + + * *Recalculation Interval:* Indicates how often the popularity value of the badge should be recalculated for bibliographic records that have earned the badge. Recalculation is controlled by a cron job. Required field with a default value of 1 month. + + * *Fixed Rating:* Can be used to set a fixed popularity value for all records that earn the badge. For example, the Fixed Rating can be set to 5 to indicate that records earning the badge should always be considered extremely popular. + + * *Discard Value Count:* Can be used to prevent certain records from earning the badge to make Percentile more accurate by discarding titles that are below the value indicated. For example, if the badge looks at the circulation count over the past 6 months, Discard Value Count can be used to eliminate records that had too few circulations to be considered "popular". If you want to discard records that only had 1-3 circulations over the past 6 months, the Discard Value Count can be set to '3'. Required field with a default value of 0. + + * *Last Refresh Time:* Displays the last time the badge was recalculated based on the Recalculation Interval. + + * *Popularity Parameter:* Types of TP and PIT factors described above that can be used to create badges to assign popularity to bibliographic records. + +. Click *OK* to save the badge.
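Taken together, *Percentile* and *Discard Value Count* determine which records earn a badge. The sketch below illustrates that selection logic only; it is not Evergreen's implementation, and the function name and the exact cutoff arithmetic are assumptions:

```python
# Illustrative sketch of how Percentile and Discard Value Count interact
# when deciding which records earn a badge. NOT Evergreen's implementation.

def records_earning_badge(event_counts, percentile=95, discard_value_count=3):
    """event_counts maps record id -> event count (e.g. circs in 6 months).
    Returns the set of record ids that would earn the badge."""
    # Discard Value Count: drop records with too few events to be "popular",
    # so rarely used titles do not skew the percentile calculation.
    kept = {rid: n for rid, n in event_counts.items() if n > discard_value_count}
    if not kept:
        return set()
    # Percentile: badge only records scoring at or above the cutoff.
    scores = sorted(kept.values())
    cutoff = scores[min(int(len(scores) * percentile / 100), len(scores) - 1)]
    return {rid for rid, n in kept.items() if n >= cutoff}


# 100 records with 1..100 circulations each; badge the 95th percentile.
counts = {f"rec{n}": n for n in range(1, 101)}
top = records_earning_badge(counts, percentile=95, discard_value_count=3)
```

With these inputs only the five most-circulated records clear the cutoff, which matches the intent described above of badging just the top few percent of titles.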
+ + +== New Global Flags == + +OPAC Default Sort: can be used to set a default sort option for the catalog. Users can always override the default by manually selecting a different sort option while searching. + +Maximum Popularity Importance Multiplier: used with the Popularity Adjusted Relevance sort option in the OPAC. Provides a scaled adjustment to relevance score based on the popularity rating earned by bibliographic records. See below for more information on how this flag is used. + +== Sorting by Popularity in the OPAC == + +Within the stock OPAC template there is a new option for sorting search results called "Most Popular". Selecting "Most Popular" will first sort the search results based on the popularity rating determined by the popularity badges and will then apply the default "Sort by Relevance". This option will maximize the popularity badges and ensure that the most popular titles appear higher up in the search results. + +There is a second new sort option called "Popularity Adjusted Relevance", which can be used to find a balance between popularity and relevance in search results. For example, it can help ensure that records that are popular, but not necessarily relevant to the search, do not supersede records that are both popular and relevant in the search results. It does this by sorting search results using an adjusted version of Relevance sorting. When sorting by relevance, each bibliographic record is assigned a baseline relevance score between 0 and 1, with 0 being not relevant to the search query and 1 being a perfect match. With "Popularity Adjusted Relevance" the baseline relevance is adjusted by a scaled version of the popularity rating assigned to the bibliographic record. The scaled adjustment is controlled by a Global Flag called "Maximum Popularity Importance Multiplier" (MPIM). 
The MPIM takes the average popularity rating of a bibliographic record (1-5) and creates a scaled adjustment that is applied to the baseline relevance for the record. The adjustment can be between 1.0 and the value set for the MPIM. For example, if the MPIM is set to 1.2, a record with an average popularity badge score of 5 (maximum popularity) would have its relevance multiplied by 1.2—in effect giving it the maximum increase of 20% in relevance. If a record has an average popularity badge score of 2.5, the baseline relevance of the record would be multiplied by 1.1 (due to the popularity score scaling the adjustment to halfway between 1.0 and the MPIM of 1.2) and the record would receive a 10% increase in relevance. A record with a popularity badge score of 0 would be multiplied by 1.0 (due to the popularity score being 0) and would not receive a boost in relevance. + +== Popularity Badge Example == + +A popularity badge called "Long Term Holds Requested" has been created which has the following parameters: + +Popularity Parameter: Holds Requested Over Time +Scope: CONS +Weight: 1 (default) +Age Horizon: 5 years +Percentile: 99 +Recalculation Interval: 1 month (default) +Discard Value Count: 0 (default) + +This popularity badge will rate bibliographic records based on the number of holds that have been placed on them over the past 5 years and will only apply the badge to the top 1% of records (99th percentile). + +If a keyword search for harry potter is conducted and the sort option "Most Popular" is selected, Evergreen will apply the popularity rankings earned from badges to the search results. + +image::media/popbadge1_web_client.PNG[popularity badge search] + +Title search: harry potter. Sort by: Most Popular. + +image::media/popbadge2_web_client.PNG[popularity badge search results] + +The popularity badge also appears in the bibliographic record display in the catalog.
The name of the badge earned by the record and the popularity rating are displayed in the Record Details. + +A popularity badge of 5.0/5.0 has been applied to the most popular bibliographic records where the search term "harry potter" is found in the title. In the image above, the popularity badge has identified records from the Harry Potter series by J.K. Rowling as the most popular titles matching the search and has listed them first in the search results. + +image::media/popbadge3_web_client.PNG[popularity badge bib record display] diff --git a/docs/modules/admin/pages/purge_holds.adoc b/docs/modules/admin/pages/purge_holds.adoc new file mode 100644 index 0000000000..bb201cf0d8 --- /dev/null +++ b/docs/modules/admin/pages/purge_holds.adoc @@ -0,0 +1,15 @@ +== Purging holds == + +Similar to purging circulations, one may wish to purge old (filled or canceled) hold information. This feature adds a database function and +settings for doing so. + +Purged holds are moved to the _action.aged_hold_request_ table with patron identifying information scrubbed, much like circulations are moved +to _action.aged_circulation_. + +The settings allow for a default retention age as well as distinct retention ages for holds filled, holds canceled, and holds canceled by +specific cancel causes. The most specific one wins unless a patron is retaining their hold history. In the latter case, the patron's holds +are retained either way. + +Note that the function still needs to be called, which could be set up as a cron job or done more manually, say after statistics collection. +You can use the _purge_holds.srfsh_ script to purge holds from cron.
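The retention rules described above ("most specific wins", and a patron retaining hold history always wins) can be sketched as follows. The setting names and hold fields here are illustrative assumptions for the sketch, not Evergreen's actual setting names or schema:

```python
# Sketch of the retention-age selection described above: the most specific
# setting wins, and a patron who retains hold history trumps everything.
# Setting names and hold keys are illustrative, not Evergreen's real names.

def retention_age(hold, settings):
    """Return the retention age to apply to a purgeable hold,
    or None if the hold must be kept (patron retains hold history)."""
    if hold.get("patron_keeps_hold_history"):
        return None                                    # always retained
    cause = hold.get("cancel_cause")
    if cause is not None:
        by_cause = settings.get("age_by_cancel_cause", {})
        if cause in by_cause:
            return by_cause[cause]                     # most specific setting
        return settings.get("canceled_age", settings["default_age"])
    if hold.get("fulfilled"):
        return settings.get("filled_age", settings["default_age"])
    return settings["default_age"]                     # fallback default
```

For example, a hold canceled with a cause that has its own retention age uses that age, a generically canceled hold uses the canceled-holds age, and a filled hold uses the filled-holds age.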
+ + diff --git a/docs/modules/admin/pages/purge_user_activity.adoc b/docs/modules/admin/pages/purge_user_activity.adoc new file mode 100644 index 0000000000..bd39954229 --- /dev/null +++ b/docs/modules/admin/pages/purge_user_activity.adoc @@ -0,0 +1,36 @@ +== Purge User Activity == + +User activity types are now set to transient by default for new +Evergreen installs. This means only the most recent activity entry per +user per activity type is retained in the database. + +.Use case +**** + +Setting more user activity types to transient collects less patron data, which helps +protect patron privacy. Additionally, the _actor.usr_activity_ table +gets really big really fast if all event types are non-transient. + +**** + +This change does not affect existing activity types, which were set to +non-transient by default. To make an activity type transient, modify the +'Transient' field of the desired type in the staff client under Admin -> +Server Administration -> User Activity Types. + +Setting an activity type to transient means data for a given user will +be cleaned up automatically if and when the user performs the activity +in question. However, administrators can also force an activity +cleanup via SQL. This is useful for ensuring that all old activity +data is deleted and for controlling when the cleanup occurs, which +may be useful on very large actor.usr_activity tables. + +To force clean all activity types: + +[source,sql] +------------------------------------------------------------ +SELECT actor.purge_usr_activity_by_type(etype.id) + FROM config.usr_activity_type etype; +------------------------------------------------------------ + +NOTE: This could take hours to run on a very large actor.usr_activity table.
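The effect of a transient activity type (only the most recent entry per user per activity type survives) can be sketched in a few lines. This is an illustration of the retention rule only, not Evergreen's code:

```python
# Minimal illustration of what a transient User Activity Type does:
# only the most recent entry per (user, activity type) is retained.
# NOT Evergreen's code; just a model of the retention rule.

def transient_prune(rows):
    """rows: iterable of (user_id, activity_type_id, timestamp) tuples.
    Returns the rows a fully transient configuration would retain."""
    latest = {}
    for user_id, etype, ts in rows:
        key = (user_id, etype)
        # Keep only the newest timestamp seen for this user/type pair.
        if key not in latest or ts > latest[key]:
            latest[key] = ts
    return {(u, t, ts) for (u, t), ts in latest.items()}
```

Given two logins by the same user, only the later one survives; activity of other users or other types is untouched.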
diff --git a/docs/modules/admin/pages/qstore_service.adoc b/docs/modules/admin/pages/qstore_service.adoc new file mode 100644 index 0000000000..62829869db --- /dev/null +++ b/docs/modules/admin/pages/qstore_service.adoc @@ -0,0 +1,5 @@ +== QStore service == + +The QStore service is used by the user buckets feature +in the Web client. + diff --git a/docs/modules/admin/pages/receipt_template_editor.adoc b/docs/modules/admin/pages/receipt_template_editor.adoc new file mode 100644 index 0000000000..d88b249249 --- /dev/null +++ b/docs/modules/admin/pages/receipt_template_editor.adoc @@ -0,0 +1,246 @@ += Print (Receipt) Templates = +:toc: + +indexterm:[web client, receipt template editor] +indexterm:[print templates] +indexterm:[web client, print templates] +indexterm:[receipt template editor] +indexterm:[receipt template editor, macros] +indexterm:[receipt template editor, checkout] + +The print templates follow W3C HTML standards (see +http://w3schools.com/html/default.asp) and can make use of CSS and +https://angularjs.org[Angular JS] to a certain extent. + +The Receipt Template Editor can be found at: *Administration -> Workstation -> +Print Templates* + +The Editor can also be found on the default home page of the staff client. + +Receipts come in various types: bills, checkout, items, holds, transits, and +payments. + +== Receipt Templates == +This is a complete list of the receipts currently in use in Evergreen. + +[horizontal] +.List of Receipts +*Bills, Current*:: Listing of current bills on an account. +*Bills, Historic*:: Listing of bills that have had payments made on them. This is + used on the Bill History Transaction screen. +*Bills, Payment*:: Patron payment receipt. +*Checkin*:: List of items that have been entered into the check-in screen. +*Checkout*:: List of items currently checked out by a patron during the transaction. +*Hold Transit Slip*:: This is printed when a hold goes in-transit to another library.
+*Hold Shelf Slip*:: This prints when a hold is fulfilled. +*Holds for Bib Record*:: Prints a list of holds on a Title record. +*Holds for Patron*:: Prints a list of holds on a patron record. +*Hold Pull List*:: Prints the Holds Pull List. +*Hold Shelf List*:: Prints a list of holds that are waiting to be picked up. +*In-House Use List*:: Prints a list of items entered into In-House Use. +*Item Status*:: Prints a list of items entered into Item Status. +*Items Out*:: Prints the list of items a patron has checked out. +*Patron Address*:: Prints the current patron's address. +*Patron Note*:: Prints a note on a patron's record. +*Renew*:: List of items that have been renewed using the Renew Item Screen. +*Transit List*:: Prints the list of items in-transit from the Transit List. +*Transit Slip*:: This is printed when an item goes in-transit to another location. + + +== Editing Receipts == + +To edit a Receipt: + +. Select *Administration -> Workstation -> Print Templates*. + +. Choose the Receipt in the drop down list. +. If you are using Hatch, you can choose different printers for different types + of receipts with the Force Content field. If not, leave that field blank. + Printer Settings can be set at *Administration -> Workstation -> Printer + Settings*. ++ +image::media/receipt1.png[select checkout] ++ +. Make edits to the Receipt on the right-hand side. ++ +image::media/receipt2.png[receipt screen] ++ +. Click out of the section you are editing to see what your changes will look + like on the left-hand side. +. Click *Save Locally* in the upper right-hand corner. + + +=== Formatting Receipts === + +Print templates use variables for various pieces of information coming from the +Evergreen database. These variables deal with everything from the library name +to the due date of an item. Information from the database is entered in the +templates with curly brackets `{{term}}`.
+ +Example: `{{checkout.title}}` + +Some print templates have sections that are repeated for each item in a list. +For example, the portion of the Checkout print template below repeats every item +that is checked out in HTML list format by means of the 'ng-repeat' in the li +tag. + +------ +
+<ol>
+<li ng-repeat="checkout in circulations">
+{{checkout.title}}<br/>
+Barcode: {{checkout.copy.barcode}}<br/>
+Due: {{checkout.circ.due_date | date:"short"}}
+</li>
+</ol>
+------ + +=== Text Formatting === + +General text formatting +|======================================================================================== +| Goal | Original | Code | Result +| Bold (HTML) | hello | <strong>hello</strong> | *hello* +| Bold (CSS) | hello | <span style="font-weight:bold;">hello</span> | *hello* +| Capitalize | circulation | <span style="text-transform:capitalize;">circulation</span> | Circulation +| Currency | 1 | {{1 \| currency}} | $1.00 +|======================================================================================== + +=== Date Formatting === + +If you do not format dates, they will appear in a system format which isn't +easily readable. + +|=================================================== +| Code | Result +|{{today}} | 2017-08-01T14:18:51.445Z +|{{today \| date:'short'}} | 8/1/17 10:18 AM +|{{today \| date:'M/d/yyyy'}} | 8/1/2017 +|=================================================== + +=== Currency Formatting === + +Add " | currency" after any dollar amount that you wish to display as currency. + +Example: +`{{xact.summary.balance_owed | currency}}` prints as `$2.50` + + +=== Conditional Formatting === + +You can use Angular JS to only print a line if the data matches. For example: + +`<div ng-if="hold.email_notify == 't'">Notify by email: {{patron.email}}</div>` + +This will only print the "Notify by email:" line if email notification is +enabled for that hold. + +Example for checkout print template that will only print the amount a patron +owes if there is a balance: + +`<div ng-if="patron_money.balance_owed">You owe the library +${{patron_money.balance_owed}}</div>` + +See also: https://docs.angularjs.org/api/ng/directive/ngIf + +=== Substrings === + +To print just a sub-string of a variable, you can use a *limitTo* function. +`{{variable | limitTo:limit:begin}}` where *limit* is the number of characters +you are wanting, and *begin* (optional) is the zero-based index at which to start printing +those characters. To limit the variable to the first four characters, you can +use `{{variable | limitTo:4}}` to get "vari". To limit to the last five +characters you can use `{{variable | limitTo:-5}}` to get "iable". And +`{{variable | limitTo:3:3}}` will produce "iab". + +|======================================================================================== +| Original | Code | Result +| The Sisterhood of the Traveling Pants | {{checkout.title \| limitTo:10}} | The Sister +| 123456789 | {{patron.card.barcode \| limitTo:-5}} | 56789 +| Roberts | {{patron.family_name \| limitTo:3:3}} | ert +|======================================================================================== + + +=== Images === + +You can use HTML and CSS to add an image to your print template if you have the +image uploaded onto a publicly available web server. (It will currently only +work with images on a secure (https) site.) For example: + +`<img src="https://example.org/logo.png"/>` + +=== Sort Order === + +You can sort the items in an ng-repeat block using orderBy. For example, the +following will sort a list of holds by the shelving location first, then by the +call number: + +`<div ng-repeat="hold_data in holds | orderBy : ['hold.shelving_location','hold.call_number']">` + +=== Subtotals === + +You can use Angular JS to add information from each iteration of a loop together +to create a subtotal.
This involves setting an initial variable before the +ng-repeat loop begins, adding an amount to that variable from within each loop, +and then displaying the final amount at the end. + +------ +
+<div ng-init="transactions.subtotal=0"> <!--1-->
+You checked out the following items:<br/>
+<hr/>
+<ol>
+<li ng-repeat="checkout in circulations"
+ ng-init="transactions.subtotal=transactions.subtotal + checkout.copy.price"> <!--2-->
+{{checkout.title}}<br/>
+Barcode: {{checkout.copy.barcode}}<br/>
+Due: {{checkout.circ.due_date | date:"M/d/yyyy"}}
+</li>
+</ol>
+<hr/>
+Total Amount Owed: {{patron_money.balance_owed | currency}}<br/>
+You Saved<br/>
+{{transactions.subtotal | currency}}<br/> <!--3-->
+by borrowing from the library!
+</div>
+------ +<1> This line sets the variable. +<2> This adds the list item's price to the variable. +<3> This prints the total of the variable. + +== Exporting and importing Customized Receipts == + +Once you have your receipts set up on one machine you can export your receipts, +and then load them on to another machine. Just remember to *Save Locally* +once you import the receipts on the new machine. + +=== Exporting templates === +As you can only save a template onto the computer you are working on, you will +need to export the template if you have more than one computer that prints out +receipts (i.e., more than one computer on the circulation desk, or another +computer in the workroom that you use to check in items or capture holds with). + +. Click *Export*. +. Select the location to save the template to, name the template, and click +*Save*. +. Click OK. + +=== Importing Templates === + +. Click Import. +. Navigate to and select the template that you want to import. Click Open. +. Click OK. +. Click *Save Locally*. +. Click OK. + + +WARNING: Clearing your browser's cache/temporary files will clear any print +template customizations that you make unless you are using Hatch to store your +customizations. Be sure to export a copy of your customizations as a backup so +that you can import it as needed. + +TIP: If you are modifying your templates and you do not see the updates appear +on your printed receipt, you may need to go into *Administration -> Workstation +-> Stored Preferences* and delete the stored preferences related to the print +template that you modified (for example, eg.print.template_context.bills_current).
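To sanity-check the *limitTo* examples in the Substrings section outside of Evergreen, the filter's behavior on strings (per the AngularJS documentation: a positive limit takes characters starting at the optional zero-based begin index, a negative limit takes characters from the end) can be approximated in a few lines. Python is used here purely for illustration:

```python
# Approximation of AngularJS's limitTo filter on strings, per the
# AngularJS docs: positive limit takes `limit` characters starting at
# the zero-based `begin` index; negative limit takes that many
# characters from the end of the string. Illustration only.

def limit_to(value, limit, begin=0):
    s = str(value)
    if limit >= 0:
        return s[begin:begin + limit]
    return s[limit:]  # negative limit: last |limit| characters
```

For example, `limit_to("variable", 4)` gives "vari" and `limit_to("variable", -5)` gives "iable", matching the first- and last-N examples above.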
diff --git a/docs/modules/admin/pages/restrict_Z39.50_sources_by_perm_group.adoc b/docs/modules/admin/pages/restrict_Z39.50_sources_by_perm_group.adoc new file mode 100644 index 0000000000..d8f0ff5789 --- /dev/null +++ b/docs/modules/admin/pages/restrict_Z39.50_sources_by_perm_group.adoc @@ -0,0 +1,67 @@ += Z39.50 Servers = +:toc: + +== Restrict Z39.50 Sources by Permission Group == + +In Evergreen versions preceding 2.2, all users with cataloging privileges could view all of the Z39.50 servers that were available for use in the staff client. In Evergreen version 2.2, you can use a permission to restrict users' access to Z39.50 servers. You can apply a permission to a Z39.50 server to restrict access to that server, and then assign that permission to users or groups so that they can access the restricted servers. + +=== Administrative Settings === + +You can add a permission to limit use of Z39.50 servers, or you can use an existing permission. + +NOTE: You must be authorized to add permission types at the database level to add a new permission. + +Add a new permission: + +1) Create a permission at the database level. + +2) Click *Administration -> Server Administration -> Permissions* to add a permission to the staff client. + +3) In the *New Permission* field, enter the text that describes the new permission. + +image::media/Restrict_Z39_50_Sources_by_Permission_Group2.png[Create new permission to limit use of Z39.50 servers] + +4) Click *Add*. + +5) The new permission appears in the list of permissions. + + + +=== Restrict Z39.50 Sources by Permission Group === + +1) Click *Administration -> Server Administration -> Z39.50 Servers* + +2) Click *New Z39.50 Server*, or double click on an existing Z39.50 server to restrict its use. + +3) Select the permission that you added to restrict Z39.50 use from the drop down menu. + +image::media/Restrict_Z39_50_Sources_by_Permission_Group1.jpg[] + +4) Click *Save*.
+ +5) Add the permission that you created to a user or user group so that they can access the restricted server. + + +image::media/Restrict_Z39_50_Sources_by_Permission_Group3.jpg[] + +6) Users that log in to the staff client and have that permission will be able to see the restricted Z39.50 server. + +NOTE: As an alternative to creating a new permission to restrict use, you can use a preexisting permission. For example, your library uses a permission group called SuperCat, and only members in this group should have access to a restricted Z39.50 source. Identify a permission that is unique to the SuperCat group (e.g. CREATE_MARC) and apply that permission to the restricted Z39.50 server. Because these users are in the only group with the permission, they will be the only group with access to the restricted server. + + +== Storing Z39.50 Server Credentials == + +Staff have the option to apply Z39.50 login credentials to each Z39.50 server at different levels of the organizational unit hierarchy. Credentials can be set at the library branch or system level, or for an entire consortium. When credentials are set for a Z39.50 server, searches of the Z39.50 server will use the stored credentials. If a staff member provides alternate credentials in the Z39.50 search interface, the supplied credentials will override the stored ones. Staff have the ability to apply new credentials or clear existing ones in this interface. For security purposes, it is not possible for staff to retrieve or report on passwords. + + +To set up stored credentials for a Z39.50 server: + +1) Go to *Administration -> Server Administration -> Z39.50 Servers*. + +2) Select a *Z39.50 Source* by clicking on the hyperlinked source name. This will take you to the Z39.50 Attributes for the source. + +3) At the top of the screen, select the *org unit* for which you would like to configure the credentials. + +4) Enter the *Username* and *Password*, and click *Apply Credentials*.
+ +image::media/storing_z3950_credentials.jpg[Storing Z39.50 Credentials] diff --git a/docs/modules/admin/pages/schema_bibliographic.adoc b/docs/modules/admin/pages/schema_bibliographic.adoc new file mode 100644 index 0000000000..dad062326f --- /dev/null +++ b/docs/modules/admin/pages/schema_bibliographic.adoc @@ -0,0 +1,14 @@ += Notes about the Bibliographic Schema in the Database = +:toc: + +== Bibliographic fingerprint == + +Evergreen creates a fingerprint for each bib record, which can be found in the `fingerprint` column of the `biblio.record_entry` table. +This fingerprint is used to group together different bib records in a Group Formats & Editions search in the public catalog. + +The bibliographic fingerprint incorporates several subfields to distinguish between different items, including: + +* $n and $p from MARC title fields to better distinguish among records of the same series that may share the same title but have a different part. + +The bibliographic fingerprint distinguishes among the fields contributing to the fingerprint. This helps the system distinguish between a record +for the movie _Blue Steel_ and another record for the book _Blue_ written by Danielle _Steel_. diff --git a/docs/modules/admin/pages/search_interface.adoc b/docs/modules/admin/pages/search_interface.adoc new file mode 100644 index 0000000000..225aec3b36 --- /dev/null +++ b/docs/modules/admin/pages/search_interface.adoc @@ -0,0 +1,118 @@ += Designing the patron search experience = +:toc: + +== Editing the formats select box options in the search interface == + +You may wish to remove, rename, or organize the options in the formats select +box. This can be accomplished from the staff client. + +. From the staff client, navigate to *Administration -> Server Administration -> MARC Coded +Value Maps* +. Select _Type_ from the *Record Attribute Type* select box. +. Double click on the format type you wish to edit.
+ +image::media/coded-value-1.png[Coded Value Map Format Editor] + +To change the label for the type, enter a value in the *Search Label* field. + +To move the option to a top list separated by a dashed line from the others, +check the *Is Simple Selector* check box. + +To hide the type so that it does not appear in the search interface, uncheck the +*OPAC Visible* checkbox. + +Changes will be immediate. + +== Adding and removing search fields in advanced search == + +It is possible to add and remove search fields on the advanced search page by +editing the _opac/parts/config.tt2_ file in your template directory. Look for +this section of the file: + +---- +search.adv_config = [ + {adv_label => l("Item Type"), adv_attr => ["mattype", "item_type"]}, + {adv_label => l("Item Form"), adv_attr => "item_form"}, + {adv_label => l("Language"), adv_attr => "item_lang"}, + {adv_label => l("Audience"), adv_attr => ["audience_group", "audience"], adv_break => 1}, + {adv_label => l("Video Format"), adv_attr => "vr_format"}, + {adv_label => l("Bib Level"), adv_attr => "bib_level"}, + {adv_label => l("Literary Form"), adv_attr => "lit_form", adv_break => 1}, + {adv_label => l("Search Library"), adv_special => "lib_selector"}, + {adv_label => l("Publication Year"), adv_special => "pub_year"}, + {adv_label => l("Sort Results"), adv_special => "sort_selector"}, +]; +---- + +For example, if you delete the line: + +---- +{adv_label => l("Language"), adv_attr => "item_lang"}, +---- + +the language field will no longer appear on your advanced search page. Changes +will appear immediately after you save your changes. + +You can also add fields based on Search Facet Groups that you create in the +staff client's Local Administration menu. This can be helpful if you want to +simplify your patrons' experience by presenting them with only certain +limiters (e.g. the most commonly used languages in your area). To do this, + +. 
Click *Administration -> Local Administration -> Search Filter Groups*. +. Click *New*. +. Enter descriptive values into the code and label fields. The owner needs to +be set to your consortium. +. Once the Facet Group is created, click on the blue hyperlinked code value. +. Click the *New* button to create the necessary values for your field. +. Go to the _opac/parts/config.tt2_ file, and add a line like the following, +where *Our Library's Field* is the name you'd like to be displayed next to +your field, and *facet_group_code* is the code you've added using the staff +client. ++ +---- + {adv_label => l("Our Library's Field"), adv_filter => "facet_group_code"}, +---- + +== Changing the display of facets and facet groups == + +Facets can be reordered on the search results page by editing the +_opac/parts/config.tt2_ file in your template directory. + +Edit the following section of _config.tt2_, changing the order of the facet +categories according to your needs: + +---- + +facet.display = [ + {facet_class => 'author', facet_order => ['personal', 'corporate']}, + {facet_class => 'subject', facet_order => ['topic']}, + {facet_class => 'series', facet_order => ['seriestitle']}, + {facet_class => 'subject', facet_order => ['name', 'geographic']} +]; + +---- + +You may also change the default number of facets appearing under each category +by editing the _facet.default_display_count_ value in _config.tt2_. The default +value is 5. + +== Facilitating search scope changes == + +Users often search in a limited scope, such as only searching items in their +local library. When they aren't able to find materials that meet their needs in +a limited scope, they may wish to repeat their search in a system-wide or +consortium-wide scope. Evergreen provides an optional button and checkbox +to alter the depth of the search to a defined level. + +The button and checkbox are both enabled by default and can be configured +in the Depth Button/Checkbox section of config.tt2.
+ +Noteworthy settings related to these features include: + +* `ctx.depth_sel_checkbox` -- set this to 1 to display the checkbox, 0 to hide it. +* `ctx.depth_sel_button` -- set this to 1 to display the button, 0 to hide it. +* `ctx.depth_sel_depth` -- the depth that should be applied by the button and +checkbox. A value of 0 would typically search the entire consortium, and 1 would +typically search the library's system. + + diff --git a/docs/modules/admin/pages/search_settings_web_client.adoc b/docs/modules/admin/pages/search_settings_web_client.adoc new file mode 100644 index 0000000000..d454ec8e57 --- /dev/null +++ b/docs/modules/admin/pages/search_settings_web_client.adoc @@ -0,0 +1,60 @@ +== Adjusting Relevance Ranking and Indexing == + +=== Metabib Class FTS Config Maps === + +NOTE: These settings will apply to all libraries in your +consortium. There is no way to apply these settings to +only one library or branch. + +* _Field Class_ - Reference to a field defined in + Administration > Server Administration > MARC + Search/Facet Classes. +* _Text Search Config_ - Which Text Search config to use. +* _Active_ - Check this checkbox to use this configuration + for searching and indexing. +* _Index Weight_ - The FTS index weight to use for this + FTS config. Should be A, B, C, or D; defaults to C. + You can see the exact numeric values for A, B, C, and + D in Administration > Server Administration > MARC + Search/Facet Classes. +* _Index Language_ - An optional 3-letter code + representing the language the record should be set to + in order for this FTS config to be used for indexing. +* _Search Language_ - An optional 3-letter code representing + what preferred language search should be selected by the + end-user in order for this FTS config to be applied to + their search. +* _Always Use_ - Check this checkbox to override the + configuration for a more specific field.
For example,
+if you check this box when entering a setting for the
+_author_ metabib class, it will override any settings
+you have made for the _author|personal_ field in
+the Administration > Server Administration > Metabib
+Field FTS Config Maps screen.
+
+=== Metabib Field FTS Config Maps ===
+
+NOTE: These settings will apply to all libraries in your
+consortium. There is no way to apply these settings to
+only one library or branch.
+
+* _Metabib Field_ - Reference to a field defined in
+  Administration > Server Administration > MARC
+  Search/Facet Fields.
+* _Text Search Config_ - Which Text Search config to use.
+* _Active_ - Check this checkbox to use this configuration
+  for searching and indexing.
+* _Index Weight_ - The FTS index weight to use for this
+  FTS config. Should be A, B, C, or D; defaults to C.
+  You can see the exact numeric values for A, B, C, and
+  D in Administration > Server Administration > MARC
+  Search/Facet Classes.
+* _Index Language_ - An optional 3-letter code
+  representing the language the record should be set to
+  in order for this FTS config to be used for indexing.
+* _Search Language_ - An optional 3-letter code representing
+  what preferred language search should be selected by the
+  end-user in order for this FTS config to be applied to
+  their search.
diff --git a/docs/modules/admin/pages/security.adoc b/docs/modules/admin/pages/security.adoc
new file mode 100644
index 0000000000..35414d58cf
--- /dev/null
+++ b/docs/modules/admin/pages/security.adoc
@@ -0,0 +1,32 @@
+= Keeping Evergreen Current and Secure =
+:toc:
+
+== Introduction ==
+
+When it comes to running an Evergreen system, there are two special areas of concern:
+
+* How and when you decide to upgrade Evergreen software or apply fixes
+* How to take care of the actual server(s) that your Evergreen system uses
+
+The following hints will help you cope with these challenges.
+
+== Upgrading the Evergreen software ==
+
+The Evergreen community at large has agreed upon an upgrade cycle that produces new major releases twice a year, in Spring and Fall. Major releases can contain new features. The community supports each major release with 12 subsequent monthly minor releases that contain only bug fixes, and continues to provide security fixes if necessary for an additional three months after the end of the regular minor bug fix support, for a total of 15 months of support for each major release.
+
+As a general rule, as the Evergreen community releases each new version of the Evergreen software, they also provide a guideline on how to upgrade from the previous release as part of the official Evergreen documentation at http://docs.evergreen-ils.org. Follow the instructions exactly and in the order that they are given--and if you run into a problem, report it to the community with as much detail about the error message or symptoms of the problem as you can.
+
+Keep the Evergreen release schedule in mind when planning your own testing and upgrade schedules. If you participate in testing new Evergreen releases during the release candidate stages, you will prepare your own library for the upgrade process and help flush out any remaining bugs before the major release of the software. This also gives you time to prepare the members of your library for the upcoming changes by giving them the chance, when possible, to familiarize themselves with new features on your test system. You also have the chance to prepare supporting materials, like handouts and other kinds of documentation, to help your users before, during and after each upgrade cycle.
+
+== Securing the server(s) on which your Evergreen installation runs ==
+
+An Evergreen installation requires interaction between many different components and, depending on the size of your consortium and how many servers you have, it can range from quite complex to extremely complex.
That said, there are a number of standard guidelines that you can follow to secure your server.
+
+* Keep your server up-to-date. Apply security updates as soon as possible when they come out to prevent your system from being exposed to a known vulnerability.
+* Pay close attention to account administration on the server. Do not give any user on the server more power than they need.
+* Disable services that you do not need.
+* Pay attention to your system's log files to see what kind of activity is happening and notice anything unusual.
+* A central idea of server security is to make it unreasonably difficult for anyone who tries to compromise your system. Let them choose targets more vulnerable than yours.
+
+This topic is very rich and there are many resources available, both in print and on the web. It is worth your time to learn more.
+
diff --git a/docs/modules/admin/pages/sip_server.adoc b/docs/modules/admin/pages/sip_server.adoc
new file mode 100644
index 0000000000..1e8479baa3
--- /dev/null
+++ b/docs/modules/admin/pages/sip_server.adoc
@@ -0,0 +1,721 @@
+= SIP Server =
+:toc:
+
+== About the SIP Protocol ==
+
+indexterm:[Automated Circulation System]
+indexterm:[SelfCheck]
+indexterm:[Automated Material Handling]
+
++SIP+, standing for +Standard Interchange Protocol+, was developed by the +3M corporation+ to be a common
+protocol for data transfer between ILSs (referred to in +SIP+ as an _ACS_, or _Automated Circulation System_) and a
+third-party device. Originally, the protocol was developed for use with _3M SelfCheck_ (often abbreviated SC, not to
+be confused with Staff Client) systems, but has since expanded to other companies and devices. It is now common
+to find +SIP+ in use in several other vendors' SelfCheck systems, as well as other non-SelfCheck devices.
Some
+examples include:
+
+* Patron Authentication (computer access, subscription databases)
+* Automated Material Handling (AMH)
+** The automated sorting of items, often to bins or book carts, based on shelving location or other programmable
+criteria
+
+== Installing the SIP Server ==
+
+
+
+This is a rough intro to installing the +SIP+ server for Evergreen.
+
+=== Getting the code ===
+
+Current +SIP+ server code lives in the Evergreen git repository:
+
+  cd /opt
+  git clone git://git.evergreen-ils.org/SIPServer.git SIPServer
+
+
+=== Configuring the Server ===
+
+indexterm:[configuration files, oils_sip.xml]
+
+. Type the following commands from the command prompt:
+
+  $ sudo su opensrf
+  $ cd /openils/conf
+  $ cp oils_sip.xml.example oils_sip.xml
+
+. Edit oils_sip.xml. Change the commented-out section to this:
+
+
+
+. max_servers will directly correspond to the number of allowed +SIP+ clients. Set the number accordingly, but
+bear in mind that too many connections can exhaust memory. On a 4G RAM/4 CPU server (that is also running
+Evergreen), it is not recommended to exceed 100 +SIP+ client connections.
+
+==== Setting the encoding ====
+
+SIPServer looks for the encoding in the following
+places:
+
+1. An +encoding+ attribute on the +account+ element for the currently active SIP account.
+2. The +encoding+ element that is a child of the +institution+ element of the currently active SIP account.
+3. The +encoding+ element that is a child of the +implementation_config+ element that is itself a child of the
+institution+ element of the currently active SIP account.
+4. If none of the above exist, then the default encoding (ASCII) is used.
+
+Option 3 is a legacy option. It is recommended that you alter your configuration to
+move this element out of the +implementation_config+ element and into
+its parent +institution+ element. Ideally, SIPServer should *not* look into
+the implementation config, and this check may be removed at some time
+in the future.
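To make those lookup locations concrete, here is a rough sketch of an oils_sip.xml fragment. The element and attribute names follow the example file shipped with SIPServer, but treat the ids and the +utf8+ value as illustrative assumptions rather than a drop-in configuration:

----
<accounts>
    <!-- Option 1: encoding attribute on the account (login) element -->
    <login id="sip_01" password="sip_01" institution="gapines" encoding="utf8"/>
</accounts>

<institutions>
    <institution id="gapines" implementation="ILS">
        <!-- Option 2 (recommended): encoding element under institution -->
        <encoding>utf8</encoding>
        <implementation_config>
            <!-- Option 3 (legacy): encoding under implementation_config -->
            <encoding>utf8</encoding>
        </implementation_config>
    </institution>
</institutions>
----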
+ +==== Datatypes ==== + +The `msg64_hold_datatype` setting is similar to `msg64_summary_datatype`, but affects holds instead of circulations. +When set to `barcode`, holds information will be delivered as a set of copy barcodes instead of title strings for +patron info requests. With barcodes, SIP clients can both find the title strings for display (via item info requests) +and make subsequent hold-related action requests, like holds cancellation. + + +=== Adding SIP Users === + +indexterm:[configuration files, oils_sip.xml] + +. Type the following commands from the command prompt: + + $ sudo su opensrf + $ cd /openils/conf + +. In the ++ section, add +SIP+ client login information. Make sure that all ++ use the same +institution attribute, and make sure the institution is listed in ++. All attributes in the +++ section will be used by the +SIP+ client. + +. In Evergreen, create a new profile group called +SIP+. This group should be a sub-group of +Users+ (not +Staff+ +or +Patrons+). Set _Editing Permission_ as *group_application.user.sip_client* and give the group the following +permissions: ++ + COPY_CHECKIN + COPY_CHECKOUT + CREATE_PAYMENT + RENEW_CIRC + VIEW_CIRCULATIONS + VIEW_COPY_CHECKOUT_HISTORY + VIEW_PERMIT_CHECKOUT + VIEW_USER + VIEW_USER_FINES_SUMMARY + VIEW_USER_TRANSACTIONS ++ +OR use SQL like: ++ + + INSERT INTO permission.grp_tree (name,parent,description,application_perm) + VALUES ('SIP', 1, 'SIP2 Client Systems', 'group_application.user.sip_client'); + + INSERT INTO + permission.grp_perm_map (grp, perm, depth, grantable) + SELECT + g.id, p.id, 0, FALSE + FROM + permission.grp_tree g, + permission.perm_list p + WHERE + g.name = 'SIP' AND + p.code IN ( + 'COPY_CHECKIN', + 'COPY_CHECKOUT', + 'RENEW_CIRC', + 'VIEW_CIRCULATIONS', + 'VIEW_COPY_CHECKOUT_HISTORY', + 'VIEW_PERMIT_CHECKOUT', + 'VIEW_USER', + 'VIEW_USER_FINES_SUMMARY', + 'VIEW_USER_TRANSACTIONS' + ); ++ +Verify: ++ + + SELECT * + FROM permission.grp_perm_map pgpm + INNER JOIN 
permission.perm_list ppl ON pgpm.perm = ppl.id
+        INNER JOIN permission.grp_tree pgt ON pgt.id = pgpm.grp
+    WHERE pgt.name = 'SIP';
+
+
+
+. For each account created in the ++ section of oils_sip.xml, create a user (via the staff client user
+editor) that has the same username and password and put that user into the +SIP+ group.
+
+[NOTE]
+===================
+The expiration date will affect the +SIP+ users' connection so you might want to make a note of this
+somewhere.
+===================
+
+=== Running the server ===
+
+To start the +SIP+ server, type the following commands from the command prompt:
+
+
+  $ sudo su opensrf
+
+  $ oils_ctl.sh -a [start|stop|restart]_sip
+
+indexterm:[SIP]
+
+
+=== Logging-SIP ===
+
+==== Syslog ====
+
+indexterm:[syslog]
+
+
+It is useful to log +SIP+ requests to a separate file, especially during initial setup, by modifying your syslog config file.
+
+. Edit syslog.conf.
+
+  $ sudo vi /etc/syslog.conf   # maybe /etc/rsyslog.conf
+
+
+. Add this:
+
+  local6.*                -/var/log/SIP_evergreen.log
+
+. Syslog expects the logfile to exist, so create the file.
+
+  $ sudo touch /var/log/SIP_evergreen.log
+
+. Restart sysklogd.
+
+  $ sudo /etc/init.d/sysklogd restart
+
+
+==== Syslog-NG ====
+
+indexterm:[syslog-NG]
+
+. Edit logging config.
+
+  sudo vi /etc/syslog-ng/syslog-ng.conf
+
+. Add:
+
+  # +SIP2+ for Evergreen
+  filter f_eg_sip { level(warn, err, crit) and facility(local6); };
+  destination eg_sip { file("/var/log/SIP_evergreen.log"); };
+  log { source(s_all); filter(f_eg_sip); destination(eg_sip); };
+
+. Syslog-ng expects the logfile to exist, so create the file.
+
+  $ sudo touch /var/log/SIP_evergreen.log
+
+. Restart syslog-ng.
+
+  $ sudo /etc/init.d/syslog-ng restart
+
+
+indexterm:[SIP]
+
+
+=== Testing Your SIP Connection ===
+
+* In the root directory of the SIPServer code:
+
+  $ cd SIPServer/t
+
+* Edit SIPtest.pm, change the $instid, $server, $username, and $password variables. This will be
+enough to test connectivity.
To run all tests, you'll need to change all the variables in the _Configuration_ section. + + $ PERL5LIB=../ perl 00sc_status.t ++ +This should produce something like: ++ + + 1..4 + ok 1 - Invalid username + ok 2 - Invalid username + ok 3 - login + ok 4 - SC status + +* Don't be dismayed at *Invalid Username*. That's just one of the many tests that are run. + +=== More Testing === + +Once you have opened up either the +SIP+ OR +SIP2+ ports to be accessible from outside you can do some testing +via +telnet+. In the following tests: + +* Replace +$server+ with your server hostname (or +localhost+ if you want to + skip testing external access for now); +* Replace +$username+, +$password+, and +$instid+ with the corresponding values + in the ++ section of your SIP configuration file; +* Replace the +$user_barcode+ and +$user_password+ variables with the values + for a valid user. +* Replace the +$item_barcode+ variable with the values for a valid item. + +/////////////// +Comments because we don't want to indent these numbered bullets! +/////////////// + +. Start by testing your ability to log into the SIP server: ++ +[NOTE] +====================== +We are using 6001 here which is associated with +SIP2+ as per our configuration. +====================== ++ + $ telnet $server 6001 + Connected to $server. + Escape character is '^]'. + 9300CN$username|CO$password|CP$instid ++ +If successful, the SIP server returns a +941+ result. A result of +940+, +however, indicates an unsuccessful login attempt. Check the ++ +section of your SIP configuration and try again. + +. 
Once you have logged in successfully, replace the variables in the following +line and paste it into the telnet session: ++ + 2300120080623 172148AO$instid|AA$user_barcode|AC$password|AD$user_password ++ +If successful, the SIP server returns the patron information for $user_barcode, +similar to the following: ++ + 24 Y 00120100113 170738AEFirstName MiddleName LastName|AA$user_barcode|BLY|CQY + |BHUSD|BV0.00|AFOK|AO$instid| ++ +The response declares it is a valid patron BLY with a valid password CQY and shows the user's +$name+. + +. To test the SIP server's item information response, issue the following request: ++ + 1700120080623 172148AO$instid|AB$item_barcode|AC$password ++ +If successful, the SIP server returns the item information for $item_barcode, +similar to the following: ++ + 1803020120160923 190132AB30007003601852|AJRégion de Kamouraska|CK001|AQOSUL|APOSUL|BHCAD + |BV0.00|BGOSUL|CSCA2 PQ NR46 73R ++ +The response declares it is a valid item, with the title, owning library, +permanent and current locations, and call number. + +indexterm:[SIP] + +== SIP Communication == + +indexterm:[SIP Server, SIP Communication] + ++SIP+ generally communicates over a +TCP+ connection (either raw sockets or over +telnet+), but can also +communicate via serial connections and other methods. In Evergreen, the most common deployment is a +RAW+ socket +connection on port 6001. + ++SIP+ communication consists of strings of messages, each message request and response begin with a 2-digit +``command'' - Requests usually being an odd number and responses usually increased by 1 to be an even number. The +combination numbers for the request command and response is often referred to as a _Message Pair_ (for example, +a 23 command is a request for patron status, a 24 response is a patron status, and the message pair 23/24 is patron +status message pair). The table in the next section shows the message pairs and a description of them. 
+ +For clarification, the ``Request'' is from the device (selfcheck or otherwise) to the ILS/ACS. The response is… the +response to the request ;). + +Within each request and response, a number of fields (either a fixed width or separated with a | [pipe symbol] and +preceded with a 2-character field identifier) are used. The fields vary between message pairs. + +|=========================================================================== +| *Pair* | *Name* | *Supported?* |*Details* +| 01 | Block Patron | Yes |<> - ACS responds with 24 Patron Status Response +| 09-10 | Checkin | Yes (with extensions) |<> +| 11-12 | Checkout | Yes (no renewals) |<> +| 15-16 | Hold | Partially supported |<> +| 17-18 | Item Information | Yes (no extensions) |<> +| 19-20 | Item Status Update | No |<> - Returns Patron Enable response, but doesn't make any changes in EG +| 23-24 | Patron Status | Yes |<> - 63/64 ``Patron Information'' preferred +| 25-26 | Patron Enable | No |<> - Used during system testing and validation +| 29-30 | Renew | Yes |<> +| 35-36 | End Session | Yes |<> +| 37-38 | Fee Paid | Yes |<> +| 63-64 | Patron Information | Yes (no extensions) |<> +| 65-66 | Renew All | Yes |<> +| 93-94 | Login | Yes |<> - Must be first command to Evergreen ACS (via socket) or +SIP+ will terminate +| 97-96 | Resend last message | Yes |<> +| 99-98 | SC-ACS Status | Yes |<> +|=========================================================================== + +[#sip_01_block_patron] + +=== 01 Block Patron === + +indexterm:[SelfCheck] + +A selfcheck will issue a *Block Patron* command if a patron leaves their card in a selfcheck machine or if the +selfcheck detects tampering (such as attempts to disable multiple items during a single item checkout, multiple failed +pin entries, etc). + +In Evergreen, this command does the following: + +* User alert message: _CARD BLOCKED BY SELF-CHECK MACHINE_ (this is independent of the AL _Blocked +Card Message_ field). + +* Card is marked inactive. 
+
+The request looks like:
+
+  01[fields AO, AL, AA, AC]
+
+_Card Retained_: A single character field of Y or N - tells the ACS whether the SC has retained the card (ex: left in
+the machine) or not.
+
+_Date_: An 18 character field for the date/time when the block occurred.
+
+_Format_: YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time, ``Z'' (3 blanks and a Z)
+represents UTC (GMT/Zulu)).
+
+_Fields_: See <> for more details.
+
+The response is a 24 ``Patron Status Response'' with the following:
+
+* Charge privileges denied
+* Renewal privileges denied
+* Recall privileges denied (hard-coded in every 24 or 64 response)
+* Hold privileges denied
+* Screen Message 1 (AF): _blocked_
+* Patron
+
+[#sip_09-10_checkin]
+
+=== 09/10 Checkin ===
+
+The request looks like:
+
+  09[Fields AP,AO,AB,AC,CH,BI]
+
+_No Block (Offline)_: A single character field of _Y_ or _N_ - Offline transactions are not currently supported so send _N_.
+
+_xact date_: an 18 character field for the date/time when the checkin occurred. Format:
+YYYYMMDDZZZZHHMMSS (ZZZZ being zone - 4 blanks when local time, ``Z'' (3 blanks and a Z) represents
+UTC (GMT/Zulu)).
+
+_Fields_: See <> for more details.
+
+The response is a 10 ``Checkin Response'' with the following:
+
+  10[Fields AO,AB,AQ,AJ,CL,AA,CK,CH,CR,CS,CT,CV,CY,DA,AF,AG]
+
+Example (with a remote hold):
+
+  09N20100507 16593720100507 165937APCheckin Bin 5|AOBR1|AB1565921879|ACsip_01|
+
+  101YNY20100623 165731AOBR1|AB1565921879|AQBR1|AJPerl 5 desktop reference|CK001|CSQA76.73.P33V76 1996
+  |CTBR3|CY373827|DANicholas Richard Woodard|CV02|
+
+Here you can see a hold alert for patron CY _373827_, named DA _Nicholas Richard Woodard_, to be picked up at CT
+``BR3''. Since the transaction is happening at AO ``BR1'', the alert type CV is 02 for _hold at remote library_.
The +possible values for CV are: + +* 00: unknown + +* 01: local hold + +* 02: remote hold + +* 03: ILL transfer (not used by EG) + +* 04: transfer + +* 99: other + +indexterm:[magnetic media] + +[NOTE] +=============== +The logic for Evergreen to determine whether the content is magnetic_media comes from +or search_config_circ_modifier. The default is non-magnetic. The same is true for media_type (default +001). Evergreen does not populate the collection_code because it does not really have any, but it will provide +the call_number where available. + +Unlike the +item_id+ (barcode), the +title_id+ is actually a title string, unless the configuration forces the +return of the bib ID. + +Don't be confused by the different branches that can show up in the same response line. + +* AO is where the transaction took place, + +* AQ is the ``permanent location'', and + +* CT is the _destination location_ (i.e., pickup lib for a hold or target lib for a transfer). +=============== + +[#sip_11-12_checkout] + +=== 11/12 Checkout === + + +[#sip_15-16_hold] + +=== 15/16 Hold === + +Evergreen supports the Hold message for the purpose of canceling +holds. It does not currently support creating hold requests via SIP2. + + +[#sip_17-18_item_information] + +=== 17/18 Item Information === + +The request looks like: + + 17[fields: AO,AB,AC] + +The request is very terse. AC is optional. + +The following response structure is for +SIP2+. (Version 1 of the protocol had only 6 total fields.) + + 18 + [fields: CF,AH,CJ,CM,AB,AJ,BG,BH,BV,CK,AQ,AP,CH,AF,AG,+CT,+CS] + +Example: + + 1720060110 215612AOBR1|ABno_such_barcode| + + 1801010120100609 162510ABno_such_barcode|AJ| + + 1720060110 215612AOBR1|AB1565921879| + + 1810020120100623 171415AB1565921879|AJPerl 5 desktop reference|CK001|AQBR1|APBR1|BGBR1 + |CTBR3|CSQA76.73.P33V76 1996| + +The first case is with a bogus barcode. The latter shows an item with a circulation_status of _10_ for _in transit between +libraries_. 
The known values of +circulation_status+ are enumerated in the spec. + +indexterm:[Automated Material Handling (AMH)] + +EXTENSIONS: The CT field for _destination location_ and CS _call number_ are used by Automated Material Handling +systems. + + +[#sip_19-20_item_status_update] + +=== 19/20 Item Status Update === + + +[#sip_23-24_patron_status] + +=== 23/24 Patron Status === + +Example: + + 2300120060101 084235AOUWOLS|AAbad_barcode|ACsip_01|ADbad_password| + + 24YYYY 00120100507 013934AE|AAbad_barcode|BLN|AOUWOLS| + + 2300120060101 084235AOCONS|AA999999|ACsip_01|ADbad_password| + + 24 Y 00120100507 022318AEDoug Fiander|AA999999|BLY|CQN|BHUSD|BV0.00|AFOK|AOCONS| + + 2300120060101 084235AOCONS|AA999999|ACsip_01|ADuserpassword|LY|CQN|BHUSD|BV0.00|AFOK|AOCONS| + + 24 Y 00120100507 022803AEDoug Fiander|AA999999|BLY|CQY|BHUSD|BV0.00|AFOK|AOCONS| + +. The BL field (+SIP2+, optional) is _valid patron_, so the _N_ value means _bad_barcode_ doesn't match a patron, the +_Y_ value means 999999 does. + +. The CQ field (+SIP2+, optional) is _valid password_, so the _N_ value means _bad_password_ doesn't match 999999's +password, the _Y_ means _userpassword_ does. + +So if you were building the most basic +SIP2+ authentication client, you would check for _|CQY|_ in the response to +know the user's barcode and password are correct (|CQY| implies |BLY|, since you cannot check the password +unless the barcode exists). However, in practice, depending on the application, there are other factors to consider in +authentication, like whether the user is blocked from checkout, owes excessive fines, reported their card lost, etc. +These limitations are reflected in the 14-character _patron status_ string immediately following the _24_ code. See the +field definitions in your copy of the spec. + + +[#sip_25-26_patron_enable] + +=== 25/26 Patron Enable === + +Not yet supported. + + +[#sip_29-30_renew] + +=== 29/30 Renew === + +Evergreen supports the Renew message. 
Evergreen checks whether a penalty is specifically configured to block +renewals before blocking any SIP renewal. + + +[#sip_35-36_end_session] + +=== 35/36 End Session === + + 3520100505 115901AOBR1|AA999999| + + 36Y20100507 161213AOCONS|AA999999|AFThank you!| + +The _Y/N_ code immediately after the 36 indicates _success/failure_. Failure is not particularly meaningful or important +in this context, and for evergreen it is hardcoded _Y_. + + + +[#sip_37-38_fee_paid] + +=== 37/38 Fee Paid === + +Evergreen supports the Fee Paid message. + + +[#sip_63-64_patron_information] + +=== 63/64 Patron Information === + +Attempting to retrieve patron info with a bad barcode: + + 6300020060329 201700 AOBR1|AAbad_barcode| + + 64YYYY 00020100623 141130000000000000000000000000AE|AAbad_barcode|BLN|AOBR1| + +Attempting to retrieve patron info with a good barcode (but bad patron password): + + 6300020060329 201700 AOBR1|AA999999|ADbadpwd| + + 64 Y 00020100623 141130000000000000000000000000AA999999|AEDavid J. Fiander|BHUSD|BV0.00 + |BD2 Meadowvale Dr. St Thomas, ON Canada + + 90210|BEdjfiander@somemail.com|BF(519) 555 1234|AQBR1|BLY|CQN|PB19640925|PCPatrons + |PIUnfiltered|AFOK|AOBR1| + +See <> for info on +BL+ and +CQ+ fields. + + + +[#sip_65-66_renew_all] + +=== 65/66 Renew All === + +Evergreen supports the Renew All message. + + +[#sip_93-94_login] + +=== 93/94 Login === + +Example: + + 9300CNsip_01|CObad_value|CPBR1| + + [Connection closed by foreign host.] + ... + + 9300CNsip_01|COsip_01|CPBR1| + + 941 + +_941_ means successful terminal login. _940_ or getting dropped means failure. + +When using a version of SIPServer that supports the feature, the Location (CP) field of the Login (93) message will be used as the workstation name if supplied. Blank or missing location fields will be ignored. This allows users or reports to determine which selfcheck performed a circulation. 
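To make the login exchange concrete, here is a minimal sketch -- illustrative only, not part of SIPServer -- that builds the 93 request shown above and interprets the one-character result of the 94 response:

```python
def build_login(username, password, location):
    # 93 = Login request; the two "0"s are the fixed-width UID/PWD
    # algorithm fields (0 = plain text). CN/CO/CP are the variable-length
    # field identifiers for login user id, password, and location code.
    return "9300CN{0}|CO{1}|CP{2}|".format(username, password, location)

def login_succeeded(response):
    # A "941" Login Response means the terminal login succeeded;
    # "940" (or a dropped connection) means it failed.
    return response.startswith("941")

print(build_login("sip_01", "sip_01", "BR1"))  # 9300CNsip_01|COsip_01|CPBR1|
```

In practice the request would be written to the raw socket (typically port 6001) and the response read back, exactly as in the telnet tests earlier in this chapter.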
+
+
+[#sip_97-96_resend]
+
+=== 97/96 Resend ===
+
+
+[#sip_99-98_sc_and_acs_status]
+
+=== 99/98 SC and ACS Status ===
+
+  99
+
+All 3 fields are required:
+
+* 0: SC is OK
+
+* 1: SC is out of paper
+
+* 2: SC shutting down
+
+* status code - 1 character
+
+* max print width - 3 characters - the integer number of characters the client can print
+
+* protocol version - 4 characters - x.xx
+
+  98
+
+
+
+
+
+Example:
+
+  9910302.00
+
+  98YYYYNN60000320100510 1717202.00AOCONS|BXYYYYYYYYYNYNNNYN|
+
+The Supported Messages field +BX+ appears only in +SIP2+, and specifies whether 16 different +SIP+ commands are
+supported by the +ACS+ or not.
+
+
+[#fields]
+
+=== Fields ===
+
+All fixed-length fields in a communication will appear before the first variable-length field. This allows for simple
+parsing. Variable-length fields are by definition delimited, though there will not necessarily be an initial delimiter
+between the last fixed-length field and the first variable-length one. It would be unnecessary, since you should know
+the exact position where that field begins already.
+
+
+== Patron privacy and the SIP protocol ==
+
+SIP traffic includes a lot of patron information, and is not
+encrypted by default. It is strongly recommended that you
+encrypt any SIP traffic.
+
+=== SIP server configuration ===
+
+On the SIP server, use `iptables` or `/etc/hosts.allow` to allow SSH connections on port 22 from the SIP client machine. You will probably want to have very restrictive rules
+on which IP addresses can connect to this server.
+
+
+=== SSH tunnels on SIP clients ===
+
+SSH tunnels are a good fit for use cases like self-check machines, because it is relatively easy to automatically open the connection. Using a VPN is another option,
+but many VPN clients require manual steps to open the VPN connection.
+
+. If the SIP client will be on a Windows machine, install cygwin on the SIP client.
+. On the SIP client, use `ssh-keygen` to generate an SSH key.
+.
Add the public key to /home/my_sip_user/.ssh/authorized_keys on your SIP server to enable logins without using the UNIX password. +. Configure an SSH tunnel to open before every connection. You can do this in several ways: +.. If the SIP client software allows you to run an arbitrary command before + each SIP connection, use something like this: ++ +[source,bash] +---- +ssh -f -L 6001:localhost:6001 my_sip_user@my_sip_server.com sleep 10 +---- ++ +.. If you feel confident that the connection won't get interrupted, you can have something like this run at startup: ++ +[source,bash] +---- +ssh -f -N -L 6001:localhost:6001 my_sip_user@my_sip_server.com +---- ++ +.. If you want to constantly poll to make sure that the connection is still running, you can do something like this as a cron job or scheduled task on the SIP client machine: +[source,bash] +---- +#!/bin/bash +instances=`/bin/ps -ef | /bin/grep ssh | /bin/grep -v grep | /bin/wc -l` +if [ $instances -eq 0 ]; then + echo "Restarting ssh tunnel" + /usr/bin/ssh -L 6001:localhost:6001 my_sip_user@my_sip_server.com -f -N +fi +---- + diff --git a/docs/modules/admin/pages/sitemap_admin.adoc b/docs/modules/admin/pages/sitemap_admin.adoc new file mode 100644 index 0000000000..ded320bb3a --- /dev/null +++ b/docs/modules/admin/pages/sitemap_admin.adoc @@ -0,0 +1,39 @@ +=== Running the sitemap generator === +The `sitemap_generator` script must be invoked with the following argument: + +* `--lib-hostname`: specifies the hostname for the catalog (for example, + `--lib-hostname https://catalog.example.com`); all URLs will be generated + appended to this hostname + +Therefore, the following arguments are useful for generating multiple sitemaps +per Evergreen instance: + +* `--lib-shortname`: limit the list of record URLs to those which have copies + owned by the designated library or any of its children; +* `--prefix`: provides a prefix for the sitemap index file names + +Other options enable you to override the OpenSRF 
configuration file and the +database connection credentials, but the default settings are generally fine. + +Note that on very large Evergreen instances, sitemaps can consume hundreds of +megabytes of disk space, so ensure that your Evergreen instance has enough room +before running the script. + +=== Sitemap details === + +The sitemap generator script includes located URIs as well as items + listed in the `asset.opac_visible_copies` materialized view, and checks + the children or ancestors of the requested libraries for holdings as well. + +=== Scheduling === +To enable search engines to maintain a fresh index of your bibliographic +records, you may want to include the script in your cron jobs on a nightly or +weekly basis. + +Sitemap files are generated in the same directory from which the script is +invoked, so a cron entry will look something like: + +------------------------------------------------------------------------ +12 2 * * * cd /openils/var/web && /openils/bin/sitemap_generator +------------------------------------------------------------------------ + diff --git a/docs/modules/admin/pages/staff_client-column_picker.adoc b/docs/modules/admin/pages/staff_client-column_picker.adoc new file mode 100644 index 0000000000..4d047a31aa --- /dev/null +++ b/docs/modules/admin/pages/staff_client-column_picker.adoc @@ -0,0 +1,44 @@ += Column Picker = +:toc: + +indexterm:[Column Picker] + +From many screens and lists, you can click on the column picker +drop-down menu to change which columns are displayed. + +image::media/column_picker_web.png[Column picker menu options] + + +To show or hide a column, simply click the column name in the menu. For +more advanced control of column visibility and their position in the +grid, choose *Manage Columns* from the menu. The popup saves changes +as they are made. + +Columns at the top of the list will appear at the left end of the grid. 
+ +image::media/column_picker_popup.png[Column picker popup window] + + +To adjust the width of columns, choose *Manage Column Widths* from +the menu, then click the "Expand" or "Shrink" icons in each column. +These can be clicked multiple times to reach the desired width. + +image::media/column_picker_config_widths.png[Column picker manage widths] + + +After customizing the display you may save your changes by choosing +*Save Columns* from the drop-down menu. These settings are stored in the +browser and are not connected with a specific login or registered +workstation. Each computer will need to be configured separately. + +image::media/column_picker_web_save.png[column_picker_web_save] + + +Some lists have a different design, and some of them can also be customized. +Simply right-click the header row of any of the columns, and the column +picker will appear. When you are finished customizing the display, scroll +to the bottom of the Column Picker window and click *Save*. + +image::media/column_picker_dojo.png[column_picker_dojo] + + diff --git a/docs/modules/admin/pages/staff_client-recent_searches.adoc b/docs/modules/admin/pages/staff_client-recent_searches.adoc new file mode 100644 index 0000000000..880ffd1a6f --- /dev/null +++ b/docs/modules/admin/pages/staff_client-recent_searches.adoc @@ -0,0 +1,40 @@ += Recent Staff Searches = +:toc: + +This feature enables you to view your recent searches as you perform them in the staff client. The number of searches that you can view is configurable. This feature is only available through the staff client; it is not available to patrons in the OPAC. + +== Administrative Settings == + +By default, ten searches will be saved as you search the staff client. If you want to change the number of saved searches, then you can configure the number of searches that you wish to save through the *Library Settings Editor* in the *Admin* module. + +To configure the number of recent staff searches: + +. 
Click *Administration -> Local Administration -> Library Settings Editor.* +. Scroll to *OPAC: Number of staff client saved searches to display on left side of results and record details pages* +. Click *Edit*. +. Select a *Context* from the drop-down menu. +. Enter the number of searches that you wish to save in the *Value* field. +. Click *Update Setting*. + +image::media/Saved_Catalog_Searches_2_21.jpg[Saved_Catalog_Searches_2_21] + + +NOTE: To retain this setting, the system administrator must restart the web server. + +If you do not want to save any searches, then you can turn off this feature. + +To deactivate this feature: + +. Follow steps 1-4 (one through four) as listed in the previous section. +. In the *Value* field, enter 0 (zero). +. Click *Update Setting.* This will prevent you from viewing any saved searches. + + +== Recent Staff Searches == + +Evergreen will save staff searches that are entered through either the basic or advanced search fields. To view recent staff searches: + +. Enter a search term in either the basic or advanced search fields. +. Your search results for the current search will appear in the middle of the screen. The most recent searches will appear on the left side of the screen. + +image::media/Saved_Catalog_Searches_2_22.jpg[Saved_Catalog_Searches_2_22] diff --git a/docs/modules/admin/pages/staff_client-return_to_results_from_marc.adoc b/docs/modules/admin/pages/staff_client-return_to_results_from_marc.adoc new file mode 100644 index 0000000000..b7f6d1139f --- /dev/null +++ b/docs/modules/admin/pages/staff_client-return_to_results_from_marc.adoc @@ -0,0 +1,7 @@ += Return to Search Results from MARC Record = +:toc: + +This feature enables you to return to your title search results directly from any view of the MARC record, including the OPAC View, MARC Record, MARC Edit, and Holdings Maintenance. You can use this feature to page through records in the MARC Record View or Edit interfaces. 
You do not have to return to the OPAC View to access title results; simply click the button marked _Back To Results_. + + +image::media/back_to_results.png[Search_Results1] + diff --git a/docs/modules/admin/pages/staff_from_command_line.adoc b/docs/modules/admin/pages/staff_from_command_line.adoc new file mode 100644 index 0000000000..cae11e25af --- /dev/null +++ b/docs/modules/admin/pages/staff_from_command_line.adoc @@ -0,0 +1,24 @@ += Managing Staff from the Command Line = + +== Changing passwords == + +If you need to change a patron or staff account password without using the staff client, here is how you can reset it with SQL. + +Connect to your Evergreen database using _psql_ or a similar tool, and retrieve and verify your admin username: + +[source, sql] +------------------------------------------------------------------------------ +psql -U <user> -h <hostname> -d <database> + +SELECT id, usrname, passwd from actor.usr where usrname = 'admin'; +------------------------------------------------------------------------------ + +If you do not remember the username that you set, search for it in the _actor.usr_ table, and then reset the password. + +[source, sql] +------------------------------------------------------------------------------ +UPDATE actor.usr SET passwd = '<new_password>' WHERE id=<user_id>; +------------------------------------------------------------------------------ + +The new password will automatically be hashed. + diff --git a/docs/modules/admin/pages/template_toolkit.adoc b/docs/modules/admin/pages/template_toolkit.adoc new file mode 100644 index 0000000000..ac474487ab --- /dev/null +++ b/docs/modules/admin/pages/template_toolkit.adoc @@ -0,0 +1,284 @@ += TPac Configuration and Customization = +:toc: + +== Template toolkit documentation == + +For more general information about Template Toolkit, see the http://template-toolkit.org/docs/index.html[official +documentation]. + +The purpose of this chapter is to focus on the +Evergreen-specific uses of Template Toolkit ('TT') in the OPAC. 
+ +== TPAC URL == + +The URL for the TPAC on a default Evergreen system is +http://localhost/eg/opac/home (adjust `localhost` to match your hostname or IP +address, naturally!) + +== Perl modules used directly by TPAC == + + * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader.pm` + * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Account.pm` + * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Container.pm` + * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Record.pm` + * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Search.pm` + * `Open-ILS/src/perlmods/lib/OpenILS/WWW/EGCatLoader/Util.pm` + +== Default templates == + +The source template files are found in `Open-ILS/src/templates/opac`. + +These template files are installed in `/openils/var/templates/opac`. + +.NOTE +You should generally avoid touching the installed default template files, +unless you are contributing changes that you want Evergreen to adopt as a new +default. Even then, while you are developing your changes, consider using +template overrides rather than touching the installed templates until you are +ready to commit the changes to a branch. See below for information on template +overrides. + +== Apache configuration files == + +The base Evergreen configuration file on Debian-based systems can be found in +`/etc/apache2/sites-enabled/eg.conf`. This file defines the basic virtual host +configuration for Evergreen (hostnames and ports), then single-sources the +bulk of the configuration for each virtual host by including +`/etc/apache2/eg_vhost.conf`. + +== TPAC CSS and media files == + +The CSS files used by the default TPAC templates are stored in the repo in +`Open-ILS/web/css/skin/default/opac/` and installed in +`/openils/var/web/css/skin/default/opac/`. + +The media files--mostly PNG images--used by the default TPAC templates are +stored in the repo in `Open-ILS/web/images/` and installed in +`/openils/var/web/images/`. 
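When customizing, it can be handy to check whether the installed CSS and media trees have drifted from the repo copies listed above. A minimal sketch (the helper name is made up for this example, and the source checkout path shown in the comment is an assumption):

```shell
# List files that differ between a source tree and the installed tree.
# Thin wrapper around diff -rq; exits non-zero when anything differs.
changed_files() {
    diff -rq "$1" "$2"
}

# Example invocation, assuming a checkout in ~/Evergreen:
#   changed_files ~/Evergreen/Open-ILS/web/css/skin/default/opac \
#                 /openils/var/web/css/skin/default/opac
```

A clean run (exit status 0, no output) means the installed copies are pristine; any output points at local modifications worth preserving before an upgrade.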
+ +== Mapping templates to URLs == + +The mapping for templates to URLs is straightforward. Following are a few +examples, where `<templates>` is a placeholder for one or more directories +that will be searched for a match: + + * `http://localhost/eg/opac/home` => `/openils/var/<templates>/opac/home.tt2` + * `http://localhost/eg/opac/advanced` => `/openils/var/<templates>/opac/advanced.tt2` + * `http://localhost/eg/opac/results` => `/openils/var/<templates>/opac/results.tt2` + +The template files themselves can process, be wrapped by, or include other +template files. For example, the `home.tt2` template currently involves a +number of other template files to generate a single HTML file: + +.Example Template Toolkit file: opac/home.tt2 +[source, html] +------------------------------------------------------------------------------ +[% PROCESS "opac/parts/header.tt2"; + WRAPPER "opac/parts/base.tt2"; + INCLUDE "opac/parts/topnav.tt2"; + ctx.page_title = l("Home") %] +
+ [% INCLUDE "opac/parts/searchbar.tt2" %] +
+
+
+
+ [% INCLUDE "opac/parts/homesearch.tt2" %] +
+
+
+[% END %] +------------------------------------------------------------------------------ + +We will dissect this example in some more detail later, but the important +thing to note is that the file references are relative to the top of the +template directory. + +[#how_to_override_templates] +== How to override templates == + +Overrides for templates go in a directory that parallels the structure of the +default templates directory. The overrides then get pulled in via the Apache +configuration. + +In the following example, we demonstrate how to create a file that overrides +the default "Advanced search page" (`advanced.tt2`) by adding a new templates +directory and editing the new file in that directory. + +.Adding an override for the Advanced search page (example) +[source, bash] +------------------------------------------------------------------------------ +bash$ mkdir -p /openils/var/templates_custom/opac +bash$ cp /openils/var/templates/opac/advanced.tt2 \ + /openils/var/templates_custom/opac/. +bash$ vim /openils/var/templates_custom/opac/advanced.tt2 +------------------------------------------------------------------------------ + +We now need to teach Apache about the new templates directory. Open `eg.conf` +and add the following `<Location>` element to each of the `<VirtualHost>` +elements in which you want to include the overrides. The default Evergreen +configuration includes a `VirtualHost` directive for port 80 (HTTP) and another +one for port 443 (HTTPS); you probably want to edit both, unless you want the +HTTP user experience to be different from the HTTPS user experience. 
.Configuring the custom templates directory in Apache's eg.conf +[source,xml] +------------------------------------------------------------------------------ +<VirtualHost *:80> + # + + # - absorb the shared virtual host settings + Include eg_vhost.conf + + <Location /eg> + PerlAddVar OILSWebTemplatePath "/openils/var/templates_custom" + </Location> + + # + +</VirtualHost> +------------------------------------------------------------------------------ + +Finally, reload the Apache configuration to pick up the changes: + +.Reloading the Apache configuration +[source,bash] +------------------------------------------------------------------------------ +bash# /etc/init.d/apache2 reload +------------------------------------------------------------------------------ + +You should now be able to see your change at http://localhost/eg/opac/advanced + +=== Defining multiple layers of overrides === + +You can define multiple layers of overrides, so if you want every library in +your consortium to have the same basic customizations, and then apply +library-specific customizations, you can define two template directories for +each library. + +In the following example, we define the `templates_CONS` directory as the set of +customizations to apply to all libraries, and `templates_BR#` as the set of +customizations to apply to libraries BR1 and BR2. + +As the consortial customizations apply to all libraries, we can add the +extra template directory directly to `eg_vhost.conf`: + +.Apache configuration for all libraries (eg_vhost.conf) +[source,xml] +------------------------------------------------------------------------------ +# Templates will be loaded from the following paths in reverse order. +PerlAddVar OILSWebTemplatePath "/openils/var/templates" +PerlAddVar OILSWebTemplatePath "/openils/var/templates_CONS" +------------------------------------------------------------------------------ + +Then we define a virtual host for each library to add the second layer of +customized templates on a per-library basis. 
Note that for the sake of brevity +we only show the configuration for port 80. + +.Apache configuration for each virtual host (eg.conf) +[source,xml] +------------------------------------------------------------------------------ +<VirtualHost *:80> + ServerName br1.concat.ca + DocumentRoot /openils/var/web/ + DirectoryIndex index.html index.xhtml + Include eg_vhost.conf + <Location /eg> + PerlAddVar OILSWebTemplatePath "/openils/var/templates_BR1" + </Location> +</VirtualHost> + +<VirtualHost *:80> + ServerName br2.concat.ca + DocumentRoot /openils/var/web/ + DirectoryIndex index.html index.xhtml + Include eg_vhost.conf + <Location /eg> + PerlAddVar OILSWebTemplatePath "/openils/var/templates_BR2" + </Location> +</VirtualHost> +------------------------------------------------------------------------------ + +== Changing some text in the TPAC == + +Out of the box, the TPAC includes a number of placeholder text strings and links. For +example, there is a set of links cleverly named 'Link 1', 'Link 2', and so on +in the header and footer of every page in the TPAC. Let's customize that for +our `templates_BR1` skin. + +To begin with, we need to find the page(s) that contain the text in question. +The simplest way to do that is with the handy utility `ack`, which is much +like `grep` but with built-in recursion and other tricks. On Debian-based +systems, the command is `ack-grep` as `ack` conflicts with an existing utility. 
In the following example, we search for files that contain the text "Link 1": + +.Searching for text matching "Link 1" +[source,bash] +------------------------------------------------------------------------------ +bash$ ack-grep "Link 1" /openils/var/templates/opac +/openils/var/templates/opac/parts/topnav_links.tt2 +4: [% l('Link 1') %] +------------------------------------------------------------------------------ + +Next, we copy the file into our overrides directory and edit it with `vim`: + +.Copying the links file into the overrides directory +[source,bash] +------------------------------------------------------------------------------ +bash$ cp /openils/var/templates/opac/parts/topnav_links.tt2 \ + /openils/var/templates_BR1/opac/parts/topnav_links.tt2 +bash$ vim /openils/var/templates_BR1/opac/parts/topnav_links.tt2 +------------------------------------------------------------------------------ + +Finally, we edit the link text in `opac/parts/topnav_links.tt2`. + +.Content of the opac/parts/topnav_links.tt2 file +[source,html] +------------------------------------------------------------------------------ + +------------------------------------------------------------------------------ + +For the most part, the page looks like regular HTML, but note the +`[% l(" ... ") %]` that surrounds the text of each link. The `[% ... %]` signifies a TT +block, which can contain one or more TT processing instructions. `l(" ... ");` +is a function that marks text for localization (translation); a separate +process can subsequently extract localized text as GNU gettext-formatted PO +files. + +.NOTE +As Evergreen supports multiple languages, any customizations to Evergreen's +default text must use the localization function. Also, note that the +localization function supports placeholders such as `[_1]`, `[_2]` in the text; +these are replaced by the contents of variables passed as extra arguments to +the `l()` function. 
+ +Once we have edited the link and link text to our satisfaction, we can load +the page in our Web browser and see the live changes immediately (assuming +we are looking at the BR1 overrides, of course). + +== Troubleshooting == + +If there is a problem such as a TT syntax error, it generally shows up as +an ugly server failure page. If you check the Apache error logs, you will +probably find some solid clues about the reason for the failure. For example, +the following error message identifies the file in which the +problem occurred as well as the relevant line numbers: + +.Example error message in Apache error logs +[source,bash] +------------------------------------------------------------------------------ +bash# grep "template error" /var/log/apache2/error_log +[Tue Dec 06 02:12:09 2011] [warn] [client 127.0.0.1] egweb: template error: + file error - parse error - opac/parts/record/summary.tt2 line 112-121: + unexpected token (!=)\n [% last_cn = 0;\n FOR copy_info IN + ctx.copies;\n callnum = copy_info.call_number_label;\n +------------------------------------------------------------------------------ + diff --git a/docs/modules/admin/pages/user_activity_type.adoc b/docs/modules/admin/pages/user_activity_type.adoc new file mode 100644 index 0000000000..46732c6ec3 --- /dev/null +++ b/docs/modules/admin/pages/user_activity_type.adoc @@ -0,0 +1,30 @@ += User Activity Types = +:toc: + +The User Activity Types feature enables you to specify the user activity that you want to record in the database. You can use this feature for reporting purposes. This function will also display a last activity date in a user's account. + +== Enabling this Feature == + +Click *Administration* -> *Server Administration* -> *User Activity Types* to access the default set of user activity types and to add new ones. The default set of user activity types records user logins to the Evergreen ILS and to third party products that communicate with Evergreen. 
The *Label* is a free text field that enables you to describe the activity that you are tracking. + +The *Event Caller* describes the third party software or Evergreen interface that interacts with the Evergreen database and is responsible for managing the communication between the parties. + +The *Event Type* describes the type of activity that Evergreen is tracking. Currently, this feature only tracks user authentication. + +The *Event Mechanism* describes the framework for communication between the third party software or OPAC and the database. Enter an event mechanism if you want to track the means by which the software communicates with the database. If you do not want to track how the software communicates, then leave this field empty. + +The *Enabled* field allows you to specify which types of user activity you would like to track. + +The *Transient* column enables you to decide how many actions you want to track. If you want to track only the last activity, then enter *True.* If you want to track all activity by the user, enter *False*. + +image::media/User_Activity_Types1A.jpg[User_Activity_Types1A] + + +== Using this Feature == + +The last activity date for user logins appears in the patron's summary. + +image::media/User_Activity_Types2A.jpg[User_Activity_Types2A] + diff --git a/docs/modules/admin/pages/virtual_index_defs.adoc b/docs/modules/admin/pages/virtual_index_defs.adoc new file mode 100644 index 0000000000..6b20276319 --- /dev/null +++ b/docs/modules/admin/pages/virtual_index_defs.adoc @@ -0,0 +1,53 @@ += Virtual Index Definitions = +:toc: + +Virtual index definitions can be configured in Evergreen to create customized search indexes that make use of data collected by other (real) index definitions. Real index definitions use an XPath expression to indicate the bibliographic data that should be included in the index. Virtual index definitions bring together data collected by other index definitions to create a new, virtual index. 
They can also use an XPath expression to collect data directly for an index, but they are not required to. + +All index definitions can be modified by having other indexes map to them. For example, Genre could be added to the All Subjects field definition in the Subject index. This would allow users to search Genre as part of a Subject search. + +== Keyword Virtual Index Definition == + +Evergreen now uses a virtual index definition for the Keyword index. This allows libraries to customize the keyword search index by specifying which fields are included in the keyword index, as well as how each field should be weighted for relevance ranking in search results. By default, the keyword index contains all of the search fields other than the keyword definition itself. Each field is assigned a weight of 1, with the exception of Title Proper, which is assigned a weight of 8. A match on the Title Proper within a keyword search will be given the higher weight and therefore a higher relevance ranking within search results. + +. To view the stock virtual index definition for keyword searches, go to *Administration>Server Administration>MARC Search/Facet Fields* and select the *Keyword* Search Class. +. Locate the field labeled "All searchable fields". This is the general keyword index. +. The weight of a field can be modified by selecting the field and going to *Actions>Edit Record* or right-clicking and selecting *Edit Record*. +.. The Metabib Field Virtual Map modal will appear. Increase the weight of the field and click *Save*. + +== Configuring Virtual Index Definitions == + +. To configure a virtual index definition, go to *Administration>Server Administration>MARC Search/Facet Fields*. +.. This interface now has a _Search Class_ filter that allows users to easily select which search class they want to view. +. Next, locate the field for which you want to create a virtual index definition and click *Manage* under the column labeled _Data Suppliers_. 
+ +image::media/vid1.PNG[] + +. A new tab will open that contains the interface for configuring a virtual index definition. This interface can be used to map real index definitions for inclusion in the virtual index. + +image::media/vid2.PNG[] + +. To create a mapping, click *New Record*. A modal called _Metabib Field Virtual Map_ will appear. +. Select the _Real_ index definition and the _Virtual_ index definition to which it should be mapped. +. Assign a _Weight_ to the mapping. This allows Evergreen to calculate the weight that should be applied to each field when searched using the virtual index. +.. The weight assigned to a field within a virtual index can be different than the weight assigned when searching that field directly. For example, the Title Proper field can have a weight of 2 when a user performs a Title search, but a weight of 5 when a user performs a Keyword search (using the virtual index). This can help move title matches on keyword searches higher up in the search results list. +. Click *Save*. +. Repeat steps 4-7 until all desired fields are mapped to the virtual index definition. + +image::media/vid3.PNG[] + +Note: A service restart is required after definitions and mapping are changed. Changes to weight only do not require a restart as they are calculated in real time. + +== Search Term Highlighting in Search Results == + +Search terms are now highlighted on the main OPAC search results page, the bibliographic record detail page, and the metarecord grouped results page. This will help users discern why a certain record was included in the search result set, as well as its relevance to the search. Search terms will be highlighted in both real and virtual fields that were searched. Terms that were stemmed or normalized during searching will also be highlighted. Search term highlighting can be turned off within the OPAC by selecting the checkbox to "Disable Highlighting" in the search results interface. 
+ +A keyword search for "piano" returns a set of search results: + +image::media/vid4.PNG[] + +The search term is highlighted in the search results and indicates why the records were included in the search result set. In this example, the search results interface shows the first three records had matching terms in the title field. + +Within the record detail page for "The five piano concertos", we can see the search term also matched on the General Note and Subject fields within the bibliographic record. + +image::media/vid5.PNG[] + diff --git a/docs/modules/admin/pages/web-client-browser-best-practices.adoc b/docs/modules/admin/pages/web-client-browser-best-practices.adoc new file mode 100644 index 0000000000..cd14a827d7 --- /dev/null +++ b/docs/modules/admin/pages/web-client-browser-best-practices.adoc @@ -0,0 +1,66 @@ += Best Practices for Using the Browser = +:toc: + +== Pop-up Blockers == + +Before using the web client, be sure to disable pop-up blockers for your +Evergreen system's domain. + +- In Chrome, select _Settings_ from the Chrome menu and click on _Content +settings_ in the advanced section. Select _Popups_ and then add your domain to +the _Allowed_ list. +- In Firefox, select _Preferences_ from the Firefox menu and then select the +_Content_ panel. Click the _Exceptions_ button and add your domain to the +_Allowed Sites_ list. + + +== Setting Browser Defaults for Web Client == + +To ensure that staff can easily get to the web client portal page on login +without additional steps, you can set the browser's home page to default to the +web client. + +=== Setting the Web Client as the Home Page in Chrome === + +. In the top-right corner of your browser, click the Chrome menu. +. Select *Settings*. +. In the _On startup_ section, select _Open a specific page or set of pages._ +. Click the *Set Pages* link. +. Add _https://localhost/eg/staff/_ to the _Enter URL_ box and click *OK*. + +=== Setting the Web Client as the Home Page in Firefox === + +. 
In the top-right corner of your browser, click the menu button. +. Click *Options*. +. In the _When Firefox starts:_ dropdown menu, select _Show my home page_. +. In the _Home Page_ box, add _https://localhost/eg/staff/_ and click *OK*. + +include::partial$turn-off-print-headers-firefox.adoc[] + +include::partial$turn-off-print-headers-chrome.adoc[] + +== Tab Buttons and Keyboard Shortcuts == + +Now that the client will be loaded in a web browser, users can use browser-based +tab controls and keyboard shortcuts to help with navigation. Below are some +tips for browser navigation that can be used in Chrome and Firefox on Windows +PCs. + +- Use CTRL-T or click the browser's new tab button to open a new tab. +- Use CTRL-W or click the x in the tab to close the tab. +- Undo closing a tab by hitting CTRL-Shift-T. +- To open a link from the web client in a new tab, CTRL-click the link or +right-click the link and select *Open Link in New Tab*. Using this method, you +can also open options from the web client's dropdown menus in a new tab. +- Navigate to the next tab using CTRL-Tab. Go to the previous tab with CTRL-Shift-Tab. + +=== Setting New Tab Behavior === + +Some users may want to automatically open the web client's portal page in a new +tab. Neither Chrome nor Firefox will open your home page by default when you +open a new tab. However, both browsers have optional add-ons that will allow you +to set the browsers to automatically open the home page whenever opening a +new tab. These add-ons may be useful for those libraries that want the new tab +to open to the web client portal page. 
+ + diff --git a/docs/modules/admin/pages/web_client-login.adoc b/docs/modules/admin/pages/web_client-login.adoc new file mode 100644 index 0000000000..65f4ceca05 --- /dev/null +++ b/docs/modules/admin/pages/web_client-login.adoc @@ -0,0 +1,52 @@ += Logging into Evergreen = +:toc: + +== Registering a Workstation == +[#register_workstation] +indexterm:[staff client, registering a workstation] + +Before logging into Evergreen, you must first register a workstation from your +browser. + +[NOTE] +=============== +You will need the permissions to add workstations to your network. If you do +not have these permissions, ask your system administrator for assistance. +=============== + +. When you login for the first time, you will arrive at a screen asking that you +register your workstation ++ +image::media/web_client_workstation_registration.png[] ++ +. Create a unique workstation name. +. Click _Register_ +. After confirming the new workstation is listed in the _Workstations Registered +With This Browser_ menu, click _Use Now_ to return to the login page. Your +newly-registered workstation should be selected by default on the login page. + +== Basic Login == + +indexterm:[staff client, logging in] + +. The default URL to log into the client is _https://localhost/eg/staff/login_ +. Enter your _Username_ and _Password_. +. Verify that the correct workstation is selected and click *Sign In*. + +[[browser_defaults]] + + +== Logging Out == + +indexterm:[staff client, logging out] + +To log out of the client: + +. Click the menu button to the right of your user name in the top-right corner +of the window. +. Select *Log Out* + +[CAUTION] +Exiting all browser windows will automatically log you out of the web client. If +you only close the tab where the web client is loaded, you will remain logged in. 
+ + diff --git a/docs/modules/admin/pages/workstation_admin.adoc b/docs/modules/admin/pages/workstation_admin.adoc new file mode 100644 index 0000000000..162f222961 --- /dev/null +++ b/docs/modules/admin/pages/workstation_admin.adoc @@ -0,0 +1,128 @@ += Workstation Administration = +:toc: + +indexterm:[staff client, configuration] +indexterm:[workstation, configuration] +indexterm:[configuration] + +== Copy Editor: Copy Location Name First == + +indexterm:[copy editor, shelving location] + +By default, when editing item records, the library code is displayed in front of +the shelving location in the _Shelving Location_ field. You may reverse the order by going +to *Administration -> Workstation Administration -> Copy Editor: Copy Location Name +First*. +Simply click it to display the copy location name first. The setting is saved +on the workstation. + +== Font and Sound Settings == + +indexterm:[staff client, fonts, zooming] +indexterm:[staff client, sounds] + +=== In the Staff Client === + +You may change the size of displayed text or turn staff client sounds on +and off. These settings are specific to each workstation and stored on the +local hard disk. They do not affect OPAC font sizes. + +. Select *Administration -> Workstation Administration -> Global Font and Sound +Settings*. +. To turn off the system sounds, like the noise that happens when a patron +with a block is retrieved, check the _disable sound_ box and click _Save +to Disk_. ++ +image::media/workstation_admin-1.jpg[disable sound] ++ +. To change the size of the font, pick the desired option and click _Save +to Disk_. + +image::media/workstation_admin-2.jpg[font size] + +=== In the OPAC === + +It is also possible to zoom in and zoom out when viewing the OPAC in the +staff client, making the font appear larger or smaller. (This will not +affect other screens.) Use *CTRL + +* (plus sign, to zoom in), *CTRL + -* +(minus sign, to zoom out), and *CTRL + 0* (to restore default). 
The +workstation will remember the setting. + +== Select Hotkeys == + +indexterm:[staff client, hotkeys] + +All or some hotkeys can be turned on or off for a particular +workstation: + +. Navigate to *Administration -> Workstation Administration -> Hotkeys -> Current*. +. Select _Default_, _Minimal_, or _None_. ++ +image::media/workstation_admin-3.png[select hotkeys] ++ +* *Default*: including all hotkeys +* *Minimal*: including only those hotkeys that use the CTRL key +* *None*: excluding all hotkeys ++ +. Go back to the above menu. +. Click *Set Workstation Default to Current*. + +To clear the existing default, click *Clear Workstation Default*. + +You can use the *Toggle Hotkeys* button, included in some toolbars, in the top right +corner, to switch your selected hotkeys _on_ or +_off_ for the current login session. +It has the same effect as when you click *Disable Hotkeys* on the _Hotkeys_ menu. + +== Configure Printers == + +indexterm:[staff client, printers] + +Use the Printer Settings Editor to configure printer output for each +workstation. If left unconfigured, Evergreen will use the default printer set in +the workstation's operating system (Windows, OSX, Ubuntu, etc.). + +Evergreen printing works best if you are using recent, hardware-specific printer +drivers. + +. Select *Administration -> Workstation Administration -> Printer Settings Editor*. +. Select the _Printer Context_. At a minimum, set the _Default_ context on each +Evergreen workstation. Repeat the procedure for other contexts if they differ +from the default (e.g. if spine labels should output to a different printer). ++ +image::media/workstation_admin-4.png[printer context] ++ +* *Default*: Default settings for staff client print functions (set for each +workstation). +* *Receipt*: Settings for printing receipts. +* *Label*: Printer settings for spine and pocket labels. +* *Mail*: Settings for printing mailed notices (not yet active). 
+* *Offline*: Applies to all printing from the Offline Interface. ++ +. After choosing a _Printer Context_, click *Set Default Printer* and *Print Test +Page* and follow the prompts. If successful, test output will print to your chosen +printer. ++ +image::media/workstation_admin-5.png[set default printer] ++ +. (optional) To further format or customize printed output, click *Page Settings* and +adjust settings. When finished, click *OK* and print another test page to view +changes. + +image::media/workstation_admin-6.jpg[page setup] + +=== Advanced Settings === + +If you followed the steps above and still cannot print, there are two alternate +print strategies: + +* DOS LPT1 Print (sends unformatted text directly to the parallel port) +* Custom/External Print (configuration required) + +[NOTE] +==================================== +Evergreen cannot print using the Windows Generic/Text Only driver. If this +driver is the only one available, try one of the alternate print strategies +instead. +==================================== + diff --git a/docs/modules/admin/partials/turn-off-print-headers-chrome.adoc b/docs/modules/admin/partials/turn-off-print-headers-chrome.adoc new file mode 100644 index 0000000000..32dda5d0d8 --- /dev/null +++ b/docs/modules/admin/partials/turn-off-print-headers-chrome.adoc @@ -0,0 +1,16 @@ +=== Turning off print headers and footers in Chrome === + +indexterm:[printing,headers] +indexterm:[printing,footers] + +If you are not using Hatch for printing, you will probably want to configure +your browser so that Chrome does not add headers and footers to items printed +on certain printers. For example, if you are printing spine labels, you likely +will not want Chrome to add a date or URL to the margins of your label. + +You can turn off these headers and footers using the following steps: + +. In the Chrome menu, click _Print..._ to open the print preview screen. +. Click _More Settings_. +. Uncheck _Headers and Footers_. 
+ diff --git a/docs/modules/admin/partials/turn-off-print-headers-firefox.adoc b/docs/modules/admin/partials/turn-off-print-headers-firefox.adoc new file mode 100644 index 0000000000..44bdd2fcd9 --- /dev/null +++ b/docs/modules/admin/partials/turn-off-print-headers-firefox.adoc @@ -0,0 +1,30 @@ +=== Turning off print headers and footers in Firefox === + +indexterm:[printing,headers] +indexterm:[printing,footers] + +If you are not using Hatch for printing, you will probably want to configure +your browser so that Firefox does not add headers and footers to items printed +on certain printers. For example, if you are printing spine labels, you likely +will not want Firefox to add a date or URL to the margins of your label. + +You can turn off these headers and footers using the following steps: + +. In the Firefox menu, click _Print..._ to open the print preview screen. +. Click the _Page Setup..._ button. +. Go to the _Margins & Header/Footer_ tab. +. Make sure that all dropdown menus are set to _--blank--_. + +If you only want to turn off those headers and footers for a specific +printer, use these steps: + +. In the Firefox address bar, type link:about:config[]. +. If a warning appears, click _I accept the risk_. +. Type _print_header_ into this screen's search box. +. Double-click on the relevant _print_headerleft_, _print_headerright_, and +_print_headercenter_ entries in the grid. +. Delete any existing data for that setting and click OK. +. Type _print_footer_ into the screen's search box and repeat these steps +for the footer settings. 
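For many workstations, these same prefs (and their footer counterparts) can be pre-seeded in each profile's `user.js` file, which Firefox applies at startup. A minimal sketch, assuming the legacy `print.print_header*`/`print.print_footer*` pref names that correspond to the `about:config` entries above; the profile directory is a placeholder you must supply.

```shell
# Sketch: blank all six print header/footer prefs by appending user_pref lines
# to a Firefox profile's user.js. PROFILE_DIR is a placeholder -- point it at
# a real profile directory (e.g. ~/.mozilla/firefox/<profile>); a temp
# directory is used here so the sketch runs without touching a real profile.
PROFILE_DIR="${PROFILE_DIR:-$(mktemp -d)}"
for pref in print_headerleft print_headercenter print_headerright \
            print_footerleft print_footercenter print_footerright; do
  printf 'user_pref("print.%s", "");\n' "$pref" >> "$PROFILE_DIR/user.js"
done
cat "$PROFILE_DIR/user.js"
```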
+ + diff --git a/docs/modules/admin_initial_setup/_attributes.adoc b/docs/modules/admin_initial_setup/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/admin_initial_setup/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/admin_initial_setup/assets/images/carousel1.png b/docs/modules/admin_initial_setup/assets/images/carousel1.png new file mode 100644 index 0000000000..6ec0e1455f Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/carousel1.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/carousel2.png b/docs/modules/admin_initial_setup/assets/images/carousel2.png new file mode 100644 index 0000000000..c2570ec127 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/carousel2.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/carousel3.png b/docs/modules/admin_initial_setup/assets/images/carousel3.png new file mode 100644 index 0000000000..44eee8b549 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/carousel3.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/carousel4.png b/docs/modules/admin_initial_setup/assets/images/carousel4.png new file mode 100644 index 0000000000..8c6fa31a37 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/carousel4.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/carousel5.png b/docs/modules/admin_initial_setup/assets/images/carousel5.png new file mode 100644 index 0000000000..a49288640c Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/carousel5.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/carousel6.png b/docs/modules/admin_initial_setup/assets/images/carousel6.png new file mode 100644 
index 0000000000..e4106c0673 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/carousel6.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/carousel7.png b/docs/modules/admin_initial_setup/assets/images/carousel7.png new file mode 100644 index 0000000000..5e71110c9d Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/carousel7.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/carousel8.png b/docs/modules/admin_initial_setup/assets/images/carousel8.png new file mode 100644 index 0000000000..85e15412e8 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/carousel8.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/batch_import_profile.png b/docs/modules/admin_initial_setup/assets/images/media/batch_import_profile.png new file mode 100644 index 0000000000..748d36b285 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/batch_import_profile.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/circ_duration_rules.jpg b/docs/modules/admin_initial_setup/assets/images/media/circ_duration_rules.jpg new file mode 100644 index 0000000000..f9d3962b6d Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/circ_duration_rules.jpg differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/circ_example1.png b/docs/modules/admin_initial_setup/assets/images/media/circ_example1.png new file mode 100644 index 0000000000..265d05d59a Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/circ_example1.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/circ_example2.png b/docs/modules/admin_initial_setup/assets/images/media/circ_example2.png new file mode 100644 index 0000000000..652eeb34f5 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/circ_example2.png differ diff --git 
a/docs/modules/admin_initial_setup/assets/images/media/circ_example3.png b/docs/modules/admin_initial_setup/assets/images/media/circ_example3.png new file mode 100644 index 0000000000..fcb62fbf29 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/circ_example3.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/circ_max_fine_rules.jpg b/docs/modules/admin_initial_setup/assets/images/media/circ_max_fine_rules.jpg new file mode 100644 index 0000000000..f8f9a32025 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/circ_max_fine_rules.jpg differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/circ_recurring_fine_rules.jpg b/docs/modules/admin_initial_setup/assets/images/media/circ_recurring_fine_rules.jpg new file mode 100644 index 0000000000..280325e371 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/circ_recurring_fine_rules.jpg differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-1.png b/docs/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-1.png new file mode 100644 index 0000000000..14f32e49ff Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-1.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-2.jpg b/docs/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-2.jpg new file mode 100644 index 0000000000..ec154836aa Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/clear-added-content-cache-2.jpg differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/copy_locations_editor.png b/docs/modules/admin_initial_setup/assets/images/media/copy_locations_editor.png new file mode 100644 index 0000000000..18b91ad1da Binary files /dev/null and 
b/docs/modules/admin_initial_setup/assets/images/media/copy_locations_editor.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/create_match_sets.png b/docs/modules/admin_initial_setup/assets/images/media/create_match_sets.png new file mode 100644 index 0000000000..1b92a17620 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/create_match_sets.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/order_record_loading.png b/docs/modules/admin_initial_setup/assets/images/media/order_record_loading.png new file mode 100644 index 0000000000..160af6a5fd Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/order_record_loading.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/record_quality_metrics.png b/docs/modules/admin_initial_setup/assets/images/media/record_quality_metrics.png new file mode 100644 index 0000000000..fd7b80c3a8 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/record_quality_metrics.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-1_web_client.png b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-1_web_client.png new file mode 100644 index 0000000000..1f270dc3e6 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-1_web_client.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-2_web_client.png b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-2_web_client.png new file mode 100644 index 0000000000..f99a481a1f Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-2_web_client.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-3.png b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-3.png new file mode 100644 index 
0000000000..271d3c11f0 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-3.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-4_web_client.png b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-4_web_client.png new file mode 100644 index 0000000000..b69f9c4a26 Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-4_web_client.png differ diff --git a/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-5_web_client.png b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-5_web_client.png new file mode 100644 index 0000000000..fed27cebaa Binary files /dev/null and b/docs/modules/admin_initial_setup/assets/images/media/sup-permissions-5_web_client.png differ diff --git a/docs/modules/admin_initial_setup/nav.adoc b/docs/modules/admin_initial_setup/nav.adoc new file mode 100644 index 0000000000..e475ef4f4e --- /dev/null +++ b/docs/modules/admin_initial_setup/nav.adoc @@ -0,0 +1,27 @@ +* xref:admin_initial_setup:introduction.adoc[System Configuration and Customization] +** xref:admin_initial_setup:describing_your_organization.adoc[Describing your organization] +** xref:admin_initial_setup:describing_your_people.adoc[Describing your people] +** xref:admin_initial_setup:migrating_patron_data.adoc[Migrating Patron Data] +** xref:admin_initial_setup:migrating_your_data.adoc[Migrating from a legacy system] +** xref:admin_initial_setup:importing_via_staff_client.adoc[Importing materials in the staff client] +** xref:admin_initial_setup:ordering_materials.adoc[Ordering materials] +** xref:admin_initial_setup:designing_your_catalog.adoc[Designing your catalog] +** xref:admin:search_interface.adoc[Designing the patron search experience] +** xref:admin_initial_setup:borrowing_items.adoc[Borrowing items: who, what, for how long] +** xref:admin:autorenewals.adoc[Autorenewals in Evergreen] +** 
xref:admin_initial_setup:hard_due_dates.adoc[Hard due dates] +** xref:admin:template_toolkit.adoc[TPac Configuration and Customization] +** xref:admin_initial_setup:carousels.adoc[Carousels] +** xref:opac:new_skin_customizations.adoc[Creating a New Skin: the Bare Minimum] +** xref:admin:auto_suggest_search.adoc[Auto Suggest in Catalog Search] +** xref:admin:authentication_proxy.adoc[Authentication Proxy] +** xref:admin_initial_setup:KidsOPAC.adoc[Kid's OPAC Configuration] +** xref:admin:patron_address_by_zip_code.adoc[Patron Address City/State/County Pre-Populate by ZIP Code] +** xref:admin:phonelist.adoc[Phonelist.pm Module] +** xref:admin:sip_server.adoc[SIP Server] +** xref:admin:apache_rewrite_tricks.adoc[Apache Rewrite Tricks] +** xref:admin:apache_access_handler.adoc[Apache Access Handler Perl Module] +** xref:admin:ebook_api_service.adoc[ebook_api service] +** xref:admin:hold_targeter_service.adoc[hold-targeter service] +** xref:admin:backups.adoc[Backing up your Evergreen System] + diff --git a/docs/modules/admin_initial_setup/pages/KidsOPAC.adoc b/docs/modules/admin_initial_setup/pages/KidsOPAC.adoc new file mode 100644 index 0000000000..0c572e46be --- /dev/null +++ b/docs/modules/admin_initial_setup/pages/KidsOPAC.adoc @@ -0,0 +1,132 @@ += Kid's OPAC Configuration = +:toc: + +== Configuration == + +=== Apache === + +The KPAC is already included and ready to be used with new Evergreen installs, so you only need to change the Apache config +if you need to change template locations or if you want to use a different *kpac.xml* config file. The defaults for the KPAC are set +in */etc/apache2/eg_vhosts.conf*. 
+ +------------------------------------------------------------------------------ + + PerlSetVar OILSWebContextLoader "OpenILS::WWW::EGKPacLoader" + PerlSetVar KPacConfigFile "/openils/conf/kpac.xml.example" + +------------------------------------------------------------------------------ + +=== XML Configuration File === + + * The XML configuration file defines the layout of the kid's OPAC. + * It is read with each restart/reload of the Apache web server. + * The file lives by default at /openils/conf/kpac.xml.example + * There are two top-level elements: <layout> and <pages>. + * The layout defines the owning org unit and the start page, both by ID. + * At runtime, the layout is determined by the context org unit. If no + configuration is defined for the context org unit, the layout for the + closest ancestor is used. + +[source, xml] +------------------------------------------------------------------------------ +<layout unit="1" page="1"/> <!-- owning org unit and start page, by ID; attribute names are illustrative --> +------------------------------------------------------------------------------ + + * The pages section is a container for <page> elements. + * Each page defines an ID, the number of columns to display for the page, + the page name, and an icon. + +[source, xml] +------------------------------------------------------------------------------ +<page id="1" columns="3" name="Example Page" img="example.png"/> <!-- attribute names are illustrative --> +------------------------------------------------------------------------------ + + * Each page is a container of cells. + * Each cell defines + ** type (topic, search, link) + ** name + ** icon + ** content + * The content for type="topic" cells is the ID of the page this topic + jumps to. The name and img for the referenced page are used as the + display content. + +[source, xml] +------------------------------------------------------------------------------ +<cell type="topic">12</cell> +------------------------------------------------------------------------------ + + * The content for type="search" cells is the search query. The name and + img are used for the display content. 
+ +[source, xml] +------------------------------------------------------------------------------ +<cell type="search">su:piano</cell> +------------------------------------------------------------------------------ + + * The content for type="link" cells is the URL. The name and img are used + for the display content. + +[source, xml] +------------------------------------------------------------------------------ +<cell type="link">http://en.wikipedia.org/wiki/Clarinet</cell> +------------------------------------------------------------------------------ + + +=== Skin Configuration === + +The following example enables you to configure the alternate skin (Monster Skin, kpac2) for the Kids +Catalog. + +You should be familiar with how the xref:admin:template_toolkit.adoc#how_to_override_templates[Evergreen TPAC handles template folders] +before you make these changes. + +If you already have a custom template directory set up, you can copy the *Open-ILS/examples/web/templates/kpac* +files to that directory instead, and then skip any Apache config changes. 
+ +[source, bash] +------------------------------------------------------------------------------ +% cp -r Open-ILS/examples/web/css/skin/kpac2 /openils/var/web/css/skin/ +% cp -r Open-ILS/examples/web/images/kpac/* /openils/var/web/images/kpac/ #does not clobber +% mkdir /openils/var/templates_kpac2 +% cp -r Open-ILS/examples/web/templates/kpac /openils/var/templates_kpac2/ +% cp -r /openils/var/web/css/skin/default/kpac/fonts /openils/var/web/css/skin/kpac2/kpac +------------------------------------------------------------------------------ + +Then set up 443/80 vhosts for serving the alternate skin in eg.conf, something +along the lines of: + +------------------------------------------------------------------------------ +<VirtualHost *:80> + ServerName xyz.dev198.esilibrary.com:80 + DocumentRoot /openils/var/web/ + DirectoryIndex index.html index.xhtml + Include eg_vhost.conf + + #Point to a different kpac.xml config file if needed + #PerlSetVar KPacConfigFile "/openils/conf/kpac.xml.example" + PerlAddVar OILSWebTemplatePath "/openils/var/templates_kpac2" +</VirtualHost> +------------------------------------------------------------------------------ + +After updating eg.conf, syntax-check the configuration and restart Apache +(for example, `apache2ctl configtest` followed by a restart) so the new +vhost is read. + +== Considerations for Community Adoption == + +The templates for the Kid's OPAC were developed long before the TPAC was +integrated into Evergreen, and they share many of the same limitations that +were part of the TPAC. + + * Fixed width elements (divs, images, etc.), which complicates the + addition of new features and local customizations. + * Images with text, which prevents l10n/i18n. + * While the KPAC does not attempt to match the color scheme of any one + institution, it's inconsistent with the standard Evergreen color + palette. Creating an additional skin to act as the Evergreen default + may be necessary. + +== Outstanding Development (Unsponsored) == + + ** Port the XML configuration file to a DB structure, complete with a UI for + managing the various components and an upgrade path. 
+ diff --git a/docs/modules/admin_initial_setup/pages/_attributes.adoc b/docs/modules/admin_initial_setup/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/admin_initial_setup/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/admin_initial_setup/pages/borrowing_items.adoc b/docs/modules/admin_initial_setup/pages/borrowing_items.adoc new file mode 100644 index 0000000000..4ed0bc72b5 --- /dev/null +++ b/docs/modules/admin_initial_setup/pages/borrowing_items.adoc @@ -0,0 +1,243 @@ += Borrowing items: who, what, for how long = +:toc: + +Circulation policies pull together user, library, and item data to determine how +library materials circulate: which patrons from which libraries can +borrow which types of materials, for how long, and with what overdue fines. + +Individual elements of the circulation policies are configured using specific +interfaces, and should be configured before setting up the circulation +policies themselves. + +== Data elements that affect your circulation policies == + +There are a few data elements that must be considered when setting up your +circulation policies. + +=== Copy data === + +Several fields set via the holdings editor are commonly used to affect the +circulation of an item. + +* *Circulation modifier* - Circulation modifiers are fields used to control +circulation policies on specific groups of items. They can be added to items +during the cataloging process. New circulation modifiers can be created in the +staff client by navigating to *Administration -> Server Administration -> Circulation +Modifiers*. +* *Circulate?* flag - The circulate? flag in the holdings editor can be set to False +to prevent an item from circulating. +* *Reference?* flag - The reference? flag in the holdings editor can also be used as +a data element in circulation policies. 
+ +=== Shelving location data === + +* To get to the Shelving Locations Editor, navigate to *Administration -> +Local Administration -> Shelving Locations Editor*. +* Set _OPAC Visible_ to "No" to hide all items in a shelving location from the +public catalog. (You can also hide individual items using the Copy Editor.) +* Set _Hold Verify_ to "Yes" if you want staff to always confirm before a hold +is captured when an item is checked in. +* Set _Checkin Alert_ to "Yes" to allow routing alerts to display when items +are checked in. +* Set _Holdable_ to "No" to prevent items in an entire shelving location from +being placed on hold. +* Set _Circulate_ to "No" to disallow circulating items in an entire shelving +location. +* If you delete a shelving location, it will be removed from display in the staff +client and the catalog, but it will remain in the database. This allows you to +treat a shelving location as deleted without losing statistical information for +circulations related to that shelving location. + +image::media/copy_locations_editor.png[screenshot of Shelving Location Editor] + +* Shelving locations can also be used as a data element in circulation policies. + +=== User data === + +Finally, several characteristics of specific patrons can affect circulation +policies. You can modify these characteristics in a patron's record (*Search -> +Search for Patrons*, select a patron, choose *Edit* tab) or when registering a +new patron (*Circulation -> Register Patron*). + +* The user permission group is also commonly used as a data element in +circulation policies. +* Other user data that can be used for circulation policies includes the +*juvenile* flag in the user record. + +== Circulation Rules == + +*Loan duration* describes the length of time for a checkout. You can also +identify the maximum number of renewals that can be placed on an item. 
+ +You can find Circulation Duration Rules by navigating to *Administration +-> Server Administration -> Circulation Duration Rules*. + +image::media/circ_duration_rules.jpg[] + +*Recurring fine* describes the amount assessed for daily and hourly fines as +well as fines set for other regular intervals. You can also identify any grace +periods that should be applied before the fine starts accruing. + +You can find Recurring Fine Rules by navigating to *Administration -> Server +Administration -> Circulation Recurring Fine Rules*. + +image::media/circ_recurring_fine_rules.jpg[] + +*Max fine* describes the maximum amount of fines that will be assessed for a +specific circulation. Set the *Is Percent* field to True if the maximum fine +should be a percentage of the item's price. + +You can find Circ Max Fine Rules by navigating to *Administration -> Server +Administration -> Circulation Max Fine Rules*. + +image::media/circ_max_fine_rules.jpg[] + +These rules generally cause the most variation between organizational units. + +Loan duration and recurring fine rate are designed with three levels: short, normal, +and extended loan duration, and low, normal, and high recurring fine rate. These +values are applied to specific items when item records are created. + +When naming these rules, give each rule a name that clearly identifies what it +does. This will make it easier to select the correct rule when creating your +circ policies. + +=== Circulation Limit Sets === + +Circulation Limit Sets allow you to limit the maximum number of items for +different types of materials that a patron can check out at one time. Evergreen +supports creating these limits based on circulation modifiers, shelving locations, +or circulation limit groups, which allow you to create limits based on MARC data. +The instructions below explain how to create limits based on circulation +modifiers. 
+ +* Configure the circulation limit sets by selecting *Administration -> Local +Administration -> Circulation Limit Sets*. +* *Items Out* - The maximum number of items circulated to a patron at the same +time. +* *Min Depth* - Enter the minimum depth in the org tree at which +Evergreen will consider org units as valid circulation libraries for counting items out. +The min depth is based on org unit type depths. For example, if you want the +items in all of the circulating libraries in your consortium to be eligible for +restriction by this limit set when it is applied to a circulation policy, then +enter a zero (0) in this field. +* *Global* - Check the box adjacent to Global if you want all of the org +units in your consortium to be restricted by this limit set when it is applied +to a circulation policy. Otherwise, Evergreen will only apply the limit to the +direct ancestors and descendants of the owning library. +* *Linked Limit Groups* - Add any circulation modifiers, shelving locations, or circ +limit groups that should be part of this limit set. + +*Example* +Your library (BR1) allows patrons to check out up to 5 videos at one time. This +checkout limit should apply when your library's videos are checked out at any +library in the consortium. Items with DVD, BLURAY, and VHS circ modifiers should +be included in this maximum checkout count. + +To create this limit set, you would enter 5 in the *Items Out* field and 0 in the +*Min Depth* field, and select the *Global* flag. Add the DVD, BLURAY, and VHS circ +modifiers to the limit set. + +== Creating Circulation Policies == + +Once you have identified your data elements that will drive circulation policies +and have created your circulation rules, you are ready to begin creating your +circulation policies. + +If you are managing a small number of rules, you can create and manage +circulation policies in the staff client via *Administration -> Local Administration -> +Circulation Policies*. 
However, if you are managing a large number of policies, +it is easier to create and locate rules directly in the database by updating +*config.circ_matrix_matchpoint*. + +The *config.circ_matrix_matchpoint* table is central to the configuration of +circulation parameters. It collects the main set of data used to determine what +rules apply to any given circulation. It is useful to think of its +columns in terms of 'match' columns, which are used to match the +particulars of a given circulation transaction, and 'result' columns, which +return the various parameters that are applied to the matching transaction. + +* Circulation policies by checkout library or owning library? + - If your policies should follow the rules of the library that checks out the +item, select the checkout library as the *Org Unit (org_unit)*. + - If your policies should follow the rules of the library that owns the item, +select the consortium as the *Org Unit (org_unit)* and select the owning library +as the *Item Circ Lib (copy_circ_lib)*. +* Renewal policies can be created by setting *Renewals? (is_renewal)* to True. +* You can apply the duration rules, recurring fine rules, maximum fine rules, +and circulation limit sets created in the sections above when creating the circulation +policy. + +=== Best practices for creating policies === + +* Start by replacing the default consortium-level circ policy with one that +contains a majority of your libraries' duration, recurring fine, and max fine +rules. This first rule will serve as a default for all materials and permission +groups. +* If many libraries in your consortium have rules that differ from the default +for particular materials or people, set a consortium-wide policy for that circ +modifier or that permission group. +* After setting these consortium defaults, if a library has a circulation rule +that differs from the default, you can then create a rule for that library. 
You +only need to change the parameters that are different from the default +parameters. The rule will inherit the values for the other parameters from that +default consortium rule. +* Try to avoid unnecessary repetition. +* Try to get as much agreement as possible among the libraries in your +consortium. + +*Example 1* + +image::media/circ_example1.png[] + +In this example, the consortium has decided on a 21_day_2_renew loan rule for +general materials (books, etc.). Most members do not charge overdue fines. +System 1 charges 25 cents per day to a maximum of $3.00, but otherwise uses the +default circulation duration. + +*Example 2* + +image::media/circ_example2.png[] + +This example includes a basic set of fields and creates a situation where items +with a circ modifier of "book" or "music" can be checked out, but "dvd" items +will not circulate. The associated rules would apply during checkouts. + +*Example 3* + +image::media/circ_example3.png[] + +This example builds on the earlier example and adds some more complicated +options. + +It is still true that "book" and "music" items can be checked out, while "dvd" +is not circulated. However, now we have added new rules that state that "Adult" +patrons of "SYS1" can circulate "dvd" items. + +=== Settings Relevant to Circulation === + +The following circulation settings, available via *Administration +-> Local Administration -> Library Settings Editor*, can +also affect your circulation duration, renewals, and fine policy. + +* *Auto-Extend Grace Periods* - When enabled, grace periods will auto-extend. +By default this happens only when the grace period is a full day or more and ends on a +closed date, though other options can alter this. +* *Auto-Extending Grace Periods extend for all closed dates* - If enabled and +Grace Periods auto-extending is turned on, grace periods will extend past all +closed dates they intersect, within hard-coded limits. 
+* *Auto-Extending Grace Periods include trailing closed dates* - If enabled and +Grace Periods auto-extending is turned on, grace periods will include closed +dates that directly follow the last day of the grace period. +* *Checkout auto renew age* - When an item has been checked out for at least +this amount of time, an attempt to check out the item to the patron that it is +already checked out to will simply renew the circulation. +* *Cap Max Fine at Item Price* - This prevents the system from charging more +than the item price in overdue fines. +* *Lost Item Billing: New Min/Max Price Settings* - Patrons will be billed +at least the Min Price and at most the Max Price, even if the item's price +is outside that range. To set a fixed price for all lost items, set min and +max to the same amount. +* *Charge fines on overdue circulations when closed* - Normally, fines are not +charged when a library is closed. When set to True, fines will be charged during +scheduled closings and normal weekly closed days. diff --git a/docs/modules/admin_initial_setup/pages/carousels.adoc b/docs/modules/admin_initial_setup/pages/carousels.adoc new file mode 100644 index 0000000000..26351a0d4d --- /dev/null +++ b/docs/modules/admin_initial_setup/pages/carousels.adoc @@ -0,0 +1,256 @@ += Adding Carousels to Your Public Catalog = +:toc: + +This feature fully integrates the creation and management of book carousels into Evergreen, allowing for the display of book cover images on a library’s public catalog home page. Carousels may be animated or static. They can be manually maintained by staff or automatically maintained by Evergreen. Titles can appear in carousels based on newly cataloged items, recent returns, popularity, etc. To appear in a carousel, titles must have copies that are visible in the public catalog, circulating, and holdable. Serial titles cannot be displayed in carousels. 
+ +image::carousel1.png[Book carousel on public catalog front screen] + +There are three administrative interfaces used to create and manage carousels and their components: + +* <<carousel_types,Carousel Types>> - used to define different types of carousels +* <<carousel_definitions,Carousels>> - used to create and manage specific carousel definitions +* <<carousel_library_mapping,Carousel Library Mapping>> - used to manage which libraries will display specific carousels, as well as the default display order on a library’s public catalog home page + +Each of these interfaces is detailed below. + +[[carousel_types]] +== CAROUSEL TYPES == + +The Carousel Types administrative interface is used to create, edit, or delete carousel types. Carousel Types define the attributes of a carousel, such as whether it is automatically managed and how it is filtered. A carousel must be associated with a carousel type to function properly. + +There are five stock Carousel Types: + +* *Newly Cataloged Items* - titles appear automatically based on the active date of the title’s copies +* *Recently Returned Items* - titles appear automatically based on the most recently circulated copy’s check-in scan date and time +* *Top Circulated Titles* - titles appear automatically based on the most circulated copies in the Item Libraries identified in the carousel definition; titles are chosen based on the number of action.circulation rows created during an interval specified in the carousel definition, including both circulations and renewals +* *Newest Items by Shelving Location* - titles appear automatically based on the active date and shelving location of the title’s copies +* *Manual* - titles are added and managed manually by library staff + +Additional types can be created in the Carousel Types interface. Types can also be modified or deleted. Access the interface by going to Administration > Server Administration > Carousel Types. + +The interface displays the list of carousel types in a grid format. 
The grid displays the Carousel Type ID, name of the carousel type, and the characteristics of each type by default. The Actions Menu is used to edit or delete a carousel type.
+
+image::carousel2.png[Carousel Types configuration screen]
+
+=== Attributes of Carousel Types ===
+
+Each Carousel Type defines attributes used to add titles to the carousels associated with the type. Filters apply only to automatically managed carousels.
+
+* *Automatically Managed* - when set to true, Evergreen uses a cron job to add titles to a carousel automatically based on a set of criteria established in the carousel definition. When set to false, library staff must enter the contents of a carousel manually.
+* *Filter by Age* - when set to true, the type includes or excludes titles based on the age of their attached items
+* *Filter by Item Owning Library* - when set to true, the type includes or excludes titles based on the owning organizational unit of their attached items
+* *Filter by Item Location* - when set to true, the type includes or excludes titles based on the shelving locations of their attached items
+
+=== Creating a Carousel Type ===
+
+. Go to Administration > Server Administration > Carousel Types
+. Select the *New Carousel Type* button
+. Enter a name for the carousel type
+. Use the checkboxes to apply filtering characteristics to the carousel type; filters for age, item owning library, and location are applied only to automatically managed carousels
+ .. Automatically Managed?
+ .. Filter by Age?
+ .. Filter by Item Owning Library?
+ .. Filter by Item Location?
+
+image::carousel3.png[Carousel Types Editor screen]
+
+=== Editing a Carousel Type ===
+
+Users can rename a carousel type or change the characteristics of existing types.
+
+. Go to Administration > Server Administration > Carousel Types
+. Select the type you wish to edit with the checkbox at the beginning of the row for that type
+. 
Select the Actions Button (or right-click on the type’s row) and choose Edit Type
+
+=== Deleting a Carousel Type ===
+
+Carousel types can be deleted with the Actions Menu.
+
+. Go to Administration > Server Administration > Carousel Types
+. Select the type you wish to delete with the checkbox at the beginning of the row for that type
+. Select the Actions button (or right-click on the type’s row) and choose Delete Type; carousel types cannot be deleted if there are carousels attached
+
+[[carousel_definitions]]
+== CAROUSEL DEFINITIONS ==
+
+The Carousels administration page is used to define the characteristics of the carousel, such as the carousel type, which libraries will be able to display the carousel, and which shelving locations should be used to populate the carousel.
+
+The Carousels administration page is accessed through Administration > Server Administration > Carousels. (Please note that in the community release, this page will eventually move to Local Administration.) The interface displays existing carousels in a grid format. The grid can be filtered by organizational unit, based on ownership. The filter may include ancestor or descendant organizational units, depending on the scope chosen. The columns displayed correspond to attributes of the carousel. The following are displayed by default: Carousel ID, Carousel Type, Owner, Name, Last Refresh Time, Active, Maximum Items.
+
+image::carousel4.png[Carousels configuration screen]
+
+Additional columns may be added to the display with the column picker, including the login of the creator and/or editor, the carousel’s creation or edit time, age limit, item libraries, shelving locations, or associated record bucket. 
+
+=== Attributes of a Carousel Definition ===
+
+* *Carousel ID* - unique identifier assigned by Evergreen when the carousel is created
+* *Carousel Type* - identifies the carousel type associated with the carousel
+* *Owner* - identifies the carousel’s owning library organizational unit
+* *Name* - the name or label of the carousel
+* *Bucket* - once the carousel is created, this field displays a link to the carousel’s corresponding record bucket
+* *Age Limit* - defines the age limit for the items (titles) that are displayed in the carousel
+* *Item Libraries* - identifies which libraries should be used for locating items/titles to add to the carousel; this attribute does not check organizational unit inheritance, so include all libraries that should be used
+* *Shelving Locations* - sets which shelving locations can/should be used to find titles for the carousel
+* *Last Refresh Time* - identifies the last date when the carousel was refreshed, either automatically or manually. This is currently a read-only value.
+* *Is Active* - when set to true, the carousel is visible to the public catalog; automatically-maintained carousels are refreshed regularly (inactive automatic carousels are not refreshed)
+* *Maximum Items* - defines the maximum number of titles that should appear in the carousel; this attribute is enforced only for automatically maintained carousels
+
+
+=== Creating a Carousel from the Carousels Administration Page ===
+
+. Go to Administration > Server Administration > Carousels
+. Select the *New Carousels* button
+. A popup will open where you will enter information about the carousel
+. Choose the Carousel Type from the drop-down menu
+. Choose the Owning Library from the drop-down
+. Enter the Name of the carousel
+. Enter the Age limit - this field accepts values such as “6 mons or months,” “21 days,” etc.
+. Choose the Item Libraries - this identifies the library from which items are pulled to include in the carousel
+ .. Click the field. 
A list of available organizational units will appear.
+ .. Select the organizational unit(s)
+ ... The owning and circulating libraries must be included on this list for titles/items to appear in the carousel. For libraries with items owned at one organizational unit (e.g., the library system), but circulating at a different organizational unit (e.g., a branch), both would need to be included in the list.
+ .. Click Add
+. Shelving Locations - this identifies the shelving locations from which items are pulled to include in the carousel. Please note that this field is not applicable when creating a carousel of the Newly Cataloged carousel type. For creating a carousel of newly cataloged items with shelving location filters, use the Newest Items by Shelving Location type instead.
+ .. Click the field. A list of available shelving locations will appear.
+ .. Select the shelving location - the library that “owns” the shelving location does not have to be included in the list of Item Libraries
+ .. Click Add
+. Last Refresh Time - not used while creating carousels - displays the date/time when the carousel was most recently refreshed
+. Is Active - set to true for the carousel to be visible to the public catalog
+. Enter the Maximum Number of titles to display in the carousel
+. Click Save
+
+image::carousel5.png[Carousel editor screen]
+
+=== Carousels and Record Buckets ===
+
+When a carousel is created, a corresponding record bucket is also created. The bucket is owned by the staff user who created the carousel; however, access to the carousel is controlled by the carousel’s owning library. The bucket is removed if the carousel is deleted.
+
+=== View a Carousel Bucket from Record Buckets ===
+
+A record bucket linked to a carousel can be displayed in the Record Bucket interface through the Shared Bucket by ID action.
+
+. Go to Cataloging > Record Buckets
+. Select the Buckets button
+. 
Enter the bucket number of the carousel’s bucket; this can be found in the “Bucket” column of the grid on the Carousels administration page.
+. The contents of the carousel and bucket will be displayed
+
+Users can add or remove records from the bucket. If the associated carousel is automatically maintained, any changes to the bucket’s contents are subject to being overwritten by the next automatic update. Users are warned of this when making changes to the bucket contents.
+
+=== Create a Carousel from a Record Bucket ===
+
+A carousel can be created from a record bucket.
+
+. Go to Cataloging > Record Buckets
+. The Bucket View tab opens. Select the Buckets button and choose one of the existing buckets to open. The list of titles in the bucket will display on the screen.
+. Select the Buckets button and choose Create Carousel from Bucket
+
+image::carousel6.png[Record Bucket Actions button - Create Carousel from Bucket]
+
+TIP: The Create Carousel from Bucket option is visible in both Record Query and Pending Buckets; however, initiating the creation of a carousel from either of these two tabs creates an empty bucket only; it will not pull titles from either tab into the carousel.
+
+=== Manually Adding Contents to a Carousel from Record Details Page ===
+
+Titles can be added to a manually maintained carousel through the record details page.
+
+. Go to the details page for a title record
+. Select the Other Actions button
+. Choose Add to Carousel
++
+image::carousel7.png[Actions button on Record Summary page - Add to Carousel]
++
+. A drop-down with a list of manually maintained carousels that have been shared to at least one of the user’s working locations will appear
+. Choose the carousel from the list
+. 
Click Add to Selected Carousel
+
+TIP: The Add to Carousel menu item is disabled if no qualifying carousels are available
+
+[[carousel_mapping]]
+== CAROUSEL LIBRARY MAPPING ==
+
+The Carousel Library Mapping administration page is used to manage which libraries will display specific carousels, as well as the default display order on a library’s public catalog.
+
+The visibility of a carousel at a given organizational unit is not automatically inherited by the descendants of that unit. The carousel’s owning organizational unit is automatically added to the list of display organizational units.
+
+The interface is accessed by going to Administration > Server Administration > Carousel Library Mapping. (Please note that in the community release, this page will eventually move to Local Administration.) The interface produces a grid display with a list of the current mappings. The grid can be filtered by organizational unit, based on ownership. The filter may include ancestor or descendant organizational units, depending on the scope chosen.
+
+WARNING: If a carousel is deleted, its mappings are deleted.
+
+=== Attributes of Carousel Library Mapping ===
+
+* *ID* - this is a unique identifier automatically generated by the database
+* *Carousel* - this is the carousel affected by the mapping
+* *Override Name* - this creates a name for automatically managed carousels that will be used in the public catalog display of the carousel instead of the carousel’s name
+* *Library* - this is the organizational unit associated with the particular mapping; excludes descendant units
+* *Sequence Number* - this is the order in which carousels will be displayed, starting with “0” (Example: Carousel 0 at consortial level will display first. Carousel 1 set at the consortial level will appear just below Carousel 0.)
+
+=== Create a New Carousel Mapping ===
+
+. Go to Administration > Server Administration > Carousel Library Mapping
+. Select *New Carousels Visible at Library*
+. 
Choose the Carousel you wish to map from the Carousel drop-down menu
+. If you want the title of the carousel on the public catalog home screen to be different from the carousel’s name, enter your desired name in the Override Name field
+. Click on the Library field to choose the library organizational unit on whose public catalog home screen the carousel will appear
+. Enter a number in the Sequence Number field to indicate in which order the carousel should appear on the library public catalog home screen. “0” is the top level. “1” is the subsequent level, etc.
+
+image::carousel8.png[Carousel mapping editor screen]
+
+
+== CAROUSELS - OTHER ADMINISTRATIVE FEATURES ==
+
+=== New Staff Permissions ===
+
+This feature includes the following new staff permissions:
+
+* ADMIN_CAROUSEL_TYPES - allows users to create, edit, or delete carousel types
+* ADMIN_CAROUSELS - allows users to create, edit, or delete carousels
+* REFRESH_CAROUSEL - allows users to perform a manual refresh of carousels
+
+=== New Database Tables ===
+
+A new table was added to the database to specify the carousel and how it is to be populated, including the name, owning library, details about the most recent refresh, and a link to the Record Bucket and its contents.
+
+Another new table defines carousel types and includes the name, whether the carousel is manually or automatically maintained, and a link to the QStore query specifying the foundation database query used to populate the carousel.
+
+A third new table defines the set of organizational units at which the carousel is visible and the display order in which carousels should be listed at each organizational unit.
+
+=== OPAC Templates ===
+
+Carousels display on the public catalog home page by default. Administrators can modify the public catalog templates to display carousels where desired.
+
+A new Template Toolkit macro called “carousels” allows the Evergreen administrator to inject the contents of one or more carousels into any point in the OPAC. 
The macro will accept the following parameters:
+
+* carousel_id
+* dynamic (Boolean, default value false)
+* image_size (small, medium, or large)
+* width (number of titles to display on a “pane” of the carousel)
+* animated (Boolean to specify whether the carousel should automatically cycle through its panes)
+* animation_interval (the interval (in seconds) to wait before advancing to the next pane)
+
+If the carousel_id parameter is supplied, the carousel with that ID will be displayed. If carousel_id is not supplied, all carousels visible to the public catalog's physical_loc organizational unit are displayed.
+
+The dynamic parameter controls whether the entire contents of the carousel should be written in HTML (dynamic set to false) or if the contents of the carousel should be asynchronously fetched using JavaScript.
+
+A set of CSS classes for the carousels and their contents is exposed in style.css.tt2. Lightweight JavaScript, based either on jQuery or native JavaScript, is used for navigating the carousels. The carousels are responsive.
+
+=== Accessibility Features ===
+
+* Users can advance through the carousel using only a keyboard
+* Users can navigate to a title from the carousel using only a keyboard
+* Users can pause animated carousels
+* Changes in the state of the carousel are announced to screen readers.
+
+=== OpenSRF ===
+
+Several Evergreen APIs are used to support the following operations:
+
+* refreshing the contents of an individual carousel
+* refreshing the contents of all automatically-maintained carousels that are overdue for refresh
+* retrieving the names and contents of a carousel or all visible ones
+* creating a carousel by copying an existing record bucket
+
+The retrieval APIs allow for anonymous access to permit Evergreen admins to create alternative implementations of the carousel display or to share the carousels with other systems. 
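+
+For example, an administrator can trigger the refresh of all overdue carousels by hand from an srfsh session. This is an illustrative sketch only: the method name is the one invoked by the example cron job, and the exact srfsh request syntax may vary between OpenSRF versions.
+
+[source,bash]
+----
+# Run as the opensrf user; starts an interactive srfsh session
+$ srfsh
+srfsh# request open-ils.storage open-ils.storage.carousel.refresh_all
+----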
+
+=== Cron Job ===
+
+The carousels feature includes a cron job, added to the example crontab, that performs automatic carousel refreshes. It is implemented as an srfsh script that invokes open-ils.storage.carousel.refresh_all.
+
diff --git a/docs/modules/admin_initial_setup/pages/describing_your_organization.adoc b/docs/modules/admin_initial_setup/pages/describing_your_organization.adoc
new file mode 100644
index 0000000000..444dccc4e3
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/describing_your_organization.adoc
@@ -0,0 +1,99 @@
+= Describing your organization =
+:toc:
+
+Your Evergreen system is almost ready to go. You'll need to add each of the
+libraries that will be using your Evergreen system. If you're doing this for a
+consortium, you'll have to add your consortium as a whole, and all the
+libraries and branches that are members of the consortium. In this chapter,
+we'll talk about how to get the Evergreen system to see all your libraries, how
+to set each one up, and how to edit all the details of each one.
+
+== Organization Unit Types ==
+
+The term _Organization Unit Types_ refers to levels in the hierarchy of your
+library system(s). Examples could include: All-Encompassing Consortium, Library
+System, Branch, Bookmobile, Sub-Branch, etc.
+
+You can add or remove organizational unit types, and rename them as needed to
+match the organizational hierarchy of the libraries using your installation of
+Evergreen. Organizational unit types should never have proper names since they
+are only generic types.
+
+When working with configuration, settings, and permissions, it is very
+important to be careful of the Organization Unit *Context Location* - this is the
+organizational unit to which the configuration settings are being applied. If,
+for example, a setting is applied at the Consortium context location, all child
+units will inherit that setting. 
If a specific branch location is selected,
+only that branch and its child units will have the setting applied. The levels
+of the hierarchy to which settings can be applied are often referred to in
+terms of "depth" in various configuration interfaces. In a typical hierarchy,
+the consortium has a depth of 0, the system is 1, the branch is 2, and any
+bookmobiles or sub-branches are 3.
+
+=== Create and edit Organization Unit Types ===
+
+. Open *Administration > Server Administration > Organization Types*.
+. In the left panel, expand the *Organization Unit Types* hierarchy.
+. Click on an organization type to edit the existing type or to add a new
+ organization unit.
+. A form opens in the right panel, displaying the data for the selected
+ organization unit.
+. Edit the fields as required and click *Save*.
+
+To create a new dependent organization unit, click *New Child*. The new child
+organization unit will appear in the left panel list below the parent.
+Highlight the new unit, edit the data as needed, and click *Save*.
+
+== Organizational Units ==
+
+'Organizational Units' are the specific instances of the organization unit types
+that make up your library's hierarchy. These will have distinctive proper names
+such as Main Street Branch or Townsville Campus.
+
+=== Remove or edit default Organizational Units ===
+
+After installing the Evergreen software, the default CONS, SYS1, BR1, etc.,
+organizational units remain. These must be removed or edited to reflect actual
+library entities.
+
+=== Create and edit Organizational Units ===
+
+. Open *Administration > Server Administration > Organizational Units*.
+. In the left panel, expand the Organizational Units hierarchy and select a
+ unit.
+. A form opens in the right panel, displaying the data for the selected
+ organizational unit.
+. 
To edit the existing, default organizational unit, enter system- or
+ library-specific data in the form; complete all three tabs: Main Settings,
+ Hours of Operation, Addresses.
+. Click *Save*.
+
+To create a new dependent organizational unit, click *New Child*. The new child
+will appear in the hierarchy list below the parent unit. Click on the new unit,
+edit the data, and click *Save*.
+
+=== Organizational Unit data ===
+
+The *Addresses* tab allows you to enter library contact information. The
+library phone number, email address, and addresses are used in patron email
+notifications, hold slips, and transit slips. The Addresses tab is broken out
+into four address types: Physical Address, Holds Address, Mailing Address,
+and ILL Address.
+
+The *Hours of Operation* tab is where you enter regular, weekly hours. Holiday
+and other closures are set in the *Closed Dates Editor*. Hours of operation and
+closed dates impact due dates and fine accrual.
+
+=== After Changing Organization Unit Data ===
+
+After you change Org Unit data, you must run the autogen.sh script.
+This script updates the Evergreen organization tree and fieldmapper IDL.
+You will get unpredictable results if you don't run this after making changes.
+
+Run this script as the *opensrf* Linux account.
+
+[source, bash]
+------------------------------------------------------------------------------
+autogen.sh
+------------------------------------------------------------------------------
+
diff --git a/docs/modules/admin_initial_setup/pages/describing_your_people.adoc b/docs/modules/admin_initial_setup/pages/describing_your_people.adoc
new file mode 100644
index 0000000000..2d8b476bc0
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/describing_your_people.adoc
@@ -0,0 +1,368 @@
+= Describing your people =
+:toc:
+
+Many different members of your staff will use your Evergreen system to perform
+the wide variety of tasks required of the library. 
+
+When the Evergreen installation was completed, a number of permission groups
+should have been automatically created. These permission groups are:
+
+* Users
+* Patrons
+* Staff
+* Catalogers
+* Circulators
+* Acquisitions
+* Acquisitions Administrator
+* Cataloging Administrator
+* Circulation Administrator
+* Local Administrator
+* Serials
+* System Administrator
+* Global Administrator
+* Data Review
+* Volunteers
+
+Each of these permission groups has a different set of permissions connected to
+it that allows its members to do different things with the Evergreen system.
+Some of the permissions are the same between groups; some are different. These
+permissions are typically tied to one or more working locations (sometimes
+referred to as working organizational units or work OUs), which affect where a
+particular user can exercise the permissions they have been granted.
+
+== Setting the staff user's working location ==
+To grant a working location to a staff user in the staff client:
+
+. Search for the patron. Select *Search > Search for Patrons* from the top menu.
+. When you retrieve the correct patron record, select *Other > User Permission
+ Editor* from the upper right corner. The permissions associated with this
+ account appear in the right side of the client, with the *Working Location*
+ list at the top of the screen.
+. The *Working Location* list displays the Organizational Units in your
+ consortium. Select the check box for each Organization Unit where this user
+ needs working permissions. Clear any other check boxes for Organization Units
+ where the user no longer requires working permissions.
+. Scroll all the way to the bottom of the page and click *Save*. This user
+ account is now ready to be used at your library.
+
+As you scroll down the page you will come to the *Permissions* list. These are
+the permissions that are given through the *Permission Group* that you assigned
+to this user. 
Depending on your own permissions, you may also have the ability +to grant individual permissions directly to this user. + +== Comparing approaches for managing permissions == +The Evergreen community uses two different approaches to deal with managing +permissions for users: + +* *Staff Client* ++ +Evergreen libraries that are most comfortable using the staff client tend to +manage permissions by creating different profiles for each type of user. When +you create a new user, the profile you assign to the user determines their +basic set of permissions. This approach requires many permission groups that +contain overlapping sets of permissions: for example, you might need to create +a _Student Circulator_ group and a _Student Cataloger_ group. Then if a new +employee needs to perform both of these roles, you need to create a third +_Student Cataloger / Circulator_ group representing the set of all of the +permissions of the first two groups. ++ +The advantage to this approach is that you can maintain the permissions +entirely within the staff client; a drawback to this approach is that it can be +challenging to remember to add a new permission to all of the groups. Another +drawback of this approach is that the user profile is also used to determine +circulation and hold rules, so the complexity of your circulation and hold +rules might increase significantly. ++ +* *Database Access* ++ +Evergreen libraries that are comfortable manipulating the database directly +tend to manage permissions by creating permission groups that reflect discrete +roles within a library. At the database level, you can make a user belong to +many different permission groups, and that can simplify your permission +management efforts. 
For example, if you create a _Student Circulator_ group and
+a _Student Cataloger_ group, and a new employee needs to perform both of these
+roles, you can simply assign them to both of the groups; you do not need to
+create an entirely new permission group in this case. An advantage of this
+approach is that the user profile can represent only the user's borrowing
+category and requires only the basic _Patrons_ permissions, which can simplify
+your circulation and hold rules.
+
+Permissions and profiles are not carved in stone. As the system administrator,
+you can change them as needed. You may set and alter the permissions for each
+permission group in line with what your library, or possibly your consortium,
+defines as the appropriate needs for each function in the library.
+
+== Managing permissions in the staff client ==
+In this section, we'll show you in the staff client:
+
+* where to find the available permissions
+* where to find the existing permission groups
+* how to see the permissions associated with each group
+* how to add or remove permissions from a group
+
+We also provide an appendix with a listing of suggested minimum permissions for
+some essential groups. You can compare the existing permissions with these
+suggested permissions and, if any are missing, you will know how to add them.
+
+=== Where to find existing permissions and what they mean ===
+In the staff client, in the upper right corner of the screen, click on
+*Administration > Server Administration > Permissions*.
+
+The list of available permissions will appear on screen and you can scroll down
+through them to see permissions that are already available in your default
+installation of Evergreen.
+
+There are over 500 permissions in the permission list. They appear in two
+columns: *Code* and *Description*. Code is the name of the permission as it
+appears in the Evergreen database. Description is a brief note on what the
+permission allows. 
All of the most common permissions have easily +understandable descriptions. + +=== Where to find existing Permission Groups === +In the staff client, in the upper right corner of the screen, navigate to +*Administration > Server Administration > Permission Groups*. + +Two panes will open on your screen. The left pane provides a tree view of +existing Permission Groups. The right pane contains two tabs: Group +Configuration and Group Permissions. + +In the left pane, you will find a listing of the existing Permission Groups +which were installed by default. Click on the + sign next to any folder to +expand the tree and see the groups underneath it. You should see the Permission +Groups that were listed at the beginning of this chapter. If you do not and you +need them, you will have to create them. + +=== Adding or removing permissions from a Permission Group === +First, we will remove a permission from the Staff group. + +. From the list of Permission Groups, click on *Staff*. +. In the right pane, click on the *Group Permissions* tab. You will now see a + list of permissions that this group has. +. From the list, choose *CREATE_CONTAINER*. This will now be highlighted. +. Click the *Delete Selected* button. CREATE_CONTAINER will be deleted from the + list. The system will not ask for a confirmation. If you delete something by + accident, you will have to add it back. +. Click the *Save Changes* button. + +You can select a group of individual items by holding down the _Ctrl_ key and +clicking on them. You can select a list of items by clicking on the first item, +holding down the _Shift_ key, and clicking on the last item in the list that +you want to select. + +Now, we will add the permission we just removed back to the Staff group. + +. From the list of Permission Groups, click on *Staff*. +. In the right pane, click on the *Group Permissions* tab. +. Click on the *New Mapping* button. The permission mapping dialog box will + appear. +. 
From the Permission drop-down list, choose *CREATE_CONTAINER*.
+. From the Depth drop-down list, choose *Consortium*.
+. Click the checkbox for *Grantable*.
+. Click the *Add Mapping* button. The new permission will now appear in the
+ Group Permissions window.
+. Click the *Save Changes* button.
+
+If you have saved your changes and you don't see them, you may have to click
+the Reload button in the upper left side of the staff client screen.
+
+== Managing role-based permission groups in the staff client ==
+
+Main permission groups are granted in the staff client by editing the patron
+record and setting the Main (Profile) Permission Group field. Additional
+permission groups can be granted using secondary permission groups.
+
+[[secondaryperms]]
+=== Secondary Group Permissions ===
+
+The _Secondary Groups_ button enables supplemental permission groups to be
+added to staff accounts. The *CREATE_USER_GROUP_LINK* and
+*REMOVE_USER_GROUP_LINK* permissions are required to display and use this
+feature.
+
+In general, when creating a secondary permission group, do not grant the
+permission to log in to Evergreen.
+
+==== Granting Secondary Permissions Groups ====
+
+
+. Open the account of the user you wish to grant a secondary permission group to.
+. Click _Edit_.
+. Click _Secondary Groups_, located to the right of the _Main (Profile) Permission Group_.
++
+image::media/sup-permissions-1_web_client.png[Secondary Permissions Group]
++
+. From the drop-down menu, select one of the secondary permission groups.
++
+image::media/sup-permissions-2_web_client.png[Secondary Permission Group List]
++
+. Click _Add_.
+. Click _Apply Changes_.
++
+image::media/sup-permissions-3.png[Secondary Permission Group Save]
++
+. Click _Save_ in the top right hand corner of the _Edit Screen_ to save the user's account.
+
+
+==== Removing Secondary Group Permissions ====
+. Open the account of the user you wish to remove the secondary permission group from.
+. Click _Edit_.
+. 
Click _Secondary Groups_, located to the right of the _Main (Profile) Permission Group_.
++
+image::media/sup-permissions-1_web_client.png[Secondary Permissions Group]
++
+. Click _Delete_ beside the permission group you would like to remove.
++
+image::media/sup-permissions-4_web_client.png[Secondary Permissions Group Delete]
++
+. Click _Apply Changes_.
++
+image::media/sup-permissions-5_web_client.png[Secondary Permissions Group Save]
++
+. Click _Save_ in the top right hand corner of the _Edit Screen_ to save the user's account.
+
+== Managing role-based permission groups in the database ==
+While the ability to assign a user to multiple permission groups has existed in
+Evergreen for years, a staff client interface is not currently available to
+facilitate the work of the Evergreen administrator. However, if you or members
+of your team are comfortable working directly with the Evergreen database, you
+can use this approach to separate the borrowing profile of your users from the
+permissions that you grant to staff, while minimizing the number of overlapping
+permissions that you need to manage for a set of permission groups that would
+otherwise multiply exponentially to represent all possible combinations of
+staff roles.
+
+In the following example, we create three new groups:
+
+* a _Student_ group used to determine borrowing privileges
+* a _Student Cataloger_ group representing a limited set of cataloging
+ permissions appropriate for students
+* a _Student Circulator_ group representing a limited set of circulation
+ permissions appropriate for students
+
+Then we add three new users to our system: one who needs to perform some
+cataloging duties as a student; one who needs to perform some circulation
+duties as a student; and one who needs to perform both cataloging and
+circulation duties. This section demonstrates how to add these permissions to
+the users at the database level. 
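+
+Before adding rows, it can be helpful to inspect the existing group tree. The
+following query is an illustrative sketch; it assumes the stock Evergreen
+schema, in which each _permission.grp_tree_ row references its parent group
+through the _parent_ column:
+
+[source,sql]
+------------------------------------------------------------------------------
+-- List the existing permission groups alongside their parent groups
+SELECT g.id, g.name, p.name AS parent
+FROM permission.grp_tree g
+LEFT JOIN permission.grp_tree p ON p.id = g.parent
+ORDER BY g.id;
+------------------------------------------------------------------------------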
+ +To create the Student group, add a new row to the _permission.grp_tree_ table +as a child of the _Patrons_ group: + +[source,sql] +------------------------------------------------------------------------------ +INSERT INTO permission.grp_tree (name, parent, usergroup, description, application_perm) +SELECT 'Students', pgt.id, TRUE, 'Student borrowers', 'group_application.user.patron.student' +FROM permission.grp_tree pgt + WHERE name = 'Patrons'; +------------------------------------------------------------------------------ + +To create the Student Cataloger group, add a new row to the +_permission.grp_tree_ table as a child of the _Staff_ group: + +[source,sql] +------------------------------------------------------------------------------ +INSERT INTO permission.grp_tree (name, parent, usergroup, description, application_perm) +SELECT 'Student Catalogers', pgt.id, TRUE, 'Student catalogers', 'group_application.user.staff.student_cataloger' +FROM permission.grp_tree pgt +WHERE name = 'Staff'; +------------------------------------------------------------------------------ + +To create the Student Circulator group, add a new row to the +_permission.grp_tree_ table as a child of the _Staff_ group: + +[source,sql] +------------------------------------------------------------------------------ +INSERT INTO permission.grp_tree (name, parent, usergroup, description, application_perm) +SELECT 'Student Circulators', pgt.id, TRUE, 'Student circulators', 'group_application.user.staff.student_circulator' +FROM permission.grp_tree pgt +WHERE name = 'Staff'; +------------------------------------------------------------------------------ + +We want to give the Student Catalogers group the ability to work with MARC +records at the consortial level, so we assign the UPDATE_MARC, CREATE_MARC, and +IMPORT_MARC permissions at depth 0: + +[source,sql] +------------------------------------------------------------------------------ +WITH pgt AS ( + SELECT id + FROM 
permission.grp_tree + WHERE name = 'Student Catalogers' +) +INSERT INTO permission.grp_perm_map (grp, perm, depth) +SELECT pgt.id, ppl.id, 0 +FROM permission.perm_list ppl, pgt +WHERE ppl.code IN ('UPDATE_MARC', 'CREATE_MARC', 'IMPORT_MARC'); +------------------------------------------------------------------------------ + +Similarly, we want to give the Student Circulators group the ability to check +out items and record in-house uses at the system level, so we assign the +COPY_CHECKOUT and CREATE_IN_HOUSE_USE permissions at depth 1 (overriding the +same _Staff_ permissions that were granted only at depth 2): + +[source,sql] +------------------------------------------------------------------------------ +WITH pgt AS ( + SELECT id + FROM permission.grp_tree + WHERE name = 'Student Circulators' +) INSERT INTO permission.grp_perm_map (grp, perm, depth) +SELECT pgt.id, ppl.id, 1 +FROM permission.perm_list ppl, pgt +WHERE ppl.code IN ('COPY_CHECKOUT', 'CREATE_IN_HOUSE_USE'); +------------------------------------------------------------------------------ + +Finally, we want to add our students to the groups. The request may arrive in +your inbox from the library along the lines of "Please add Mint Julep as a +Student Cataloger, Bloody Caesar as a Student Circulator, and Grass Hopper as a +Student Cataloguer / Circulator; I've already created their accounts and given +them a work organizational unit." You can translate that into the following SQL +to add the users to the pertinent permission groups, adjusting for the +inevitable typos in the names of the users. 
+ +First, add our Student Cataloger: + +[source,sql] +------------------------------------------------------------------------------ +WITH pgt AS ( + SELECT id FROM permission.grp_tree + WHERE name = 'Student Catalogers' +) +INSERT INTO permission.usr_grp_map (usr, grp) +SELECT au.id, pgt.id +FROM actor.usr au, pgt +WHERE first_given_name = 'Mint' AND family_name = 'Julep'; +------------------------------------------------------------------------------ + +Next, add the Student Circulator: + +[source,sql] +------------------------------------------------------------------------------ +WITH pgt AS ( + SELECT id FROM permission.grp_tree + WHERE name = 'Student Circulators' +) +INSERT INTO permission.usr_grp_map (usr, grp) +SELECT au.id, pgt.id +FROM actor.usr au, pgt +WHERE first_given_name = 'Bloody' AND family_name = 'Caesar'; +------------------------------------------------------------------------------ + +Finally, add the all-powerful Student Cataloger / Student Circulator: + +[source,sql] +------------------------------------------------------------------------------ + WITH pgt AS ( + SELECT id FROM permission.grp_tree + WHERE name IN ('Student Catalogers', 'Student Circulators') +) +INSERT INTO permission.usr_grp_map (usr, grp) +SELECT au.id, pgt.id +FROM actor.usr au, pgt +WHERE first_given_name = 'Grass' AND family_name = 'Hopper'; +------------------------------------------------------------------------------ + +While adopting this role-based approach might seem labour-intensive when +applied to a handful of students in this example, over time it can help keep +the permission profiles of your system relatively simple in comparison to the +alternative approach of rapidly reproducing permission groups, overlapping +permissions, and permissions granted on a one-by-one basis to individual users. 
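+As a final sanity check, you can confirm the assignments before reporting back +to the library. The following query is a sketch (not part of the procedure +above) that uses only the tables already referenced, _permission.usr_grp_map_, +_permission.grp_tree_, and _actor.usr_:
+
+[source,sql]
+------------------------------------------------------------------------------
+-- List the supplemental permission groups now attached to each student.
+SELECT au.first_given_name, au.family_name, pgt.name AS secondary_group
+FROM permission.usr_grp_map pugm
+JOIN actor.usr au ON au.id = pugm.usr
+JOIN permission.grp_tree pgt ON pgt.id = pugm.grp
+WHERE pgt.name IN ('Student Catalogers', 'Student Circulators')
+ORDER BY au.family_name, pgt.name;
+------------------------------------------------------------------------------
+
+Grass Hopper should appear twice, once for each group.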
diff --git a/docs/modules/admin_initial_setup/pages/designing_your_catalog.adoc b/docs/modules/admin_initial_setup/pages/designing_your_catalog.adoc new file mode 100644 index 0000000000..43b8ffc53c --- /dev/null +++ b/docs/modules/admin_initial_setup/pages/designing_your_catalog.adoc @@ -0,0 +1,716 @@ += Designing your catalog = +:toc: + +When people want to find things in your Evergreen system, they will check the +catalog. In Evergreen, the catalog is made available through a web interface, +called the _OPAC_ (Online Public Access Catalog). In the latest versions of the +Evergreen system, the OPAC is built on a set of programming modules called the +Template Toolkit. You will see the OPAC sometimes referred to as the _TPAC_. + +In this chapter, we'll show you how to customize the OPAC, change it from its +default configuration, and make it your own. + +== Configuring and customizing the public interface == + +The public interface is referred to as the TPAC or Template Toolkit (TT) within +the Evergreen community. The template toolkit system allows you to customize the +look and feel of your OPAC by editing the template pages (.tt2) files as well as +the associated style sheets. + +=== Locating the default template files === + +The default URL for the TPAC on a default Evergreen system is +_http://localhost/eg/opac/home_ (adjust _localhost_ to match your hostname or IP +address). + +The default template file is installed in _/openils/var/templates/opac_. + +You should generally avoid touching the installed default template files, unless +you are contributing changes for Evergreen to adopt as a new default. Even then, +while you are developing your changes, consider using template overrides rather +than touching the installed templates until you are ready to commit the changes +to a branch. See below for information on template overrides. + +=== Mapping templates to URLs === + +The mapping for templates to URLs is straightforward. 
Following are a few +examples, where __ is a placeholder for one or more directories that +will be searched for a match: + +* _http://localhost/eg/opac/home => /openils/var//opac/home.tt2_ +* _http://localhost/eg/opac/advanced => +/openils/var//opac/advanced.tt2_ +* _http://localhost/eg/opac/results => +/openils/var//opac/results.tt2_ + +The template files themselves can process, be wrapped by, or include other +template files. For example, the _home.tt2_ template currently involves a number +of other template files to generate a single HTML file. + +Example Template Toolkit file: _opac/home.tt2_. +---- +[% PROCESS "opac/parts/header.tt2"; + WRAPPER "opac/parts/base.tt2"; + INCLUDE "opac/parts/topnav.tt2"; + ctx.page_title = l("Home") %] +
+<div id="search-wrapper">
+  [% INCLUDE "opac/parts/searchbar.tt2" %]
+</div>
+<div id="content-wrapper">
+    <div id="main-content-home">
+         <div class="common-full-pad"></div>
+         [% INCLUDE "opac/parts/homesearch.tt2" %]
+         <div class="common-full-pad"></div>
+    </div>
+</div>
+[% END %] +---- +Note that file references are relative to the top of the template directory. + +=== How to override template files === + +Overrides for template files or TPAC pages go in a directory that parallels the +structure of the default templates directory. The overrides then get pulled in +via the Apache configuration. + +The following example demonstrates how to create a file that overrides the +default "Advanced search page" (_advanced.tt2_) by adding a new +_templates_custom_ directory and editing the new file in that directory. + +---- +bash$ mkdir -p /openils/var/templates_custom/opac +bash$ cp /openils/var/templates/opac/advanced.tt2 \ + /openils/var/templates_custom/opac/. +bash$ vim /openils/var/templates_custom/opac/advanced.tt2 +---- + +=== Configuring the custom templates directory in Apache's eg.conf === + +You now need to teach Apache about the new custom template directory. Edit +_/etc/apache2/sites-available/eg.conf_ and add the following __ +element to each of the __ elements in which you want to include the +overrides. The default Evergreen configuration includes a VirtualHost directive +for port 80 (HTTP) and another one for port 443 (HTTPS); you probably want to +edit both, unless you want the HTTP user experience to be different from the +HTTPS user experience. + +---- + + # + + # - absorb the shared virtual host settings + Include eg_vhost.conf + + PerlAddVar OILSWebTemplatePath "/openils/var/templates_custom" + + + # + +---- + +Finally, reload the Apache configuration to pick up the changes. You should now +be able to see your change at _http://localhost/eg/opac/advanced_ where +_localhost_ is the hostname of your Evergreen server. + +=== Adjusting colors for your public interface === + +You may adjust the colors of your public interface by editing the _colors.tt2_ +file. The location of this file is in +_/openils/var/templates/opac/parts/css/colors.tt2_. 
When you customize the +colors of your public interface, remember to create a custom file in your custom +template folder and edit the custom file and not the file located in your default +template. + +=== Adjusting fonts in your public interface === + +Font sizes can be changed in the _colors.tt2_ file located in +_/openils/var/templates/opac/parts/css/_. Again, create and edit a custom +template version and not the file in the default template. + +Other aspects of fonts such as the default font family can be adjusted in +_/openils/var/templates/opac/css/style.css.tt2_. + +=== Media file locations in the public interface === +The media files (mostly PNG images) used by the default TPAC templates are stored +in the repository in _Open-ILS/web/images/_ and installed in +_/openils/var/web/images/_. + +=== Changing some text in the public interface === + +Out of the box, TPAC includes a number of placeholder text and links. For +example, there is a set of links cleverly named Link 1, Link 2, and so on in the +header and footer of every page in TPAC. Here is how to customize that for a +_custom templates_ skin. + +To begin with, find the page(s) that contain the text in question. The simplest +way to do that is with the grep -s command. In the following example, search for +files that contain the text "Link 1": + +---- +bash$ grep -r "Link 1" /openils/var/templates/opac +/openils/var/templates/opac/parts/topnav_links.tt2 +4: [% l('Link 1') %] +---- + + +Next, copy the file into our overrides directory and edit it with vim. + +Copying the links file into the overrides directory. + +---- +bash$ cp /openils/var/templates/opac/parts/topnav_links.tt2 \ +/openils/var/templates_custom/opac/parts/topnav_links.tt2 +bash$ vim /openils/var/templates_custom/opac/parts/topnav_links.tt2 +---- + +Finally, edit the link text in _opac/parts/header.tt2_. Content of the +_opac/parts/header.tt2_ file. 
+ +---- + +---- + +For the most part, the page looks like regular HTML, but note the `[% l(" ... ") %]` +that surrounds the text of each link. The `[% ... %]` signifies a TT block, +which can contain one or more TT processing instructions. `l(" ... ");` is a +function that marks text for localization (translation); a separate process can +subsequently extract localized text as GNU gettext-formatted PO (Portable +Object) files. + +As Evergreen supports multiple languages, any customization to Evergreen's +default text must use the localization function. Also, note that the +localization function supports placeholders such as `[_1]`, `[_2]` in the text; +these are replaced by the contents of variables passed as extra arguments to the +`l()` function. + +Once the link and link text have been edited to your satisfaction, load the page +in a Web browser and see the live changes immediately. + +=== Adding translations to the PO file === + +After you have added custom text in translatable form to a TT2 template, you need to add the custom strings and their translations to the PO file containing the translations. Evergreen PO files are stored in _/openils/var/template/data/locale/_. + +The PO file consists of pairs of strings extracted from the code: a message ID denoted as _msgid_ and a message string denoted as _msgstr_. When adding a custom string to the PO file: + +* The line with the English expression must start with _msgid_. The English text must be enclosed in double quotes. +* The line with the translation must start with _msgstr_. The translation into the local language must also be enclosed in double quotes. +* It is recommended to add a note stating in which template and on which line the particular string is located. 
The lines with notes must be marked as comments, i.e., start with a number sign (#). + +Example: + +---- + +# --------------------------------------------------------------------- +# The lines below contain the custom strings manually added to the catalog +# --------------------------------------------------------------------- + +#: ../../Open-ILS/src/custom_templates/opac/parts/topnav_links.tt2:1 +msgid "Union Catalog of the Czech Republic" +msgstr "Souborný katalog České republiky" + + +#: ../../Open-ILS/src/custom_templates/opac/parts/topnav_links.tt2:1 +msgid "Uniform Information Gateway " +msgstr "Jednotná informační brána" + +---- + +[NOTE] +==== +It is good practice to save a backup copy of the original PO file before changing it. +==== + +After making changes, restart Apache to make the changes take effect. As root, run the command: + +---- +service apache2 restart +---- + +=== Adding and removing MARC fields from the record details display page === + +It is possible to add and remove the MARC fields and subfields displayed on the +record details page. In order to add MARC fields to be displayed on the details +page of a record, you will need to map the MARC code to variables in the +_/openils/var/templates/opac/parts/misc_util.tt2_ file. + +For example, to map the template variable _args.pubdates_ to the date of +publication MARC field 260, subfield c, add these lines to _misc_util.tt2_: + +---- +args.pubdates = []; +FOR sub IN xml.findnodes('//*[@tag="260"]/*[@code="c"]'); + args.pubdates.push(sub.textContent); +END; +args.pubdate = (args.pubdates.size) ? args.pubdates.0 : '' +---- + +You will then need to edit the +_/openils/var/templates/opac/parts/record/summary.tt2_ file in order to get the +template variable for the MARC field to display. 
+ +For example, to display the date of publication code you created in the +_misc_util.tt2_ file, add these lines: + +---- +[% IF attrs.pubdate; %] + +[% END; %] +---- + +You can add any MARC field to your record details page. Moreover, this approach +can also be used to display MARC fields in other pages, such as your results +page. + +==== Using bibliographic source variables ==== + +For bibliographic records, there is a "bib source" that can be associated with +every record. This source and its ID are available as record attributes called +_bib_source.source_ and _bib_source.id_. These variables do not present +themselves in the catalog display by default. + +.Example use case +**** + +In this example, a library imports e-resource records from a third party and +uses the bib source to indicate where the records came from. Patrons can place +holds on these titles, but they must be placed via the vendor website, not in +Evergreen. By exposing the bib source, the library can alter the Place Hold +link for these records to point at the vendor website. + +**** + +== Setting the default physical location for your library environment == + +_physical_loc_ is an Apache environment variable that sets the default physical +location, used for setting search scopes and determining the order in which +copies should be sorted. This variable is set in +_/etc/apache2/sites-available/eg.conf_. The following example demonstrates the +default physical location being set to library ID 104: + +---- +SetEnv physical_loc 104 +---- + +[#setting_a_default_language_and_adding_optional_languages] +== Setting a default language and adding optional languages == + +_OILSWebLocale_ adds support for a specific language. Add this variable to the +Virtual Host section in _/etc/apache2/eg_vhost.conf_. + +_OILSWebDefaultLocale_ specifies which locale to display when a user lands on a +page in TPAC and has not chosen a different locale from the TPAC locale picker. 
+The following example shows the _fr_ca_ locale being added to the locale picker +and being set as the default locale: + +---- +PerlAddVar OILSWebLocale "fr_ca" +PerlAddVar OILSWebLocale "/openils/var/data/locale/opac/fr-CA.po" +PerlAddVar OILSWebDefaultLocale "fr-CA" +---- + +Below is a table of the currently supported languages packaged with Evergreen: + +[options="header"] +|=== +|Language| Code| PO file +|Arabic - Jordan| ar_jo | /openils/var/data/locale/opac/ar-JO.po +|Armenian| hy_am| /openils/var/data/locale/opac/hy-AM.po +|Czech| cs_cz| /openils/var/data/locale/opac/cs-CZ.po +|English - Canada| en_ca| /openils/var/data/locale/opac/en-CA.po +|English - Great Britain| en_gb| /openils/var/data/locale/opac/en-GB.po +|*English - United States| en_us| not applicable +|French - Canada| fr_ca| /openils/var/data/locale/opac/fr-CA.po +|Portuguese - Brazil| pt_br| /openils/var/data/locale/opac/pt-BR.po +|Spanish| es_es| /openils/var/data/locale/opac/es-ES.po +|=== +*American English is built into Evergreen so you do not need to set up this +language and there are no PO files. + +=== Updating translations in Evergreen using current translations from Launchpad === + +Due to Evergreen release workflow/schedule, some language strings may already have been translated in Launchpad, +but are not yet packaged with Evergreen. In such cases, it is possible to manually replace the PO file in +Evergreen with an up-to-date PO file downloaded from Launchpad. + +. Visit the Evergreen translation site in https://translations.launchpad.net/evergreen[Launchpad] +. Select required language (e.g. _Czech_ or _Spanish_) +. Open the _tpac_ template and then select option _Download translation_. Note: to be able to download the translation file you need to be logged in to Launchpad. +. Select _PO format_ and submit the _request for download_ button. 
You can also request download of all existing templates and languages at once; see https://translations.launchpad.net/evergreen/master/+export. The download link will be sent to the email address you provided. +. Download the file and name it according to the language used (e.g., _cs-CZ.po_ for Czech or _es-ES.po_ for Spanish) +. Copy the downloaded file to _/openils/var/template/data/locale_. It is good practice to back up the original PO file beforehand. +. Be sure that the desired language is set as default, using the xref:#setting_a_default_language_and_adding_optional_languages[Default language] procedures. + +Analogously, to update the web staff client translations, download the translation template _webstaff_ and copy it to _/openils/var/template/data/locale/staff_. + + +Changes require a web server reload to take effect. As root, run the command: + +---- +service apache2 restart +---- + +== Change Date Format in Patron Account View == +Libraries with same-day circulations may want their patrons to be able to view +the due *time* as well as the due date when they log in to their OPAC account. To +accomplish this, go to _opac/myopac/circs.tt2_. Find the line that reads: + +---- +[% date.format(due_date, DATE_FORMAT) %] +---- + +Replace it with: + +---- +[% date.format(due_date, '%D %I:%M %p') %] +---- + + +== Including External Content in Your Public Interface == + +Evergreen allows you to include external services and content in your +public interface. These can include book cover images, user reviews, tables of +contents, summaries, author notes, annotations, user suggestions, and series +information, among other services. Some of these services are free while others +require a subscription. + +The following are some of the external content services which you can configure +in Evergreen. + +=== OpenLibrary === + +The default install of Evergreen includes OpenLibrary book covers. The settings +for this are controlled by the section of +_/openils/conf/opensrf.xml_. 
Here are the key elements of this configuration: + +---- +OpenILS::WWW::AddedContent::OpenLibrary +---- + +This section calls the OpenLibrary Perl module. If you wish to link to a +book cover service other than OpenLibrary, you must refer to the +location of the corresponding Perl module. You will also need to change other +settings accordingly. + +---- +1 +---- + +Max number of seconds to wait for an added content request to return data. Data +not returned within the timeout is considered a failure. +---- +600 +---- + +This setting is the amount of time to wait before we try again. + +---- +15 +---- + +Maximum number of consecutive lookup errors a given process can have before +added content lookups are disabled for everyone. + +To adjust the size of the cover +image on the record details page, edit the config.tt2 file and change the value +of record.summary.jacket_size. The default value is "medium" and the +available options are "small", "medium" and "large." + +=== ChiliFresh === + +ChiliFresh is a subscription-based service that allows book covers, reviews, and +patron social interactions to appear in your catalog. To activate ChiliFresh, +you will need to open the Apache configuration file _/etc/apache2/eg_vhost.conf_ +and edit several lines: + +. Uncomment (remove the "#" at the beginning of the line) and add your ChiliFresh +account number: + +---- +#SetEnv OILS_CHILIFRESH_ACCOUNT +---- + +. Uncomment this line and add your ChiliFresh Profile: + +---- +#SetEnv OILS_CHILIFRESH_PROFILE +---- + +. Uncomment the line indicating the location of the Evergreen JavaScript for +ChiliFresh: + +---- +#SetEnv OILS_CHILIFRESH_URL http://chilifresh.com/on-site/js/evergreen.js +---- + +. 
Uncomment the line indicating the secure URL for the Evergreen JavaScript: + +---- +#SetEnv OILS_CHILIFRESH_HTTPS_URL https://secure.chilifresh.com/on-site/js/evergreen.js +---- + +[id="_content_cafe"] +=== Content Café === + +Content Café is a subscription-based service that can add jacket images, +reviews, summaries, tables of contents, and book details to your records. + +In order to activate Content Café, edit the _/openils/conf/opensrf.xml_ file and +change the __ element to point to the ContentCafe Perl Module: + +---- +OpenILS::WWW::AddedContent::ContentCafe +---- + +To adjust settings for Content Café, edit a couple of fields within the +__ section of _/openils/conf/opensrf.xml_. + +Edit the _userid_ and _password_ elements to match the user id and password for +your Content Café account. + +This provider retrieves content based on ISBN or UPC, with a default preference +for ISBNs. If you wish for UPCs to be preferred, or wish one of the two identifier +types not to be considered at all, you can change the "identifier_order" option +in opensrf.xml. When the option is present, only the identifier(s) listed will +be sent. + +=== Obalkyknih.cz === + +==== Setting up an Obalkyknih.cz account ==== + +If your library wishes to use added content provided by Obalkyknih.cz, a service based in the Czech Republic, you have to http://obalkyknih.cz/signup[create an Obalkyknih.cz account]. +Please note that the interface is only available in Czech. After logging in to your Obalkyknih.cz account, you have to add your IP address and Evergreen server address to your account settings. +(If each library uses an address of its own, all of these addresses have to be added.) 
+ +==== Enabling Obalkyknih.cz in Evergreen ==== + +Set obalkyknih_cz.enabled to true in '/openils/var/templates/opac/parts/config.tt2': + +[source,perl] +---- +obalkyknih_cz.enabled = 'true'; +---- + +Enable added content from Obalkyknih.cz in the '/openils/conf/opensrf.xml' configuration file (and – at the same time – disable added content from Open Library, i.e., Evergreen's default added content provider): + +[source,xml] +---- + +OpenILS::WWW::AddedContent::ObalkyKnih +---- + +Using default settings for Obalkyknih.cz means all types of added content from Obalkyknih.cz are visible in your online catalog. +If the module is enabled, book covers are always displayed. Other types of added content (summaries, ratings or tables of contents) can be: + +* switched off using the _false_ option, +* switched on again using the _true_ option. + +The following types of added content are used: + +* summary (or annotation) +* tocPDF (table of contents available as image) +* tocText (table of contents available as text) +* review (user reviews) + +An example of how to switch off summaries: + +[source,xml] +---- +false +---- + + +=== Google Analytics === + +Google Analytics is a free service to collect statistics for your Evergreen +site. Statistics tracking is disabled by default through the Evergreen +client software when library staff use your site within the client, but is active +when anyone uses the site without the client. This was a preventive measure to +reduce the potential risk of leaking patron information. In order to use Google +Analytics you will first need to set up the service from the Google Analytics +website at http://www.google.com/analytics/. To activate Google Analytics you +will need to edit _config.tt2_ in your template. To enable the service, set +the value of google_analytics.enabled to true and change the value of +_google_analytics.code_ to be the code in your Google Analytics account. 
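+For illustration, the two settings might look like this in a custom
+_config.tt2_ override. The variable names come from the paragraph above; the
+tracking code shown is a placeholder to be replaced with the ID from your own
+Google Analytics account:
+
+[source,perl]
+----
+google_analytics.enabled = 'true';
+# Placeholder ID; substitute the code from your own account
+google_analytics.code = 'UA-XXXXXXXX-1';
+----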
+ +=== NoveList === + +NoveList is a subscription-based service providing reviews and recommendations +for books in your catalog. To activate your NoveList service in Evergreen, open +the Apache configuration file _/etc/apache2/eg_vhost.conf_ and edit the line: + +---- +#SetEnv OILS_NOVELIST_URL +---- + +You should use the URL provided by NoveList. + +=== RefWorks === + +RefWorks is a subscription-based online bibliographic management tool. If you +have a RefWorks subscription, you can activate RefWorks in Evergreen by editing +the _config.tt2_ file located in your template directory. You will need to set +the _ctx.refworks.enabled_ value to _true_. You may also set the RefWorks URL by +changing the _ctx.refworks.url_ setting in the same file. + +=== SFX OpenURL Resolver === + +An OpenURL resolver allows you to find electronic resources and pull them into +your catalog based on the ISBN or ISSN of the item. In order to use the SFX +OpenURL resolver, you will need to subscribe to the Ex Libris SFX service. To +activate the service in Evergreen, edit the _config.tt2_ file in your template. +Enable the resolver by changing the value of _openurl.enabled_ to _true_ and +change the _openurl.baseurl_ setting to point to the URL of your OpenURL +resolver. + +=== Syndetic Solutions === + +Syndetic Solutions is a subscription service providing book covers and other +data for items in your catalog. In order to activate Syndetic, edit the +_/openils/conf/opensrf.xml_ file and change the __ element to point to +the Syndetic Perl Module: + +---- +OpenILS::WWW::AddedContent::Syndetic +---- + +You will also need to edit the __ element to be the user id provided to +you by Syndetic. + +Then, you will need to uncomment and edit the __ element so that it +points to the Syndetic service: + +---- +http://syndetics.com/index.aspx +---- + +For changes to take effect in your public interface, you will need to restart +Evergreen and Apache. 
+ +The Syndetic Solutions provider retrieves images based on the following identifiers +found in bibliographic records: + +* ISBN +* UPC +* ISSN + + +=== Clear External/Added Content Cache === + +On the catalog's record summary page, there is a link for staff that will forcibly clear +the cache of the Added Content for that record. This is helpful when the Added Content +provider retrieves the wrong cover jacket art, summary, etc., and caches the wrong result. + +image::media/clear-added-content-cache-1.png[Clear Cache Link] + +Once the link is clicked, a pop-up displays what was cleared from the cache. + +image::media/clear-added-content-cache-2.jpg[Example Popup] + +You will need to reload the record in the staff client to obtain the new images from your +Added Content supplier. + + +=== Configure a Custom Image for Missing Images === + +You can configure a "no image" image other than the standard 1-pixel +blank image. The example eg_vhost.conf file provides examples in the +comments. Note: Evergreen does not provide default images for these. + + +== Including Locally Hosted Content in Your Public Interface == + +It is also possible to show added content that has been generated locally +by placing the content in a specific spot on the web server. You can supply +local book jackets, reviews, tables of contents, excerpts, or annotations. + +=== File Location and Format === + +By default, the files will need to be placed in directories under +*/openils/var/web/opac/extras/ac/* on the server(s) that run Apache. + +The files need to be in specific folders depending on the format of the +added content. Local content can only be looked up based on the +record ID at this time. + +.URL Format: +\http://catalog/opac/extras/ac/*\{type}/\{format}/r/\{recordid}* + + * *type* is one of *jacket*, *reviews*, *toc*, *excerpt* or *anotes*. + * *format* is type dependent: + - for jacket, one of small, medium or large + - others, one of html, xml or json ... 
html is the default for non-image added content + * *recordid* is the bibliographic record id (bre.id). + +=== Example === + +Suppose you have some equipment that you are circulating, such as a +laptop or eBook reader, and you want to add an image of the equipment +that will show up in the catalog. + +[NOTE] +============= +If you are adding jacket art for a traditional type of media +(book, CD, DVD) consider adding the jacket art to the http://openlibrary.org +project instead of hosting it locally. This would allow other +libraries to benefit from your work. +============= + +Make note of the Record ID of the bib record. You can find this by +looking at the URL of the bib in the catalog: in +http://catalog/eg/opac/record/*123*, 123 is the record ID. +These images will only show up for this one specific record. + +Create three different-sized versions of the image in PNG or JPG format. + + * *Small* - 80px x 80px - named _123-s.jpg_ or _123-s.png_ - This is displayed in the browse display. + * *Medium* - 240px x 240px - named _123-m.jpg_ or _123-m.png_ - This is displayed on the summary page. + * *Large* - 400px x 399px - named _123-l.jpg_ or _123-l.png_ - This is displayed if the summary page image is clicked on. + +[NOTE] +The image dimensions are up to you; use what looks good in your catalog. + +Next, upload the images to the Evergreen server(s) that run Apache, +and move/rename the files to the following locations/names. +You will need to create any directories that are missing. + + * Small - Move the file *123-s.jpg* to */openils/var/web/opac/extras/ac/jacket/small/r/123*. + * Medium - Move the file *123-m.jpg* to */openils/var/web/opac/extras/ac/jacket/medium/r/123*. + * Large - Move the file *123-l.jpg* to */openils/var/web/opac/extras/ac/jacket/large/r/123*. + +[NOTE] +The system doesn't need the file extension to know what kind of file it is. + +Reload the bib record summary in the web catalog and your new image will display. 
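The directory preparation and file moves above can be sketched as a short shell session. This is an illustrative sketch only: `AC_BASE` points at a scratch directory so the commands can be tried anywhere (on a production server it would be */openils/var/web/opac/extras/ac*), and the `touch` line stands in for the three resized images created earlier.

```shell
# Scratch location standing in for /openils/var/web/opac/extras/ac
AC_BASE=$(mktemp -d)/ac
REC=123   # bibliographic record ID (bre.id)

# Create the size-specific jacket directories
for size in small medium large; do
    mkdir -p "$AC_BASE/jacket/$size/r"
done

# Placeholder files standing in for the resized images
touch "$REC-s.jpg" "$REC-m.jpg" "$REC-l.jpg"

# Move each image into place, named after the record ID (no extension)
mv "$REC-s.jpg" "$AC_BASE/jacket/small/r/$REC"
mv "$REC-m.jpg" "$AC_BASE/jacket/medium/r/$REC"
mv "$REC-l.jpg" "$AC_BASE/jacket/large/r/$REC"
```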
+
+== Styling the searchbar on the homepage ==
+
+The `.searchbar-home` class is added to the div that
+contains the searchbar when on the homepage. This allows
+sites to customize the searchbar differently on the
+homepage than on search results pages and other places the
+search bar appears. For example, adding the following CSS
+would create a large, Google-style search bar on the homepage only:
+
+[source,css]
+----
+.searchbar-home .search-box {
+  width: 80%;
+  height: 3em;
+}
+
+.searchbar-home #search_qtype_label,
+.searchbar-home #search_itype_label,
+.searchbar-home #search_locg_label {
+  display:none;
+}
+----
+
diff --git a/docs/modules/admin_initial_setup/pages/hard_due_dates.adoc b/docs/modules/admin_initial_setup/pages/hard_due_dates.adoc
new file mode 100644
index 0000000000..e2a162f2d5
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/hard_due_dates.adoc
@@ -0,0 +1,30 @@
+= Hard due dates =
+:toc:
+
+This feature allows you to specify a particular due date within your circulation policies. This is particularly useful for academic and school libraries, which may wish to make certain items due at the end of a semester or term.
+
+NOTE: To work with hard due dates, you will need the CREATE_CIRC_DURATION, UPDATE_CIRC_DURATION, and DELETE_CIRC_DURATION permissions at the _consortium_ level.
+
+== Creating a hard due date ==
+Setting up hard due dates is a two-step process. You must first create a hard due date, and then populate it with specific values.
+
+To create a hard due date:
+
+. Click *Administration -> Server Administration -> Hard Due Date Changes*.
+. Click *New Hard Due Date*.
+. In the *Name* field, enter a name for your hard due date. Note that each hard due date can have multiple values, so it's best to use a generic name here, such as "End of semester."
+. In the *Owner* field, select the appropriate org unit for your new hard due date.
+. In the *Current Ceiling Date* field, select any value.
This field is required, but its value will be overwritten in subsequent steps, so you may enter an arbitrary date here.
+. Check the *Always Use?* checkbox if you want items to only receive the due dates you specify, regardless of when they would ordinarily be due. If you leave this box unchecked, your specified due dates will serve as "ceiling" values that limit, rather than override, other circulation rules. In other words, with this box checked, items may be due only on the specified dates. With the box unchecked, items may be due _on or before_ the specified dates, but not after.
+. Click *Save*.
+
+To add date values to your hard due date:
+
+. Click the hyperlinked name of the due date you just created.
+. Click *New Hard Due Date Value*.
+. In the *Ceiling Date* field, enter the specific date you would like items to be due.
+. In the *Active Date* field, enter the date you want this specific due date value to take effect.
+. Click *Save*.
+. Each Hard Due Date can include multiple values. For example, you can repeat these steps to enter specific due dates for several semesters using this same screen.
+
+After creating a hard due date and assigning it values, you can apply it by adding it to a circulation policy.
diff --git a/docs/modules/admin_initial_setup/pages/importing_via_staff_client.adoc b/docs/modules/admin_initial_setup/pages/importing_via_staff_client.adoc
new file mode 100644
index 0000000000..30a1248afa
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/importing_via_staff_client.adoc
@@ -0,0 +1,185 @@
+= Importing materials in the staff client =
+:toc:
+
+Evergreen exists to connect users to the materials represented by bibliographic
+records, call numbers, and copies -- so getting these materials into your
+Evergreen system is vital to a successful system.
There are two primary means
+of getting materials into Evergreen:
+
+* The Evergreen staff client offers the *MARC Batch Importer*, which is a
+  flexible interface primarily used for small batches of records;
+* Alternately, import scripts can load data directly into the database, which is
+  a highly flexible but much more complex method of loading materials suitable
+  for large batches of records such as the initial migration from your legacy
+  library system.
+
+== Staff client batch record imports ==
+The staff client has a utility for importing batches of bibliographic and item
+records available through *Cataloging > MARC Batch Import/Export*. In addition
+to importing new records, this interface can be used to match incoming records
+to existing records in the database, add or overlay MARC fields in the existing
+record, and add copies to those records.
+
+The MARC Batch Import interface may also be colloquially referred to as
+"Vandelay" in the Evergreen community, referring to this interface's internals
+in the system. You will also see this name used in several places in the editor.
+For instance, when you click on *Record Match Sets*, the title on the screen
+will be *Vandelay Match Sets*.
+
+=== When to use the MARC Batch Importer ===
+
+* When importing batches of up to 500 to 1,000 records.
+* When you need the system to match those incoming records to existing records
+  and overlay or add fields to the existing record.
+* When you need to add items to existing records in the system.
+
+WARNING: If you are importing items that do not have barcodes or call numbers, you
+must enable the _Vandelay Generate Default Barcodes_ and _Vandelay Default
+Barcode Prefix (vandelay.item.barcode.prefix)_ settings.
+
+=== Record Match Sets ===
+Click the *Record Match Sets* button to identify how Evergreen should match
+incoming records to existing records in the system.
+
+These record match sets can be used when importing records through the MARC
+Batch Importer or when importing order records through the Acquisitions Load
+MARC Order Records interface.
+
+Common match points used when creating a match set include:
+
+* MARC tag 020a (ISBN)
+* MARC tag 022a (ISSN)
+* MARC tag 024a (UPC)
+* MARC tag 028a (Publisher number)
+
+=== Create Match Sets ===
+. On the *Record Match Sets* screen, click *New Match Set* to create a set of
+  record match points. Give the set a *Name*. Assign the *Owning Library* from
+  the dropdown list. The *Match Set Type* should remain as *biblio*. Click
+  *Save*.
+. If you don't see your new set in the list, in the upper left corner of the
+  staff client window, click the *Reload* button.
+. If you had to reload, click the *Record Match Sets* button to get back to
+  that screen. Find your new set in the list and click its name. (The name will
+  appear to be a hyperlink.) This will bring you to the *Vandelay Match Set
+  Editor*.
+. Create an expression that will define the match points for the incoming
+  record. You can choose from two areas to create a match: Record Attribute (MARC
+  fixed fields) or MARC Tag and Subfield. You can use the Boolean operators AND
+  and OR to combine these elements to create a match set.
+. When adding a Record Attribute or MARC tag/subfield, you also can enter a
+  Match Score. The Match Score indicates the relative importance of that match
+  point as Evergreen evaluates an incoming record against an existing record. You
+  can enter any integer into this field. The number that you enter is only
+  important as it relates to other match points.
++
+Recommended practice is to assign a match score of one (1) to the least
+important match point and to assign scores in increasing powers of 2 to match
+points of increasing importance.
+.
After creating a match point, drag the completed match point under the
+  appropriately-named Boolean folder in the Expression tree.
++
+image::media/create_match_sets.png[Creating a Match Point]
+. Click *Save Changes to Expression*.
+
+=== Quality Metrics ===
+* Quality metrics provide a mechanism for Evergreen to measure the quality of
+records and to make importing decisions based on quality.
+* Metrics are configured in the match set editor.
+* Quality metrics are not required when creating a match set.
+* You can use a value in a record attribute (MARC fixed fields) or a MARC tag
+  as your quality metric.
+* The encoding level record attribute can be one indicator of record quality.
+
+image::media/record_quality_metrics.png[Quality Metric Grid]
+
+=== Import Item Attributes ===
+If you are importing items with your records, you will need to map the data in
+your holdings tag to fields in the item record. Click the *Holdings Import
+Profile* button to map this information.
+
+. Click the *New Definition* button to create a new mapping for the holdings tag.
+. Add a *Name* for the definition.
+. Use the *Tag* field to identify the MARC tag that contains your holdings
+  information.
+. Add the subfields that contain specific item information to the appropriate
+  item field.
+. At a minimum, you should add the subfields that identify the *Circulating
+Library*, the *Owning Library*, the *Call Number* and the *Barcode*.
+. For more details, see the full list of import fields.
+
+NOTE: All fields (except for Name and Tag) can contain a MARC subfield code
+(such as "a") or an XPATH query. You can also use the
+xref:admin:librarysettings.adoc#lse-vandelay[related library settings] to set defaults for some of these fields.
+
+image::media/batch_import_profile.png[Partial Screenshot of a Holdings Import Profile]
+
+
+=== Overlay/Merge Profiles ===
+If Evergreen finds a match for an incoming record in the database, you need to
+identify which fields should be replaced, which should be preserved, and which
+should be added to the record. Click the *Merge/Overlay Profiles* button to
+create a profile that contains this information.
+
+These overlay/merge profiles can be used when importing records through the
+MARC Batch Importer or when importing order records through the Acquisitions
+Load MARC Order Records interface.
+
+Evergreen comes pre-installed with two default profiles:
+
+* *Default merge* - No fields from the incoming record are added to the matched
+  record. This profile is useful for item loads or for order record uploads.
+* *Default overlay* - The incoming record will replace the existing record.
+
+You can customize the overlay/merge behavior with a new profile by clicking the
+*New Merge Profile* button. Available options for handling the fields include:
+
+* *Preserve specification* - fields in the existing record that should be
+  preserved.
+* *Replace specification* - fields in the existing record that should be replaced
+  by those in the incoming record.
+* *Add specification* - fields from the incoming record that should be added to
+  the existing record (in addition to any already there).
+* *Remove specification* - fields that should be removed from the incoming record.
+
+You can add multiple tags to these specifications, separating each tag with a
+comma.
+
+=== Importing the records ===
+After making the above configurations, you are now ready to import your
+records.
+
+. Click the *Import Records* button
+. Provide a unique name for the queue where the records will be loaded
+. Identify the match set that should be used for matching
+. If you are importing items, identify the *Import Item Attributes* definition
+  in the Holdings Import Profile
+. Select a record source
+.
Select the overlay/merge profile that defines which fields should be
+  replaced, added or preserved
+. Identify which records should be imported; the options are:
+  ** *Import Non-Matching Records* will automatically import records that have
+  no match in the system
+  ** *Merge on Exact Match* will automatically import records that match on the
+  901c (record ID)
+  ** *Merge on Single Match* will automatically import records when there is
+  only one match in the system
+  ** *Merge on Best Match* will automatically import records for the best match
+  in the system; the best match will be determined by the combined total of the
+  record's match point scores
+
+You do not need to select any of these import options at this step. You may also opt to review the records first in the import queue and then import them.
+
+* *Best Single Match Minimum Quality Ratio* should only be changed if quality metrics were used in the match set.
+
+  ** Set to 0.0 to import a record regardless of record quality
+  ** Set to 1.0 if the incoming record must be of equal or higher quality than
+  the existing record to be imported
+  ** Set to 1.1 if the incoming record must be of higher quality than the
+  existing record to be imported
+  ** *Insufficient Quality Fall-Through Profile* can also be used with quality
+  metrics. If an incoming record does not meet the standards of the minimum
+  quality ratio, you can identify a back-up merge profile to be used for
+  those records. For example, you may want to use the default overlay
+  profile for high-quality records but use the default merge profile for
+  lower quality records.
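To make the scoring arithmetic above concrete, here is a small illustrative sketch, not Evergreen's internal API, of how powers-of-2 match scores and the minimum quality ratio interact when choosing the best match (the `best_match` function and its data shapes are assumptions for illustration):

```python
def best_match(candidates, incoming_quality, min_quality_ratio):
    """Pick the best-matching existing record for an incoming record.

    candidates: list of (record_id, matched_point_scores, record_quality).
    Each matched point carries a score; using powers of 2 means a more
    important match point always outweighs any combination of lesser ones.
    """
    if not candidates:
        return None
    # Best match = highest combined total of match point scores.
    rec_id, scores, quality = max(candidates, key=lambda c: sum(c[1]))
    # Enforce the minimum quality ratio (incoming / existing quality).
    if quality and incoming_quality / quality < min_quality_ratio:
        return None  # fall through to the back-up merge profile
    return rec_id

candidates = [
    (101, [1, 2], 50),   # matched ISBN (score 1) and ISSN (score 2)
    (102, [4], 80),      # matched UPC (score 4) only
]
# Candidate 102 wins (4 > 3), and 80/80 meets a 1.0 quality ratio.
print(best_match(candidates, incoming_quality=80, min_quality_ratio=1.0))
```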
diff --git a/docs/modules/admin_initial_setup/pages/introduction.adoc b/docs/modules/admin_initial_setup/pages/introduction.adoc
new file mode 100644
index 0000000000..188ec35e36
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/introduction.adoc
@@ -0,0 +1,7 @@
+= Introduction =
+:toc:
+
+The Evergreen system allows a wide range of customizations to every aspect of
+the system. Use this part of the documentation to become familiar with the tools
+for configuring the system as well as customizing the catalog and staff client.
+
diff --git a/docs/modules/admin_initial_setup/pages/migrating_patron_data.adoc b/docs/modules/admin_initial_setup/pages/migrating_patron_data.adoc
new file mode 100644
index 0000000000..3f5a7f70c1
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/migrating_patron_data.adoc
@@ -0,0 +1,263 @@
+= Migrating Patron Data =
+:toc:
+
+== Introduction ==
+
+This section will explain the task of migrating your patron data from comma
+delimited files into Evergreen. It does not deal with the process of exporting
+from the non-Evergreen system since this process may vary depending on where you
+are extracting your patron records. Patron data could come from an ILS or it could
+come from a student database in the case of academic records.
+
+When importing records into Evergreen you will need to populate 3 tables in your
+Evergreen database:
+
+* actor.usr - The main table for user data
+* actor.card - Stores the barcode for users; users can have more than 1 card but
+only 1 can be active at a given time
+* actor.usr_address - Used for storing address information; a user can
+have more than one address.
+
+Before following the procedures below to import patron data into Evergreen, it
+is a good idea to examine the fields in these tables in order to decide on a
+strategy for data to include in your import. It is important to understand the
+data types and constraints on each field.
+
+.
Export the patron data from your existing ILS or from another source into a
+comma delimited file. The comma delimited file used for importing the records
+should use Unicode (UTF8) character encoding.
+
+. Create a staging table. A staging table will allow you to tweak the data before
+importing. Here is an example SQL statement:
++
+[source,sql]
+----------------------------------
+  CREATE TABLE students (
+    student_id int, barcode text, last_name text, first_name text, email text,
+    address_type text, street1 text, street2 text,
+    city text, province text, country text, postal_code text, phone text,
+    profile int DEFAULT 2, ident_type int, home_ou int,
+    claims_returned_count int DEFAULT 0, usrname text,
+    net_access_level int DEFAULT 2, password text
+  );
+----------------------------------
++
+NOTE: The _default_ variables allow you to set defaults for your library or to populate
+required fields in Evergreen if your data includes NULL values.
++
+The data field profile in the above SQL script refers to the user group and should be an
+integer referencing the id field in permission.grp_tree. Setting this value will affect
+the permissions for the user. See the values in permission.grp_tree for possibilities.
++
+ident_type is the identification type used for identifying users. This is an integer value
+referencing config.identification_type and should match the id values of that table. The
+default values are 1 for Drivers License, 2 for SSN or 3 for other.
++
+home_ou is the home organizational unit for the user. This value needs to match the
+corresponding id in the actor.org_unit table.
++
+. Copy records into the staging table from a comma delimited file.
++
+[source,sql]
+----------------------------------
+  COPY students (student_id, last_name, first_name, email, address_type, street1, street2,
+    city, province, country, postal_code, phone)
+    FROM '/home/opensrf/patrons.csv'
+    WITH CSV HEADER;
+----------------------------------
++
+The script will vary depending on the format of your patron load file (patrons.csv).
++
+. Formatting of some fields to fit Evergreen field formatting may be required. Here is an example
+of SQL to adjust phone numbers in the staging table to fit the Evergreen field:
++
+[source,sql]
+----------------------------------
+  UPDATE students
+    SET phone = replace(replace(replace(rpad(substring(phone from 1 for 9), 10, '-') ||
+    substring(phone from 10), '(', ''), ')', ''), ' ', '-');
+----------------------------------
++
+Data ``massaging'' will be required to fit formats used in Evergreen.
++
+. Insert records from the staging table into the actor.usr Evergreen table:
++
+[source,sql]
+----------------------------------
+  INSERT INTO actor.usr (
+    profile, usrname, email, passwd, ident_type, ident_value, first_given_name,
+    family_name, day_phone, home_ou, claims_returned_count, net_access_level)
+    SELECT profile, students.usrname, email, password, ident_type, student_id,
+    first_name, last_name, phone, home_ou, claims_returned_count, net_access_level
+    FROM students;
+----------------------------------
++
+. Insert records into actor.card from actor.usr.
++
+[source,sql]
+----------------------------------
+  INSERT INTO actor.card (usr, barcode)
+    SELECT actor.usr.id, students.barcode
+    FROM students
+    INNER JOIN actor.usr
+      ON students.usrname = actor.usr.usrname;
+----------------------------------
++
+This assumes a one-to-one card to patron relationship. If your patron data import has multiple cards
+assigned to one patron, more complex import scripts may be required which look
+for inactive or active flags.
++
+.
Update actor.usr.card field with actor.card.id to associate the active card with the user:
++
+[source,sql]
+----------------------------------
+  UPDATE actor.usr
+    SET card = actor.card.id
+    FROM actor.card
+    WHERE actor.card.usr = actor.usr.id;
+----------------------------------
++
+. Insert records into actor.usr_address to add address information for users:
++
+[source,sql]
+----------------------------------
+  INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
+    SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
+    students.country, students.postal_code
+    FROM students
+    INNER JOIN actor.usr ON students.usrname = actor.usr.usrname;
+----------------------------------
++
+. Update the actor.usr address fields with the id from the address table.
+
+[source,sql]
+----------------------------------
+  UPDATE actor.usr
+    SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
+    FROM actor.usr_address
+    WHERE actor.usr.id = actor.usr_address.usr;
+----------------------------------
+
+This assumes 1 address per patron. More complex scenarios may require more sophisticated SQL.
+
+== Creating an SQL Script for Importing Patrons ==
+
+The procedure for importing patrons can be automated with the help of an SQL script. Follow these
+steps to create an import script:
+
+. Create a new file and name it import.sql
+. Edit the file to look similar to this:
+
+[source,sql]
+----------------------------------
+  BEGIN;
+
+  -- Remove any old staging table.
+  DROP TABLE IF EXISTS students;
+
+  -- Create staging table.
+  CREATE TABLE students (
+    student_id text, barcode text, last_name text, first_name text, email text, address_type text,
+    street1 text, street2 text, city text, province text, country text, postal_code text, phone
+    text, profile int, ident_type int, home_ou int, claims_returned_count int DEFAULT 0, usrname text,
+    net_access_level int DEFAULT 2, password text, already_exists boolean DEFAULT FALSE
+  );
+
+  --Copy records from your import text file
+  COPY students (student_id, last_name, first_name, email, address_type, street1, street2, city, province,
+    country, postal_code, phone, password)
+    FROM '/home/opensrf/patrons.csv' WITH CSV HEADER;
+
+  --Determine which records are new, and which are merely updates of existing patrons
+  --You may wish to also add a check on the home_ou column here, so that you don't
+  --accidentally overwrite the data of another library in your consortium.
+  --You may also use a different matchpoint than actor.usr.ident_value.
+  UPDATE students
+    SET already_exists = TRUE
+    FROM actor.usr
+    WHERE students.student_id = actor.usr.ident_value;
+
+  --Update the names of existing patrons, in case they have changed their name
+  UPDATE actor.usr
+    SET first_given_name = students.first_name, family_name=students.last_name
+    FROM students
+    WHERE actor.usr.ident_value=students.student_id
+      AND (first_given_name != students.first_name OR family_name != students.last_name)
+      AND students.already_exists;
+
+  --Update email addresses of existing patrons
+  --You may wish to update other fields as well, while preserving others
+  --actor.usr.passwd is an example of a field you may not wish to update,
+  --since patrons may have set the password to something other than the
+  --default.
+ UPDATE actor.usr + SET email=students.email + FROM students + WHERE actor.usr.ident_value=students.student_id + AND students.email != '' + AND actor.usr.email != students.email + AND students.already_exists; + + --Insert records from the staging table into the actor.usr table. + INSERT INTO actor.usr ( + profile, usrname, email, passwd, ident_type, ident_value, first_given_name, family_name, + day_phone, home_ou, claims_returned_count, net_access_level) + SELECT profile, students.usrname, email, password, ident_type, student_id, first_name, + last_name, phone, home_ou, claims_returned_count, net_access_level + FROM students WHERE NOT already_exists; + + --Insert records from the staging table into the actor.card table. + INSERT INTO actor.card (usr, barcode) + SELECT actor.usr.id, students.barcode + FROM students + INNER JOIN actor.usr + ON students.usrname = actor.usr.usrname + WHERE NOT students.already_exists; + + --Update actor.usr.card field with actor.card.id to associate active card with the user: + UPDATE actor.usr + SET card = actor.card.id + FROM actor.card + WHERE actor.card.usr = actor.usr.id; + + --INSERT records INTO actor.usr_address from staging table. 
+  INSERT INTO actor.usr_address (usr, street1, street2, city, state, country, post_code)
+    SELECT actor.usr.id, students.street1, students.street2, students.city, students.province,
+    students.country, students.postal_code
+    FROM students
+    INNER JOIN actor.usr ON students.usrname = actor.usr.usrname
+    WHERE NOT students.already_exists;
+
+
+  --Update actor.usr mailing address with id from actor.usr_address table:
+  UPDATE actor.usr
+    SET mailing_address = actor.usr_address.id, billing_address = actor.usr_address.id
+    FROM actor.usr_address
+    WHERE actor.usr.id = actor.usr_address.usr;
+
+  COMMIT;
+----------------------------------
+
+Placing the SQL statements between BEGIN; and COMMIT; creates a transaction
+block so that if any SQL statements fail, the entire process is canceled and the
+database is rolled back to its original state. Lines beginning with -- are
+comments that tell you what each SQL statement is doing and are not processed.
+
+== Batch Updating Patron Data ==
+
+For academic libraries, doing batch updates to add new patrons to the Evergreen
+database is a critical task. The above procedures and import script can be
+easily adapted to create an update script for importing new patrons from
+external databases. If the data import file contains only new patrons, then the
+above procedures will work well to insert those patrons. However, if the data
+load contains all patrons, a second staging table and a procedure to remove
+existing patrons from that second staging table may be required before importing
+the new patrons. Moreover, additional steps to update address information and
+perhaps delete inactive patrons may also be desired depending on the
+requirements of the institution.
+
+After the scripts to import and update patrons have been developed, another
+important task for library staff is to develop an import strategy and
+schedule which suits the needs of the library.
This could be determined by
+registration dates of your institution in the case of academic libraries. It is
+important to balance the convenience of automated patron loads and the cost of
+processing these loads against the cost of staff adding patrons manually.
+
diff --git a/docs/modules/admin_initial_setup/pages/migrating_your_data.adoc b/docs/modules/admin_initial_setup/pages/migrating_your_data.adoc
new file mode 100644
index 0000000000..0c89278b61
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/migrating_your_data.adoc
@@ -0,0 +1,350 @@
+= Migrating from a legacy system =
+:toc:
+
+== Introduction ==
+
+When you migrate to Evergreen, you generally want to migrate the bibliographic
+records and item information that existed in your previous library system. For
+anything more than a few thousand records, you should import the data directly
+into the database rather than use the tools in the staff client. While the data
+that you can extract from your legacy system varies widely, this section
+assumes that you or members of your team have the ability to write scripts and
+are comfortable working with SQL to manipulate data within PostgreSQL. If so,
+then the following section will guide you towards a method of generating common
+data formats so that you can then load the data into the database in bulk.
+
+== Making electronic resources visible in the catalog ==
+Electronic resources generally do not have any call number or item information
+associated with them, and Evergreen enables you to easily make bibliographic
+records visible in the public catalog within sections of the organizational
+unit hierarchy. For example, you can make a set of bibliographic records
+visible only to specific branches that have purchased licenses for the
+corresponding resources, or you can make records representing publicly
+available electronic resources visible to the entire consortium.
+ +Therefore, to make a record visible in the public catalog, modify the records +using your preferred MARC editing approach to ensure the 856 field contains the +following information before loading records for electronic resources into +Evergreen: + +.856 field for electronic resources: indicators and subfields +[width="100%",options="header"] +|============================================================================= +|Attribute | Value | Note +|Indicator 1 |4 | +|Indicator 2 |0 or 1 | +|Subfield u |URL for the electronic resource | +|Subfield y |Text content of the link | +|Subfield z |Public note | Normally displayed after the link +|Subfield 9 |Organizational unit short name | The record will be visible when + a search is performed specifying this organizational unit or one of its + children. You can repeat this subfield as many times as you need. +|============================================================================= + +Once your electronic resource bibliographic records have the required +indicators and subfields for each 856 field in the record, you can proceed to +load the records using either the command-line bulk import method or the MARC +Batch Importer in the staff client. + +== Migrating your bibliographic records == +Convert your MARC21 binary records into the MARCXML format, with one record per +line. You can use the following Python script to achieve this goal; just +install the _pymarc_ library first, and adjust the values of the _input_ and +_output_ variables as needed. 
+
+[source,python]
+------------------------------------------------------------------------------
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+import codecs
+import pymarc
+
+input = 'records_in.mrc'
+output = 'records_out.xml'
+
+reader = pymarc.MARCReader(open(input, 'rb'), to_unicode=True)
+writer = codecs.open(output, 'w', 'utf-8')
+for record in reader:
+    record.leader = record.leader[:9] + 'a' + record.leader[10:]
+    writer.write(pymarc.record_to_xml(record) + "\n")
+------------------------------------------------------------------------------
+
+Once you have a MARCXML file with one record per line, you can load the records
+into your Evergreen system via a staging table in your database.
+
+. Connect to the PostgreSQL database using the _psql_ command. For example:
++
+------------------------------------------------------------------------------
+psql -U <username> -h <hostname> -d <database>
+------------------------------------------------------------------------------
++
+. Create a staging table in the database. The staging table is a temporary
+  location for the raw data that you will load into the production table or
+  tables. Issue the following SQL statement from the _psql_ command line,
+  adjusting the name of the table from _staging_records_import_, if desired:
++
+[source,sql]
+------------------------------------------------------------------------------
+CREATE TABLE staging_records_import (id BIGSERIAL, dest BIGINT, marc TEXT);
+------------------------------------------------------------------------------
++
+. Create a function that will insert the new records into the production table
+  and update the _dest_ column of the staging table.
Adjust
+  "staging_records_import" to match the name of the staging table that you
+  created when you issue the following SQL statement:
++
+[source,sql]
+------------------------------------------------------------------------------
+CREATE OR REPLACE FUNCTION staging_importer() RETURNS VOID AS $$
+DECLARE stage RECORD;
+BEGIN
+FOR stage IN SELECT * FROM staging_records_import ORDER BY id LOOP
+      INSERT INTO biblio.record_entry (marc, last_xact_id) VALUES (stage.marc, 'IMPORT');
+      UPDATE staging_records_import SET dest = currval('biblio.record_entry_id_seq')
+       WHERE id = stage.id;
+   END LOOP;
+  END;
+  $$ LANGUAGE plpgsql;
+------------------------------------------------------------------------------
++
+. Load the data from your MARCXML file into the staging table using the COPY
+  statement, adjusting for the name of the staging table and the location of
+  your MARCXML file:
++
+[source,sql]
+------------------------------------------------------------------------------
+COPY staging_records_import (marc) FROM '/tmp/records_out.xml';
+------------------------------------------------------------------------------
++
+. Load the data from your staging table into the production table by invoking
+  your staging function:
++
+[source,sql]
+------------------------------------------------------------------------------
+SELECT staging_importer();
+------------------------------------------------------------------------------
+
+When you leave out the _id_ value for a _BIGSERIAL_ column, the value in the
+column automatically increments for each new record that you add to the table.
+
+Once you have loaded the records into your Evergreen system, you can search for
+some known records using the staff client to confirm that the import was
+successful.
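Because the COPY step above assumes exactly one well-formed MARCXML record per line, it can be worth sanity-checking the file before loading it, since a stray newline inside a record will corrupt the staging load. The following helper is an illustrative sketch, not part of Evergreen:

```python
import xml.etree.ElementTree as ET

def check_marcxml_lines(path):
    """Verify each non-blank line of the file parses as one MARCXML record.

    Returns the number of records checked; raises ValueError on the first
    malformed line so the export can be fixed before loading.
    """
    count = 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            try:
                root = ET.fromstring(line)
            except ET.ParseError as e:
                raise ValueError(f"line {lineno}: not well-formed XML: {e}")
            # Accept both plain and namespaced <record> roots.
            if not root.tag.endswith("record"):
                raise ValueError(f"line {lineno}: unexpected root <{root.tag}>")
            count += 1
    return count
```

Running `check_marcxml_lines('/tmp/records_out.xml')` before the COPY statement gives an early warning if the conversion script produced anything other than one record per line.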
+
+== Migrating your call numbers, items, and parts ==
+
+'Holdings', comprised of call numbers, items, and parts, are the set of
+objects that enable users to locate and potentially acquire materials from your
+library system.
+
+'Call numbers' connect libraries to bibliographic records. Each call number has a
+'label' associated with a classification scheme such as the Library of Congress
+or Dewey Decimal systems, and can optionally have either or both a label prefix
+and a label suffix. Label prefixes and suffixes do not affect the sort order of
+the label.
+
+'Copies' connect call numbers to particular instances of that resource at a
+particular library. Each copy has a barcode and must exist in a particular item
+location. Other optional attributes of copies include the circulation modifier,
+which may affect whether that item can circulate or for how long it can
+circulate, and OPAC visibility, which controls whether that particular item
+should be visible in the public catalog.
+
+'Parts' provide more granularity for items, primarily to enable patrons to
+place holds on individual parts of a set of items. For example, an encyclopedia
+might be represented by a single bibliographic record, with a single call
+number representing the label for that encyclopedia at a given library, with 26
+items representing each letter of the alphabet, with each item mapped to a
+different part such as _A, B, C, ... Z_.
+
+To migrate this data into your Evergreen system, you will create another
+staging table in the database to hold the raw data for your materials from
+which the actual call numbers, items, and parts will be generated.
+
+Begin by connecting to the PostgreSQL database using the _psql_ command.
For
+example:
+
+------------------------------------------------------------------------------
+psql -U <username> -h <hostname> -d <database>
+------------------------------------------------------------------------------
+
+Create the staging materials table by issuing the following SQL statement:
+
+[source,sql]
+------------------------------------------------------------------------------
+CREATE TABLE staging_materials (
+  bibkey BIGINT,  -- biblio.record_entry_id
+  callnum TEXT, -- call number label
+  callnum_prefix TEXT, -- call number prefix
+  callnum_suffix TEXT, -- call number suffix
+  callnum_class TEXT, -- classification scheme
+  create_date DATE,
+  location TEXT, -- shelving location code
+  item_type TEXT, -- circulation modifier code
+  owning_lib TEXT, -- org unit code
+  barcode TEXT, -- copy barcode
+  part TEXT
+);
+------------------------------------------------------------------------------
+
+For the purposes of this example migration of call numbers, items, and parts,
+we assume that you are able to create a tab-delimited file containing values
+that map to the staging table properties, with one item per line. For example,
+the following 5 lines demonstrate how the file could look for 5 different
+items, with non-applicable attribute values represented by _\N_, and 3 of the
+items connected to a single call number and bibliographic record via parts:
+
+------------------------------------------------------------------------------
+1 QA 76.76 A3 \N \N LC 2012-12-05 STACKS BOOK BR1 30007001122620 \N
+2 GV 161 V8 Ref. Juv.
LC 2010-11-11 KIDS DVD BR2 30007005197073 \N
+3 AE 5 E363 1984 \N \N LC 1984-01-10 REFERENCE BOOK BR1 30007006853385 A
+3 AE 5 E363 1984 \N \N LC 1984-01-10 REFERENCE BOOK BR1 30007006853393 B
+3 AE 5 E363 1984 \N \N LC 1984-01-10 REFERENCE BOOK BR1 30007006853344 C
+------------------------------------------------------------------------------
+
+Once your holdings are in a tab-delimited format--which, for the purposes of
+this example, we will name _holdings.tsv_--you can import the holdings file
+into your staging table. Copy the contents of the holdings file into the
+staging table using the _COPY_ SQL statement:
+
+[source,sql]
+------------------------------------------------------------------------------
+COPY staging_materials (bibkey, callnum, callnum_prefix,
+  callnum_suffix, callnum_class, create_date, location,
+  item_type, owning_lib, barcode, part) FROM 'holdings.tsv';
+------------------------------------------------------------------------------
+
+Generate the item locations you need to represent your holdings:
+
+[source,sql]
+------------------------------------------------------------------------------
+INSERT INTO asset.copy_location (name, owning_lib)
+  SELECT DISTINCT location, 1 FROM staging_materials
+  WHERE NOT EXISTS (
+    SELECT 1 FROM asset.copy_location
+    WHERE name = location
+  );
+------------------------------------------------------------------------------
+
+Generate the circulation modifiers you need to represent your holdings:
+
+[source,sql]
+------------------------------------------------------------------------------
+INSERT INTO config.circ_modifier (code, name, description, sip2_media_type)
+  SELECT DISTINCT item_type, item_type, item_type, '001'
+  FROM staging_materials
+  WHERE NOT EXISTS (
+    SELECT 1 FROM config.circ_modifier
+    WHERE item_type = code
+  );
+------------------------------------------------------------------------------
+
+Generate the call number prefixes and suffixes you need to represent your
+holdings:
+
+[source,sql] +------------------------------------------------------------------------------ +INSERT INTO asset.call_number_prefix (owning_lib, label) + SELECT DISTINCT aou.id, callnum_prefix + FROM staging_materials sm + INNER JOIN actor.org_unit aou + ON aou.shortname = sm.owning_lib + WHERE NOT EXISTS ( + SELECT 1 FROM asset.call_number_prefix acnp + WHERE callnum_prefix = acnp.label + AND aou.id = acnp.owning_lib + ) AND callnum_prefix IS NOT NULL; + +INSERT INTO asset.call_number_suffix (owning_lib, label) + SELECT DISTINCT aou.id, callnum_suffix + FROM staging_materials sm + INNER JOIN actor.org_unit aou + ON aou.shortname = sm.owning_lib + WHERE NOT EXISTS ( + SELECT 1 FROM asset.call_number_suffix acns + WHERE callnum_suffix = acns.label + AND aou.id = acns.owning_lib + ) AND callnum_suffix IS NOT NULL; +------------------------------------------------------------------------------ + +Generate the call numbers for your holdings: + +[source,sql] +------------------------------------------------------------------------------ +INSERT INTO asset.call_number ( + creator, editor, record, owning_lib, label, prefix, suffix, label_class +) + SELECT DISTINCT 1, 1, bibkey, aou.id, callnum, acnp.id, acns.id, + CASE WHEN callnum_class = 'LC' THEN 1 + WHEN callnum_class = 'DEWEY' THEN 2 + END + FROM staging_materials sm + INNER JOIN actor.org_unit aou + ON aou.shortname = owning_lib + INNER JOIN asset.call_number_prefix acnp + ON COALESCE(acnp.label, '') = COALESCE(callnum_prefix, '') + INNER JOIN asset.call_number_suffix acns + ON COALESCE(acns.label, '') = COALESCE(callnum_suffix, '') +; +------------------------------------------------------------------------------ + +Generate the items for your holdings: + +[source,sql] +------------------------------------------------------------------------------ +INSERT INTO asset.copy ( + circ_lib, creator, editor, call_number, location, + loan_duration, fine_level, barcode +) + SELECT DISTINCT aou.id, 1, 1, acn.id, acl.id, 2, 
2, barcode + FROM staging_materials sm + INNER JOIN actor.org_unit aou + ON aou.shortname = sm.owning_lib + INNER JOIN asset.copy_location acl + ON acl.name = sm.location + INNER JOIN asset.call_number acn + ON acn.label = sm.callnum + WHERE acn.deleted IS FALSE +; +------------------------------------------------------------------------------ + +Generate the parts for your holdings. First, create the set of parts that are +required for each record based on your staging materials table: + +[source,sql] +------------------------------------------------------------------------------ +INSERT INTO biblio.monograph_part (record, label) + SELECT DISTINCT bibkey, part + FROM staging_materials sm + WHERE part IS NOT NULL AND NOT EXISTS ( + SELECT 1 FROM biblio.monograph_part bmp + WHERE sm.part = bmp.label + AND sm.bibkey = bmp.record + ); +------------------------------------------------------------------------------ + +Now map the parts for each record to the specific items that you added: + +[source,sql] +------------------------------------------------------------------------------ +INSERT INTO asset.copy_part_map (target_copy, part) + SELECT DISTINCT acp.id, bmp.id + FROM staging_materials sm + INNER JOIN asset.copy acp + ON acp.barcode = sm.barcode + INNER JOIN biblio.monograph_part bmp + ON bmp.record = sm.bibkey + WHERE part IS NOT NULL + AND part = bmp.label + AND acp.deleted IS FALSE + AND NOT EXISTS ( + SELECT 1 FROM asset.copy_part_map + WHERE target_copy = acp.id + AND part = bmp.id + ); +------------------------------------------------------------------------------ + +At this point, you have loaded your bibliographic records, call numbers, call +number prefixes and suffixes, items, and parts, and your records should be +visible to searches in the public catalog within the appropriate organization +unit scope. 
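Before running the _COPY_ of _holdings.tsv_ described above, it can be worth validating the file row by row. This Python sketch is not part of Evergreen; the column order is taken from the staging_materials table, and it parses a tab-delimited row the same way PostgreSQL's COPY will, treating _\N_ as NULL:

```python
# Pre-flight check for holdings.tsv: split each tab-delimited row into the
# staging_materials columns, mapping the \N marker to None (NULL), so that
# malformed rows can be caught before the database COPY.
COLUMNS = ["bibkey", "callnum", "callnum_prefix", "callnum_suffix",
           "callnum_class", "create_date", "location", "item_type",
           "owning_lib", "barcode", "part"]

def parse_holdings_row(line):
    """Parse one row of holdings.tsv into a dict, with \\N mapped to None."""
    fields = line.rstrip("\n").split("\t")
    if len(fields) != len(COLUMNS):
        raise ValueError("expected %d fields, got %d"
                         % (len(COLUMNS), len(fields)))
    return {col: (None if val == r"\N" else val)
            for col, val in zip(COLUMNS, fields)}
```

Running every line of the file through a check like this before loading catches rows with missing or extra tabs, which would otherwise abort the COPY partway through.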
diff --git a/docs/modules/admin_initial_setup/pages/ordering_materials.adoc b/docs/modules/admin_initial_setup/pages/ordering_materials.adoc
new file mode 100644
index 0000000000..eac19dd257
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/ordering_materials.adoc
@@ -0,0 +1,232 @@
+= Ordering materials =
+:toc:
+
+== Introduction ==
+
+Acquisitions allows you to order materials, track the expenditure of your
+collections funds, track invoices, and set up policies for manual claiming. In
+this chapter, we describe how to use the most essential
+functions of acquisitions in the Evergreen system.
+
+== When should libraries use acquisitions? ==
+* When you want to track spending of your collections budget.
+* When you want to use Evergreen to place orders electronically with your
+  vendors.
+* When you want to import large batches of records to quickly get your on-order
+  titles into the system.
+
+If your library simply wants to add on-order items to the catalog so that
+patrons can view and place holds on titles that have not yet arrived,
+acquisitions may be more than you need. Adding those on-order records via
+cataloging is a simpler option that works well for this use case.
+
+Below are the basic administrative settings to be configured to get started
+with acquisitions. At a minimum, a library must configure *Funding Sources*,
+*Funds*, and *Providers* to use acquisitions.
+
+== Managing Funds ==
+
+=== Funding Sources (Required) ===
+Funding sources allow you to specify the sources that contribute monies to your
+fund(s). You can create as few or as many funding sources as you need. These
+can be used to track exact amounts for accounts in your general ledger.
+
+Example funding sources might be:
+
+* A municipal allocation for your materials budget;
+* A trust fund used for collections;
+* A revolving account that is used to replace lost materials;
+* Grant funds to be used for collections.
+
+Funding sources are not tied to fiscal or calendar years, so you can continue
+to add money to the same funding source over multiple years, e.g. County
+Funding. Alternatively, you can name funding sources by year, e.g. County
+Funding 2010 and County Funding 2011, and apply credits each year to the
+matching source.
+
+. To create a funding source, select *Administration -> Acquisitions Administration ->
+  Funding Sources*. Click the *New Funding Source* button. Give
+  the funding source a name, an owning library, and a code. You should also
+  identify the type of currency that is used for the fund.
+. You must add money to the funding source before you can use it. Click the
+  hyperlinked name of the funding source and then click the *Apply Credit*
+  button. Enter the amount you wish to credit to the funding source. The *Note*
+  field is optional.
+
+=== Funds (Required) ===
+Funds allow you to allocate credits toward specific purchases. They typically
+are used to track spending and purchases for specific collections. Some
+libraries may choose to define very broad funds for their collections (e.g.
+children's materials, adult materials) while others may choose to define more
+specific funds (e.g. adult non-fiction DVDs for BR1).
+
+If your library does not wish to track fund accounting, you can create one
+large generic fund and use that fund for all of your purchases.
+
+. To create a fund, select *Administration -> Acquisitions Administration ->
+  Funds*. Click the *New Fund* button. Give the fund a name and code.
+. The *Year* can either be the fiscal or calendar year for the fund.
+. If you are a multi-branch library that will be ordering titles for multiple
+  branches, you should select the system as the owning *Org Unit*, even if this
+  fund will only be used for collections at a specific branch. If you are a
+  one-branch library or if your branches do their own ordering, you can select
+  the branch as the owning *Org Unit*.
+.
Select the *Currency Type* that will be used for this fund.
+. You must select the *Active* checkbox to use the fund.
+. Enter a *Balance Stop Percent*. The balance stop percent prevents you from
+  making purchases when only a specified amount of the fund remains. For example,
+  if you want to spend 95 percent of your funds, leaving a five percent balance
+  in the fund, then you would enter 95 in the field. When the fund reaches its
+  balance stop percent, it will appear in red when you apply funds to copies.
+. Enter a *Balance Warning Percent*. The balance warning percent gives you a
+  warning that the fund is low. You can specify any percent. For example, if you
+  want to spend 90 percent of your funds and be warned when the fund has only 10
+  percent of its balance remaining, then enter 90 in the field. When the fund
+  reaches its balance warning percent, it will appear in yellow when you apply
+  funds to copies.
+. Check the *Propagate* box to propagate funds. When you propagate a fund, the
+  system will create a new fund for the following fiscal year with the same
+  parameters as your current fund. All of the settings transfer except for the
+  year and the amount of money in the fund. Propagation occurs during the fiscal
+  year close-out operation.
+. Check the *Rollover* box if you want to roll over remaining encumbrances and
+  funds into the same fund next year. If you need the ability to roll over
+  encumbrances without rolling over funds, go to the *Library Settings Editor*
+  (*Administration -> Local Administration -> Library Settings Editor*) and set *Allow
+  funds to be rolled over without bringing the money along* to *True*.
+. You must add money to the fund before you can begin using it. Click the
+  hyperlinked name of the fund. Click the *Create Allocation* button. Select a
+  *Funding Source* from which the allocation will be drawn and then enter an
+  amount for the allocation. The *Note* field is optional.
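The balance stop and warning percent behavior described above can be summarized in a few lines. This Python sketch is purely illustrative; the function and its return values are invented for this example, and Evergreen performs these checks internally:

```python
# Illustration of the balance stop / warning percent logic described above.
# The function name and return values are made up for this sketch; this is
# not Evergreen's implementation.
def fund_status(total_allocated, spent, warning_percent, stop_percent):
    """Classify a fund based on how much of its allocation has been spent."""
    if total_allocated <= 0:
        raise ValueError("fund has no allocation")
    percent_spent = 100.0 * spent / total_allocated
    if percent_spent >= stop_percent:
        return "stop"      # shown in red; further purchases are blocked
    if percent_spent >= warning_percent:
        return "warning"   # shown in yellow; the fund is running low
    return "ok"
```

With the warning percent at 90 and the stop percent at 95, as in the examples above, a fund that has spent 92 percent of its allocation shows a warning, and one that has spent 96 percent blocks purchases.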
+
+=== Fund Tags (Optional) ===
+You can apply tags to funds so that you can group funds for easy reporting. For
+example, suppose you have three funds for children’s materials: Children's Board Books,
+Children's DVDs, and Children's CDs. Assign a fund tag of children's to each
+fund. When you need to report on the amount that has been spent on all
+children's materials, you can run a report on the fund tag to find total
+expenditures on children's materials rather than reporting on each individual
+fund.
+
+. To create a fund tag, select *Administration -> Acquisitions Administration ->
+  Fund Tags*. Click the *New Fund Tag* button. Select an owning library and
+  add the name for the fund tag.
+. To apply a fund tag to a fund, select *Administration -> Acquisitions Administration ->
+  Funds*. Click on the hyperlinked name for the fund. Click the
+  *Tags* tab and then click the *Add Tag* button. Select the tag from the
+  dropdown menu.
+
+For convenience when propagating or rolling over a fund for a new fiscal year,
+fund tags will be copied from the current fund to the new year's fund.
+
+== Ordering ==
+
+=== Providers (Required) ===
+Providers are the vendors from whom you order titles.
+
+. To add a provider record, select *Administration -> Acquisitions Administration ->
+  Providers*.
+. Enter information about the provider. At a minimum, you need to add a
+  *Provider Name*, *Code*, *Owner*, and *Currency*. You also need to select the
+  *Active* checkbox to use the provider.
+
+=== Distribution Formulas (Optional) ===
+If you are ordering for a multi-branch library system, distribution formulas
+are a useful way to specify the number of items that should be distributed to
+specific branches and item locations.
+
+. To create a distribution formula, select *Administration -> Acquisitions
+  Administration -> Distribution Formulas*. Click the *New Formula* button. Enter
+  the formula name and select the owning library. Ignore the *Skip Count* field.
+. Click *New Entry*.
Select an Owning Library from the drop-down menu. This
+  indicates the branch that will receive the items.
+. Select a Shelving Location from the drop-down menu.
+. In the Item Count field, enter the number of items that should be distributed
+  to that branch and copy location. You can enter the number or use the arrows on
+  the right side of the field.
+. Keep adding entries until the distribution formula is complete.
+
+=== Helpful acquisitions Library Settings ===
+There are several acquisitions Library Settings available that will help with
+acquisitions workflow. These settings can be found at *Administration -> Local
+Administration -> Library Settings Editor*.
+
+* Default circulation modifier - Automatically applies a default circulation
+  modifier to all of your acquisitions items. Useful if you use a specific
+  circulation modifier for on-order items.
+* Default copy location - Automatically applies a default item location (e.g.
+  On Order) to acquisitions items.
+* Temporary barcode prefix - Applies a unique prefix to the barcode that is
+  automatically generated during the acquisitions process.
+* Temporary call number prefix - Applies a unique prefix to the start of the
+  call number that is automatically generated during the acquisitions process.
+
+=== Preparing for order record loading ===
+If your library is planning to upload order records in a batch, you need to add
+some information to your provider records so that Evergreen knows how to map
+the item data contained in the order record.
+
+. Retrieve the record for the provider that has supplied the order records by
+  selecting *Administration -> Acquisitions Administration -> Providers*. Click on
+  the hyperlinked Provider name.
+. In the top frame, add the MARC tag that contains your holdings data in the
+  *Holdings Tag* field (this tag can also be entered at the time you create the
+  provider record).
+.
To map the tag's subfields to the appropriate copy data, click the *Holding
+  Subfield* tab. Click the *New Holding Subfield* button and select the copy
+  data that you are mapping. Add the subfield that contains that data and click
+  *Save*.
++
+image::media/order_record_loading.png[]
++
+. If your vendor is sending other data in a MARC tag that needs to be mapped to
+a field in acquisitions, you can do so by clicking the Attribute Definitions
+tab. As an example, if you need to import the PO Name, you could set up an
+attribute definition by adding an XPath similar to:
++
+------------------------------------------------------------------------------
+code => purchase_order
+xpath => //*[@tag="962"]/*[@code="p"]
+Is Identifier => false
+------------------------------------------------------------------------------
++
+where 962 is the holdings tag and p is the subfield that contains the PO Name.
+
+=== Preparing to send electronic orders from Evergreen ===
+If your library wants to transmit electronic order information to a vendor, you
+will need to configure your server to use EDI. You need to install the EDI
+translator and EDI scripts on your server by following the instructions in the
+command line system administration manual.
+
+Configure your provider's EDI information by selecting *Administration ->
+Acquisitions Administration -> EDI Accounts*. Click the *New Account* button. Give the
+account a name in the *Label* box.
+
+. *Host* is the vendor-assigned FTP/SFTP/SSH hostname.
+. *Username* is the vendor-assigned FTP/SFTP/SSH username.
+. *Password* is the vendor-assigned FTP/SFTP/SSH password.
+. *Account* enables you to add a supplemental password for
+  entry to a remote system after login has been completed. This field is
+  optional for the ILS but may be required by your provider.
+. *Owner* is the organizational unit that owns the EDI account.
+. *Last Activity* is the date of last activity for the account.
+.
*Provider* is a link to the codes for the Provider record.
+. *Path* is the path on the vendor’s server where Evergreen will deposit its
+  outgoing order files.
+. *Incoming Directory* is the path on the vendor’s server where Evergreen
+  will retrieve incoming order responses and invoices.
+. *Vendor Account Number* is the vendor-assigned account number.
+. *Vendor Assigned Code* is usually a sub-account designation. It can be used
+  with or without the Vendor Account Number.
+
+You now need to add this *EDI Account* and the *SAN* code to the provider's record.
+
+. Select *Administration -> Acquisitions Administration -> Providers*.
+. Click the hyperlinked Provider name.
+. Select the account you just created in the *EDI Default* field.
+. Add the vendor-provided SAN code to the *SAN* field.
+
+The last step is to add your library's SAN code to Evergreen.
+
+. Select *Administration -> Server Administration -> Organizational Units*.
+. Select your library from the organizational hierarchy in the left pane.
+. Click the *Addresses* tab and add your library's SAN code to the *SAN* field.
diff --git a/docs/modules/admin_initial_setup/pages/troubleshooting_tpac.adoc b/docs/modules/admin_initial_setup/pages/troubleshooting_tpac.adoc
new file mode 100644
index 0000000000..fa2530e0ff
--- /dev/null
+++ b/docs/modules/admin_initial_setup/pages/troubleshooting_tpac.adoc
@@ -0,0 +1,19 @@
+= Troubleshooting TPAC errors =
+:toc:
+
+If there is a problem such as a TT syntax error, it generally shows up as an
+ugly server failure page. If you check the Apache error logs, you will probably
+find some solid clues about the reason for the failure. In the
+following example, the error message identifies the file in which the problem
+occurred as well as the relevant line numbers.
+ +Example error message in Apache error logs: + +---- +bash# grep "template error" /var/log/apache2/error_log +[Tue Dec 06 02:12:09 2011] [warn] [client 127.0.0.1] egweb: template error: + file error - parse error - opac/parts/record/summary.tt2 line 112-121: + unexpected token (!=)\n [% last_cn = 0;\n FOR copy_info IN + ctx.copies;\n callnum = copy_info.call_number_label;\n +---- + diff --git a/docs/modules/api/_attributes.adoc b/docs/modules/api/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/api/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/api/nav.adoc b/docs/modules/api/nav.adoc new file mode 100644 index 0000000000..2ce92424f3 --- /dev/null +++ b/docs/modules/api/nav.adoc @@ -0,0 +1,5 @@ +* xref:api:introduction.adoc[Getting Data from Evergreen] +** xref:development:data_supercat.adoc[Using Supercat] +** xref:development:data_unapi.adoc[Using UnAPI] +** xref:development:data_opensearch.adoc[Using Opensearch as a developer] + diff --git a/docs/modules/api/pages/_attributes.adoc b/docs/modules/api/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/api/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/api/pages/introduction.adoc b/docs/modules/api/pages/introduction.adoc new file mode 100644 index 0000000000..1eb3429a25 --- /dev/null +++ b/docs/modules/api/pages/introduction.adoc @@ -0,0 +1,6 @@ += Introduction = + +You may be interested in re-using data from your Evergreen installation in +another application. This part describes several methods to get the data +you need. 
+
diff --git a/docs/modules/appendix/_attributes.adoc b/docs/modules/appendix/_attributes.adoc
new file mode 100644
index 0000000000..dec438a296
--- /dev/null
+++ b/docs/modules/appendix/_attributes.adoc
@@ -0,0 +1,4 @@
+:attachmentsdir: {moduledir}/assets/attachments
+:examplesdir: {moduledir}/examples
+:imagesdir: {moduledir}/assets/images
+:partialsdir: {moduledir}/pages/_partials
diff --git a/docs/modules/appendix/nav.adoc b/docs/modules/appendix/nav.adoc
new file mode 100644
index 0000000000..88de8ad056
--- /dev/null
+++ b/docs/modules/appendix/nav.adoc
@@ -0,0 +1,4 @@
+* xref:shared:attributions.adoc[Appendix A. Attributions]
+* xref:shared:licensing.adoc[Appendix B. Licensing]
+* xref:appendix:glossary.adoc[Glossary]
+* xref:shared:index.adoc[Index]
diff --git a/docs/modules/appendix/pages/_attributes.adoc b/docs/modules/appendix/pages/_attributes.adoc
new file mode 100644
index 0000000000..fb982443d7
--- /dev/null
+++ b/docs/modules/appendix/pages/_attributes.adoc
@@ -0,0 +1,2 @@
+:moduledir: ..
+include::{moduledir}/_attributes.adoc[]
diff --git a/docs/modules/appendix/pages/glossary.adoc b/docs/modules/appendix/pages/glossary.adoc
new file mode 100644
index 0000000000..a95e4bfe16
--- /dev/null
+++ b/docs/modules/appendix/pages/glossary.adoc
@@ -0,0 +1,252 @@
+[glossary]
+Evergreen Glossary
+==================
+
+xref:A[A] xref:B[B] xref:C[C] xref:D[D] xref:E[E] xref:F[F] xref:G[G] xref:H[H] xref:I[I] xref:J[J] xref:K[K] xref:L[L] xref:M[M] xref:N[N] xref:O[O] xref:P[P] xref:Q[Q] xref:R[R] xref:S[S] xref:T[T] xref:U[U] xref:V[V] xref:W[W] xref:X[X] xref:Y[Y] xref:Z[Z]
+
+[glossary]
+[[A]]AACR2 (Anglo-American Cataloguing Rules, Second Edition)::
+  AACR2 is a set of cataloging rules for descriptive cataloging of various types of resources. http://www.aacr2.org/
+Acquisitions::
+  Processes related to ordering materials and managing expenditures.
+Age Protection::
+  Allows libraries to prevent holds on new books (on an item-by-item basis) from outside the owning library's branch or system for a designated amount of time.
+Apache::
+  Open-source web server software used to serve both static content and dynamic web pages in a secure and reliable way. More information is available at http://apache.org.
+Authority Record::
+  Records used to control the contents of MARC fields.
+[[B]]Balance stop percent::
+  A setting in acquisitions that prevents you from making purchases when only a specified amount of the fund remains.
+Barcode::
+  The code/number attached to the item. This is not the database ID. Barcodes are added to items to facilitate the checking in and out of an item. Barcodes can be changed as needed. Physical barcodes that can be placed on items can follow several different barcode symbologies.
+Bibliographic record::
+  The record that contains data about a work, such as title, author and copyright date.
+Booking::
+  Processes relating to reserving cataloged and non-bibliographic items.
+Brick::
+  A brick is a unit consisting of one or more servers. It refers to a set of servers with ejabberd, Apache, and all applicable Evergreen services. It is possible to run all the software on a single server, creating a “single server brick.” Typically, larger installations will have more than one such brick and, hence, be more robust.
+Buckets::
+  This is a container of items. See also Record Buckets and Item Buckets.
+[[C]]Call number::
+  An item's call number is a string of letters and/or numbers that work like map coordinates to describe where in a library a particular item "lives."
+Catalog::
+  The database of titles and objects.
+Cataloging::
+  The process of adding materials to be circulated to the system.
+Check-in::
+  The process of returning an item.
+Check-out::
+  The process of loaning an item to a patron.
+Circulation::
+  The process of loaning an item to an individual.
+Circulating library::
+  The library which has checked out the item.
+Circulation library::
+  The library which is the home of the item.
+Circulation limit sets::
+  Refines circulation policies by limiting the number of items that users can check out.
+Circulation modifiers::
+  Circulation modifiers pull together Loan Duration, Renewal Limit, Fine Level, Max Fine, and Profile Permission Group to create circulation rules for different types of materials. Circulation Modifiers are also used to determine Hold Policies.
+Cloud Computing::
+  The use of a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or computer. Terms such as Software as a Service (SaaS) refer to these kinds of systems. ILS vendors offering hosting where they manage the servers used by the ILS and provide access via the internet is an example of cloud computing.
+Commit::
+  To make code changes to the software code permanent. In open source software development, the ability to commit is usually limited to a core group.
+Community::
+  Community in the open source world of software development and use refers to the users and developers who work in concert to develop, communicate, and collaborate to develop the software.
+Compiled::
+  Compiled software is software that has been translated to machine code for use. Compiled software usually targets a specific computer architecture. The code cannot be read by humans.
+Consortium::
+  A consortium is an organization of two or more individuals, companies, libraries, consortiums, etc. formed to undertake an enterprise beyond the resources of any one member.
+Consortial Library System (CLS)::
+  An ILS designed to serve a consortium. A CLS is designed for resource sharing between all members of the consortium, and it provides a union catalog for all items in the consortium.
+[[copy]]Copy::
+  see <>
+[[D]]Default Search Library::
+  The default search library setting determines what library is searched from the advanced search screen and portal page by default. Manual selection of a search library will override it. One recommendation is to set the search library to the highest point you would normally want to search.
+Distribution formulas::
+  Used to specify the number of copies that should be distributed to specific branches and item locations in Acquisitions.
+Due date::
+  The due date is the day on or before which an item must be returned to the library in order to avoid being charged an overdue fine.
+[[E]]ejabberd::
+  ejabberd stands for Erlang Jabber Daemon. This is the software that runs <>. ejabberd is used to exchange data between servers.
+Electronic data interchange (EDI)::
+  Transmission of data between organizations using electronic means. This is used for Acquisitions.
+Evergreen::
+  Evergreen is an open source ILS designed to handle the processing of a geographically dispersed, resource-sharing library network.
+[[F]]FIFO (First In First Out)::
+  In a FIFO environment, holds are filled in the order that they are placed.
+FUD (Fear, Uncertainty, Doubt)::
+  FUD is a marketing strategy that attempts to instill Fear, Uncertainty, and/or Doubt about a competitor's product.
+Fund tags::
+  Tags used in acquisitions to allow you to group Funds.
+Funding sources::
+  Sources of the monies to fund acquisitions of materials.
+Funds::
+  Allocations of money used for purchases.
+FRBR (Functional Requirements for Bibliographic Records)::
+  See https://www.loc.gov/cds/downloads/FRBR.PDF[Library of Congress FRBR documentation]
+[[G]]Git::
+  Git is version control software for tracking changes in the code. It is designed to work with multiple developers.
+GNU::
+  GNU is a recursive acronym for "GNU's Not Unix". GNU is an open source Unix-like operating system.
+GNU GPL version 2 (GNU General Public License version 2)::
+  GNU GPL Version 2 is the license under which Evergreen is licensed. GNU GPL version 2 is a copyleft license, which means that derivative works must be open source and distributed under the same license terms. See https://www.gnu.org/licenses/old-licenses/gpl-2.0.html for complete license information.
+[[H]]Hatch::
+  An additional program that is installed as an extension of your browser to extend printing functionality with Evergreen.
+Hold::
+  The exclusive right for a patron to check out a specific item.
+Hold boundaries::
+  Define which organizational units are available to fill specific holds.
+Holdings import profile::
+  Identifies the <> definition.
+Holding subfield::
+  Used in the acquisitions module to map subfields to the appropriate item data.
+[[I]]ICL (Inter-Consortium Loans)::
+  Inter-Consortium Loans are like ILLs, except that the loan happens entirely within the Consortium.
+[[ILS]]ILS (Integrated Library System)::
+  The Integrated Library System is a set of applications which perform the business and technical aspects of library management, including but not limited to acquisitions, cataloging, circulation, and booking.
+ILL (Inter-Library Loan)::
+  Inter-Library Loan is the process by which one library borrows materials for a patron from another library.
+[[IIA]]Import item attributes::
+  Used to map the data in your holdings tag to fields in the item record during a MARC import.
+Insufficient quality fall-through profile::
+  A back-up merge profile to be used for importing if an incoming record does not meet the standards of the minimum quality ratio.
+ISBN (International Standard Book Number)::
+  The ISBN is a publisher product number that has been used in the book supply industry since 1968. A published book that is a separate product gets its own ISBN. ISBNs are either 10 digits or 13 digits long.
They may contain information on the country of publication, the publisher, title, volume or edition of a title. +ISSN (International Standard Serial Number):: + The International Standard Serial Number is a unique 8-digit number assigned by the International Serials Data System to identify a specific serial title. +[[item]]Item:: + A specific copy of a title; the piece that is barcoded and circulates. +Item barcode:: + Item barcodes uniquely identify each specific item entered into the catalog. +Item Buckets:: + This is a container of individual items. +Item Status:: + Item Status allows you to see the status of an item without having to go to the actual title record. Item Status is an integral part of Evergreen and how it works. +[[J]][[jabber]]Jabber:: + The communications protocol used for client-server message passing within Evergreen. Now known as <>, it was originally named "Jabber." +Juvenile flag:: + User setting used to specify if a user is a juvenile user for circulation purposes. +[[K]]KPAC (Kids' OPAC):: + Alternate version of the Template Toolkit OPAC that is kid-friendly. +[[L]]LaunchPad:: + Launchpad is an open source suite of tools that help people and teams to work together on software projects. Launchpad brings together bug reports, wishlist ideas, translations, and blueprints for future development of Evergreen. https://launchpad.net/evergreen +LCCN (Library of Congress Control Number):: + The LCCN is a system of numbering catalog records at the Library of Congress. +LMS (Library Management System):: + see <> +Loan duration:: + Loan duration (also sometimes referred to as "loan period") is the length of time a given type of material can circulate. +[[M]]MARC (Machine Readable Cataloging):: + The MARC formats are standards for the representation and communication of bibliographic and related information in machine-readable form. +MARC batch export:: + Mass exporting of MARC records out of a library system. +MARC batch import:: + Mass importing of MARC records into a library system. 
+MARCXML:: + Framework for working with MARC data in an XML environment. +Match score:: + Indicates the relative importance of that match point as Evergreen evaluates an incoming record against an existing record. +Minimum quality ratio:: + Used to set the acceptable level of quality for a record to be imported. +[[N]]Non-Cataloged:: + Items that have not been cataloged. +[[O]]OPAC (Online Public Access Catalog):: + An OPAC is an online interface to the database of a library's holdings, used to find resources in their collections. It is typically searchable by keyword, title, author, subject, or call number. The public view of the catalog. +OpenSRF (Open Scalable Request Framework):: + Acronym for Open Scalable Request Framework (pronounced 'open surf'). An enterprise-class service request framework. Its purpose is to serve as a robust message routing network upon which one may build complex, scalable applications. To that end, OpenSRF attempts to be invisible to the application developer, while providing transparent load balancing and failover with minimal overhead. +Organizational units (Org Unit):: + Organizational units are the specific instances of the organizational unit types that make up your library's hierarchy. +Organization unit type:: + The organization types in the hierarchy of a library system. +Overlay/merge profiles:: + During a MARC import, used to identify which fields should be replaced, which should be preserved, and which should be added to the record. +Owning library:: + The library which has purchased a particular item and created the volume and item records. +[[P]]Parent organizational unit:: + An organizational unit one level above, whose policies may be inherited by its child units. +Parts:: + Provide more granularity for copies, primarily to enable patrons to place holds on individual parts of a set of items. +Patron:: + A user of the ILS. Patrons in Evergreen can be both staff and public users. 
+Patron barcode / library card number:: + Patrons are uniquely identified by their library card barcode number. +Permission Groups:: + A grouping of permissions granted to a group of individuals, e.g. patrons, cataloging, circulation, administration. Permission Groups also set the depth and grantability of permissions. +Pickup library:: + Library designated as the location where requested material is to be picked up. +PostgreSQL:: + A popular open-source object-relational database management system that underpins Evergreen software. +Preferred Library:: + The library that is used to show items and URIs regardless of the library searched. It is recommended to set this to your workstation library so that local copies always show up first in search results. +Print Templates:: + Templates that Evergreen uses to print various receipts and tables. +Printer Settings:: + Settings in Evergreen for selected printers. This is a Hatch functionality. +Propagate funds:: + Create a new fund for the following fiscal year with the same parameters as your current fund. +Providers:: + Vendors from whom you order your materials. Set in the Acquisitions module. +Purchase Order (PO):: + A document issued by a buyer to a vendor, indicating types, quantities, and prices of materials. +[[Q]]Quality metrics:: + Provide a mechanism for Evergreen to measure the quality of records and to make importing decisions based on quality. +[[R]]RDA (Resource Description & Access):: + RDA is a set of cataloging standards and guidelines based on FRBR and FRAD. RDA is the successor to AACR2. http://rdatoolkit.org/ +Record Bucket:: + This is a container of title records. +Record match sets:: + When importing records, this identifies how Evergreen should match incoming records to existing records in the system. +Recurring fine:: + Recurring fine is the official term for daily or other regularly accruing overdue fines. +Register Patron:: + The process of adding a patron account within Evergreen. 
+Rollover:: + Used to roll over remaining encumbrances and funds into the same fund the following year. +[[S]]SAN (Standard Address Number):: + SAN is an identification code for electronic communication within the publishing industry. A SAN uniquely identifies the address of an organization or location. +Shelving location:: + Shelving location is the area within the library where a given item is shelved. +SIP (Standard Interchange Protocol):: + SIP is a communications protocol used within Evergreen for transferring data to and from third-party devices, such as RFID and barcode scanners that handle patron and library material information. Version 2.0 (also known as "SIP2") is the current standard. It was originally developed by the 3M Corporation. +[[SRU]]SRU (Search & Retrieve URL):: + Acronym for Search & Retrieve URL Service. SRU is a search protocol used in web search and retrieval. It expresses queries in Contextual Query Language (CQL) and transmits them as a URL, returning XML data as if it were a web page. +Staff client:: + The graphical user interface used by library workers to interact with the Evergreen system. Staff use the staff client to access administration, acquisitions, circulation, and cataloging functions. +Standing penalties:: + Serve as alerts and blocks when patron records have met certain criteria, commonly excessive overdue materials or fines; standing penalty blocks will prevent circulation and hold transactions. +Statistical categories:: + Allow libraries to associate locally interesting data with patrons and holdings. Also known as stat cats. +[[T]]Template Toolkit (TT):: + A template processing system written in Perl. +TPAC:: + Evergreen's Template Toolkit-based OPAC. The web-based public interface in Evergreen written using functionality from the Template Toolkit. +[[U]]URI:: + Uniform Resource Identifier. A URI is a string of characters that identifies a logical or physical resource. 
Examples include URLs and URNs. +URL (Uniform Resource Locator):: + This is a web address. +URN (Uniform Resource Name):: + This is a standard name used to identify a resource. Examples of URNs are ISBN, ISSN, and UPC. +UPC (Universal Product Code):: + The UPC is a number uniquely assigned to an item by the manufacturer. +User Activity Type:: + Different types of activities users perform in Evergreen. Examples: login, verification of account. +[[V]]Vandelay:: + The original name of the MARC Batch Import/Export tool. +[[W]]Wiki:: + The Evergreen wiki can be found at https://wiki.evergreen-ils.org. The Evergreen wiki is a knowledge base of information on Evergreen. +Workstation:: + The unique name associated with a specific computer and org unit. +[[X]]XML (eXtensible Markup Language):: + Acronym for eXtensible Markup Language, a subset of SGML. XML is a set of rules for encoding information in a way that is both human-readable and machine-readable. It is primarily used to define documents but can also be used to define arbitrary data structures. It was originally defined by the World Wide Web Consortium (W3C). +[[XMPP]]XMPP (Extensible Messaging and Presence Protocol):: + The open-standard communications protocol (based on XML) used for client-server message passing within Evergreen. It supports the concept of a consistent domain of message types that flow between software applications, possibly on different operating systems and architectures. More information is available at http://xmpp.org. + See Also: <>. +xpath:: + The XML Path Language, a query language based on a tree representation of an XML document. It is used to programmatically select nodes from an XML document and to do minor computation involving strings, numbers, and Boolean values. It allows you to identify parts of the XML document tree, to navigate around the tree, and to uniquely select nodes. The current version is XPath 2.0. It was originally defined by the World Wide Web Consortium (W3C). 
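The xpath entry above can be illustrated with a short, self-contained sketch using Python's standard-library ElementTree, which supports a subset of XPath. The MARCXML-like fragment and its tag/subfield values are illustrative assumptions, not a real record:

```python
import xml.etree.ElementTree as ET

# A tiny MARCXML-like fragment (illustrative only, not a full MARC record)
doc = ET.fromstring("""
<record>
  <datafield tag="245">
    <subfield code="a">Evergreen in Action</subfield>
  </datafield>
  <datafield tag="020">
    <subfield code="a">9781782162384</subfield>
  </datafield>
</record>
""")

# An XPath-style expression selects the 245$a (title) subfield by
# navigating the tree and filtering on attribute values.
title = doc.find("./datafield[@tag='245']/subfield[@code='a']").text
print(title)  # Evergreen in Action
```

Full XPath implementations (for example, the one used by XSLT processors) add functions for string, number, and Boolean computation beyond the simple node selection shown here.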
+[[Y]]YAOUS:: + Yet Another Organization Unit Setting +[[Z]]Z39.50:: + An international standard client/server protocol for communication between computer systems, primarily library- and information-related systems. + See Also: <> + diff --git a/docs/modules/cataloging/_attributes.adoc b/docs/modules/cataloging/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/cataloging/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records1.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records1.jpg new file mode 100644 index 0000000000..ae0962afbb Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records1.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records10.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records10.jpg new file mode 100644 index 0000000000..089c9a1ed2 Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records10.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records12.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records12.jpg new file mode 100644 index 0000000000..a8c0d70d4b Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records12.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records14.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records14.jpg new file mode 100644 index 
0000000000..f192db2533 Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records14.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records15.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records15.jpg new file mode 100644 index 0000000000..06fbf2bd89 Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records15.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records2.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records2.jpg new file mode 100644 index 0000000000..b022f873ae Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records2.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records3.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records3.jpg new file mode 100644 index 0000000000..124712c72e Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records3.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records4.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records4.jpg new file mode 100644 index 0000000000..4d556a1bfb Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records4.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records5.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records5.jpg new file mode 100644 index 0000000000..3410561afa Binary files /dev/null and 
b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records5.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records6.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records6.jpg new file mode 100644 index 0000000000..b0dfc8a7a1 Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records6.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records7.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records7.jpg new file mode 100644 index 0000000000..e26303ea81 Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records7.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records8.jpg b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records8.jpg new file mode 100644 index 0000000000..c8cf321bc2 Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/Batch_Importing_MARC_Records8.jpg differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/batch_import_profile.png b/docs/modules/cataloging/assets/images/batch_importing_MARC/batch_import_profile.png new file mode 100644 index 0000000000..748d36b285 Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/batch_import_profile.png differ diff --git a/docs/modules/cataloging/assets/images/batch_importing_MARC/marc_batch_import_acq_overlay.png b/docs/modules/cataloging/assets/images/batch_importing_MARC/marc_batch_import_acq_overlay.png new file mode 100644 index 0000000000..dfca5c8a9f Binary files /dev/null and b/docs/modules/cataloging/assets/images/batch_importing_MARC/marc_batch_import_acq_overlay.png differ diff --git 
a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records1.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records1.jpg new file mode 100644 index 0000000000..ae0962afbb Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records1.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records10.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records10.jpg new file mode 100644 index 0000000000..089c9a1ed2 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records10.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records12.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records12.jpg new file mode 100644 index 0000000000..a8c0d70d4b Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records14.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records14.jpg new file mode 100644 index 0000000000..f192db2533 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records14.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records15.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records15.jpg new file mode 100644 index 0000000000..06fbf2bd89 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records15.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records2.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records2.jpg new file mode 100644 index 0000000000..b022f873ae Binary files /dev/null and 
b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records2.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records3.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records3.jpg new file mode 100644 index 0000000000..124712c72e Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records3.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records4.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records4.jpg new file mode 100644 index 0000000000..4d556a1bfb Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records4.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records5.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records5.jpg new file mode 100644 index 0000000000..3410561afa Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records5.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records6.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records6.jpg new file mode 100644 index 0000000000..b0dfc8a7a1 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records6.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records7.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records7.jpg new file mode 100644 index 0000000000..e26303ea81 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records7.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records8.jpg b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records8.jpg new file mode 100644 index 0000000000..c8cf321bc2 Binary files 
/dev/null and b/docs/modules/cataloging/assets/images/media/Batch_Importing_MARC_Records8.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Holdings_Editor_Defaults_Tab.png b/docs/modules/cataloging/assets/images/media/Holdings_Editor_Defaults_Tab.png new file mode 100644 index 0000000000..92d08f500a Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Holdings_Editor_Defaults_Tab.png differ diff --git a/docs/modules/cataloging/assets/images/media/Holdings_Editor_Hide_Display_Defaults.png b/docs/modules/cataloging/assets/images/media/Holdings_Editor_Hide_Display_Defaults.png new file mode 100644 index 0000000000..45c3d774a3 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Holdings_Editor_Hide_Display_Defaults.png differ diff --git a/docs/modules/cataloging/assets/images/media/Link_Checker1.jpg b/docs/modules/cataloging/assets/images/media/Link_Checker1.jpg new file mode 100644 index 0000000000..b703b6336f Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Link_Checker1.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Link_Checker2.jpg b/docs/modules/cataloging/assets/images/media/Link_Checker2.jpg new file mode 100644 index 0000000000..6477f42090 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Link_Checker2.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Link_Checker6.jpg b/docs/modules/cataloging/assets/images/media/Link_Checker6.jpg new file mode 100644 index 0000000000..e4222a1e7e Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Link_Checker6.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches1.jpg b/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches1.jpg new file mode 100644 index 0000000000..55f9cb0ece Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches1.jpg differ diff --git 
a/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches2.jpg b/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches2.jpg new file mode 100644 index 0000000000..707a43df39 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches2.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches3.jpg b/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches3.jpg new file mode 100644 index 0000000000..f9a64356b6 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches3.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches4.jpg b/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches4.jpg new file mode 100644 index 0000000000..6bce86ebb9 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Locate_Z39_50_Matches4.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/MARC_Tag_Tables_Detail.PNG b/docs/modules/cataloging/assets/images/media/MARC_Tag_Tables_Detail.PNG new file mode 100644 index 0000000000..21f192cc00 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/MARC_Tag_Tables_Detail.PNG differ diff --git a/docs/modules/cataloging/assets/images/media/MARC_Tag_Tables_Grid.PNG b/docs/modules/cataloging/assets/images/media/MARC_Tag_Tables_Grid.PNG new file mode 100644 index 0000000000..28a2a72319 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/MARC_Tag_Tables_Grid.PNG differ diff --git a/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import1.jpg b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import1.jpg new file mode 100644 index 0000000000..33e96b56e1 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import1.jpg differ diff --git 
a/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import2.jpg b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import2.jpg new file mode 100644 index 0000000000..32da4092cc Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import2.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import3.jpg b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import3.jpg new file mode 100644 index 0000000000..e6b563b620 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import3.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import4.jpg b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import4.jpg new file mode 100644 index 0000000000..bb6049bb87 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import4.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import5.jpg b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import5.jpg new file mode 100644 index 0000000000..96c2169d28 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import5.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import6.jpg b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import6.jpg new file mode 100644 index 0000000000..37b9a8ed15 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/Overlay_Existing_Record_via_Z39_50_Import6.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/batch_import_profile.png 
b/docs/modules/cataloging/assets/images/media/batch_import_profile.png new file mode 100644 index 0000000000..748d36b285 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/batch_import_profile.png differ diff --git a/docs/modules/cataloging/assets/images/media/conj10.jpg b/docs/modules/cataloging/assets/images/media/conj10.jpg new file mode 100644 index 0000000000..1365a92bd5 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/conj10.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/conj2.jpg b/docs/modules/cataloging/assets/images/media/conj2.jpg new file mode 100644 index 0000000000..d5d05ce9fa Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/conj2.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/conj3.jpg b/docs/modules/cataloging/assets/images/media/conj3.jpg new file mode 100644 index 0000000000..75c8d02ff0 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/conj3.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/conj4.jpg b/docs/modules/cataloging/assets/images/media/conj4.jpg new file mode 100644 index 0000000000..9006f354fc Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/conj4.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/conj5.jpg b/docs/modules/cataloging/assets/images/media/conj5.jpg new file mode 100644 index 0000000000..7e3b8f7b35 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/conj5.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/conjoined_menu_markfor.png b/docs/modules/cataloging/assets/images/media/conjoined_menu_markfor.png new file mode 100644 index 0000000000..eccec72bba Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/conjoined_menu_markfor.png differ diff --git a/docs/modules/cataloging/assets/images/media/conjoined_opac.png b/docs/modules/cataloging/assets/images/media/conjoined_opac.png 
new file mode 100644 index 0000000000..9e1958f691 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/conjoined_opac.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-2.png new file mode 100644 index 0000000000..b602dfe619 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-cat-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-cat-1.png new file mode 100644 index 0000000000..0945569bce Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-cat-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-cat-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-cat-2.png new file mode 100644 index 0000000000..6f7d7e0fab Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-cat-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-cat-3.png b/docs/modules/cataloging/assets/images/media/copy-bucket-cat-3.png new file mode 100644 index 0000000000..00906f2d1a Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-cat-3.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-delete-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-delete-1.png new file mode 100644 index 0000000000..226c5dca51 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-delete-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-delete-copy-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-delete-copy-1.png new file mode 100644 index 0000000000..ec0d88c438 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-delete-copy-1.png differ diff --git 
a/docs/modules/cataloging/assets/images/media/copy-bucket-delete-copy-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-delete-copy-2.png new file mode 100644 index 0000000000..f8cfa1e535 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-delete-copy-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-edit-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-1.png new file mode 100644 index 0000000000..9d2292c8ea Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-edit-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-2.png new file mode 100644 index 0000000000..66797899d7 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-1.png new file mode 100644 index 0000000000..c4e5b75a58 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-2.png new file mode 100644 index 0000000000..db7c231cf8 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-3.png b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-3.png new file mode 100644 index 0000000000..0b1b9b768c Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-edit-copy-3.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-new-1.png 
b/docs/modules/cataloging/assets/images/media/copy-bucket-new-1.png new file mode 100644 index 0000000000..4164c8bd83 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-new-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-new-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-new-2.png new file mode 100644 index 0000000000..78979d509b Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-new-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-new-3.png b/docs/modules/cataloging/assets/images/media/copy-bucket-new-3.png new file mode 100644 index 0000000000..304e276150 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-new-3.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-pending-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-1.png new file mode 100644 index 0000000000..107284cd3a Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-pending-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-2.png new file mode 100644 index 0000000000..4d476709bf Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-pending-3.png b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-3.png new file mode 100644 index 0000000000..66b1e230a3 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-3.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-pending-4.png b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-4.png new file mode 100644 index 0000000000..5e709531e4 Binary files /dev/null and 
b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-4.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-pending-5.png b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-5.png new file mode 100644 index 0000000000..79dd011f8b Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-pending-5.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-remove-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-remove-1.png new file mode 100644 index 0000000000..3129703e85 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-remove-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-remove-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-remove-2.png new file mode 100644 index 0000000000..c2a1d33d80 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-remove-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-remove-3.png b/docs/modules/cataloging/assets/images/media/copy-bucket-remove-3.png new file mode 100644 index 0000000000..606d788fe4 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-remove-3.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-request-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-request-1.png new file mode 100644 index 0000000000..26527f5b28 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-request-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-request-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-request-2.png new file mode 100644 index 0000000000..2c8a4a9549 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-request-2.png differ diff --git 
a/docs/modules/cataloging/assets/images/media/copy-bucket-share-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-share-1.png new file mode 100644 index 0000000000..d1450410ec Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-share-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-share-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-share-2.png new file mode 100644 index 0000000000..36c7d65c70 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-share-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-share-3.png b/docs/modules/cataloging/assets/images/media/copy-bucket-share-3.png new file mode 100644 index 0000000000..211b3cd0fc Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-share-3.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-share-4.png b/docs/modules/cataloging/assets/images/media/copy-bucket-share-4.png new file mode 100644 index 0000000000..ecf534ddbf Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-share-4.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-1.png b/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-1.png new file mode 100644 index 0000000000..239d41000f Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-1.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-2.png b/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-2.png new file mode 100644 index 0000000000..0873cc5240 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-2.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-3.png b/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-3.png new 
file mode 100644 index 0000000000..75f116c846 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy-bucket-transfer-3.png differ diff --git a/docs/modules/cataloging/assets/images/media/copy_edit_link_1.jpg b/docs/modules/cataloging/assets/images/media/copy_edit_link_1.jpg new file mode 100644 index 0000000000..46094241d1 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copy_edit_link_1.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/copytags10.png b/docs/modules/cataloging/assets/images/media/copytags10.png new file mode 100644 index 0000000000..3e78846c9e Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copytags10.png differ diff --git a/docs/modules/cataloging/assets/images/media/copytags11.png b/docs/modules/cataloging/assets/images/media/copytags11.png new file mode 100644 index 0000000000..ddbccd342d Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copytags11.png differ diff --git a/docs/modules/cataloging/assets/images/media/copytags7.PNG b/docs/modules/cataloging/assets/images/media/copytags7.PNG new file mode 100644 index 0000000000..313997d07b Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copytags7.PNG differ diff --git a/docs/modules/cataloging/assets/images/media/copytags9.PNG b/docs/modules/cataloging/assets/images/media/copytags9.PNG new file mode 100644 index 0000000000..9fd71405cd Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/copytags9.PNG differ diff --git a/docs/modules/cataloging/assets/images/media/ffrc1_2.12.jpg b/docs/modules/cataloging/assets/images/media/ffrc1_2.12.jpg new file mode 100644 index 0000000000..15a3d2a2ae Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/ffrc1_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/ffrc2_2.12.jpg b/docs/modules/cataloging/assets/images/media/ffrc2_2.12.jpg new file mode 100644 index 
0000000000..5be5a3f7f1 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/ffrc2_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/ffrc3_2.12.jpg b/docs/modules/cataloging/assets/images/media/ffrc3_2.12.jpg new file mode 100644 index 0000000000..3d049ba700 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/ffrc3_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/item_tag_button.png b/docs/modules/cataloging/assets/images/media/item_tag_button.png new file mode 100644 index 0000000000..384e1ab7e3 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/item_tag_button.png differ diff --git a/docs/modules/cataloging/assets/images/media/manage_item_tags.png b/docs/modules/cataloging/assets/images/media/manage_item_tags.png new file mode 100644 index 0000000000..f3d67e1a09 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/manage_item_tags.png differ diff --git a/docs/modules/cataloging/assets/images/media/manage_parts_menu.jpg b/docs/modules/cataloging/assets/images/media/manage_parts_menu.jpg new file mode 100644 index 0000000000..0982e3ea23 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/manage_parts_menu.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/manage_parts_opac.png b/docs/modules/cataloging/assets/images/media/manage_parts_opac.png new file mode 100644 index 0000000000..6e0b3d2b62 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/manage_parts_opac.png differ diff --git a/docs/modules/cataloging/assets/images/media/marc_batch_import_acq_overlay.png b/docs/modules/cataloging/assets/images/media/marc_batch_import_acq_overlay.png new file mode 100644 index 0000000000..dfca5c8a9f Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marc_batch_import_acq_overlay.png differ diff --git 
a/docs/modules/cataloging/assets/images/media/marc_batch_import_popup.png b/docs/modules/cataloging/assets/images/media/marc_batch_import_popup.png new file mode 100644 index 0000000000..d504fbe760 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marc_batch_import_popup.png differ diff --git a/docs/modules/cataloging/assets/images/media/marc_delete_record_3_3.png b/docs/modules/cataloging/assets/images/media/marc_delete_record_3_3.png new file mode 100644 index 0000000000..655d7a0d6c Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marc_delete_record_3_3.png differ diff --git a/docs/modules/cataloging/assets/images/media/marcoverlay1.png b/docs/modules/cataloging/assets/images/media/marcoverlay1.png new file mode 100644 index 0000000000..558397987f Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marcoverlay1.png differ diff --git a/docs/modules/cataloging/assets/images/media/marcoverlay2.png b/docs/modules/cataloging/assets/images/media/marcoverlay2.png new file mode 100644 index 0000000000..3f124e30fc Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marcoverlay2.png differ diff --git a/docs/modules/cataloging/assets/images/media/marcoverlay3.png b/docs/modules/cataloging/assets/images/media/marcoverlay3.png new file mode 100644 index 0000000000..0965353a4f Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marcoverlay3.png differ diff --git a/docs/modules/cataloging/assets/images/media/marcoverlay4.png b/docs/modules/cataloging/assets/images/media/marcoverlay4.png new file mode 100644 index 0000000000..68af923002 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marcoverlay4.png differ diff --git a/docs/modules/cataloging/assets/images/media/marcoverlay5.png b/docs/modules/cataloging/assets/images/media/marcoverlay5.png new file mode 100644 index 0000000000..9271bcdcc8 Binary files /dev/null and 
b/docs/modules/cataloging/assets/images/media/marcoverlay5.png differ diff --git a/docs/modules/cataloging/assets/images/media/marcoverlay6.png b/docs/modules/cataloging/assets/images/media/marcoverlay6.png new file mode 100644 index 0000000000..96cc4d4cca Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marcoverlay6.png differ diff --git a/docs/modules/cataloging/assets/images/media/marcoverlay7.png b/docs/modules/cataloging/assets/images/media/marcoverlay7.png new file mode 100644 index 0000000000..69c105930b Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/marcoverlay7.png differ diff --git a/docs/modules/cataloging/assets/images/media/merge_tracking.png b/docs/modules/cataloging/assets/images/media/merge_tracking.png new file mode 100644 index 0000000000..fb6621b36e Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/merge_tracking.png differ diff --git a/docs/modules/cataloging/assets/images/media/monograph_parts2.jpg b/docs/modules/cataloging/assets/images/media/monograph_parts2.jpg new file mode 100644 index 0000000000..0e43663a53 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/monograph_parts2.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/monograph_parts3.jpg b/docs/modules/cataloging/assets/images/media/monograph_parts3.jpg new file mode 100644 index 0000000000..4fad88fc19 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/monograph_parts3.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/monograph_parts4.jpg b/docs/modules/cataloging/assets/images/media/monograph_parts4.jpg new file mode 100644 index 0000000000..08e5747734 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/monograph_parts4.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/monograph_parts5.jpg b/docs/modules/cataloging/assets/images/media/monograph_parts5.jpg new file mode 100644 index 
0000000000..a482ff2d8d Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/monograph_parts5.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/pcw1_2.12.jpg b/docs/modules/cataloging/assets/images/media/pcw1_2.12.jpg new file mode 100644 index 0000000000..55e9d8fb99 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/pcw1_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/pcw2_2.12.jpg b/docs/modules/cataloging/assets/images/media/pcw2_2.12.jpg new file mode 100644 index 0000000000..89649c1b5a Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/pcw2_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/pcw3_2.12.jpg b/docs/modules/cataloging/assets/images/media/pcw3_2.12.jpg new file mode 100644 index 0000000000..25a3c53e9b Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/pcw3_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/pcw4_2.12.jpg b/docs/modules/cataloging/assets/images/media/pcw4_2.12.jpg new file mode 100644 index 0000000000..cc3f29f49a Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/pcw4_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/pcw5_2.12.jpg b/docs/modules/cataloging/assets/images/media/pcw5_2.12.jpg new file mode 100644 index 0000000000..88a8687393 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/pcw5_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/pcw6_2.12.jpg b/docs/modules/cataloging/assets/images/media/pcw6_2.12.jpg new file mode 100644 index 0000000000..ae4d27d813 Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/pcw6_2.12.jpg differ diff --git a/docs/modules/cataloging/assets/images/media/remove_item_tag.png b/docs/modules/cataloging/assets/images/media/remove_item_tag.png new file mode 100644 index 0000000000..2323bc44e6 Binary files /dev/null and 
b/docs/modules/cataloging/assets/images/media/remove_item_tag.png differ diff --git a/docs/modules/cataloging/assets/images/media/request_from_item_status.png b/docs/modules/cataloging/assets/images/media/request_from_item_status.png new file mode 100644 index 0000000000..a544b27bdd Binary files /dev/null and b/docs/modules/cataloging/assets/images/media/request_from_item_status.png differ diff --git a/docs/modules/cataloging/nav.adoc b/docs/modules/cataloging/nav.adoc new file mode 100644 index 0000000000..5ce9bc9b04 --- /dev/null +++ b/docs/modules/cataloging/nav.adoc @@ -0,0 +1,20 @@ +* xref:cataloging:introduction.adoc[Cataloging] +** xref:cataloging:copy-buckets_web_client.adoc[Item Buckets] +** xref:cataloging:item_tags_cataloging.adoc[Item Tags] +** xref:cataloging:MARC_Editor.adoc[Working with the MARC Editor] +** xref:cataloging:record_buckets.adoc[Record Buckets] +** xref:admin:staff_client-return_to_results_from_marc.adoc[Return to Search Results from MARC Record] +** xref:cataloging:batch_importing_MARC.adoc[Batch Importing MARC Records] +** xref:cataloging:overlay_record_3950_import.adoc[Overlay Existing Catalog Record via Z39.50 Import] +** xref:cataloging:z39.50_search_enhancements.adoc[Z39.50 Search Enhancements] +** xref:cataloging:monograph_parts.adoc[Monograph Parts] +** xref:cataloging:conjoined_items.adoc[Conjoined Items] +** xref:cataloging:cataloging_electronic_resources.adoc[Cataloging Electronic Resources — Finding Them in Catalog Searches] +** xref:cataloging:item_status.adoc[Using the Item Status interface] +** xref:cataloging:volcopy_editor.adoc[Using the Holdings Editor] +** xref:cataloging:MARC_batch_edit.adoc[MARC Batch Edit] +** xref:cataloging:authorities.adoc[Managing Authorities] +** xref:cataloging:link_checker.adoc[Link Checker] +** xref:admin:schema_bibliographic.adoc[Notes about the Bibliographic Schema in the Database] +** xref:admin:marc_templates.adoc[MARC Templates] + diff --git 
a/docs/modules/cataloging/pages/MARC_Editor.adoc b/docs/modules/cataloging/pages/MARC_Editor.adoc new file mode 100644 index 0000000000..2ec31e0ed0 --- /dev/null +++ b/docs/modules/cataloging/pages/MARC_Editor.adoc @@ -0,0 +1,185 @@ += Working with the MARC Editor = +:toc: + +== Editing MARC Records == + +. Retrieve the record. ++ +[TIP] +====== +You can retrieve records in many ways, including: + +* If you know its database ID, enter it into Cataloging > Retrieve Bib Record by ID. +* If you know its control number, enter it into Cataloging > Retrieve Bib Record by TCN. +* Searching in the catalog. +* Clicking on a link from the Acquisitions or Serials modules. +====== ++ +. Click on the MARC Edit tab. +. The MARC record will display. +. Select viewing and editing options, if desired. +* Stack subfields to display each subfield on its own line. +* Flat-Text Editor switches to a plain-text (mnemonic) MARC format. This format can be useful when copying and pasting multiple lines. It also allows the use of tools like MarcEdit (http://marcedit.reeset.net/). Uncheck the box to switch back. + * Note that you can use a backslash character as a placeholder in the flat text editor's indicators and fixed-length fields. +* Add Item allows attaching items quickly with call number and barcode. When _Save_ is clicked, the copy editor will open. NOTE: Browser pop-up blockers will prevent this; please allow pop-ups. +. Make changes as desired. +* Right-click in a tag field to add/remove rows or replace tags. +* To work with the data in a tag or indicator, click or _Tab_ into the required field. Right-click to view valid +tags or indicators. ++ +[NOTE] +========== +You can navigate the MARC Editor using keyboard shortcuts. Click _Help_ to see the shortcut menu from +within the MARC Editor. +========== ++ +. When finished, click _Save_. The record will remain open in the editor. You can close the browser window or browser tab. 
Or you can switch to +another view from the navigation near the top (for example, to view it as it appears in the OPAC, choose _OPAC View_). + +=== MARC Record Leader and MARC fixed field 008 === + +You can edit parts of the leader and the 008 field in the MARC Editor via the fixed field editor box displayed above +the MARC record. + +==== To edit the MARC record leader ==== + +. Retrieve and display the appropriate record in _MARC Edit_ view. + +. Click into any box displayed in the fixed field editor. + +. Press _Tab_ or use the mouse to move between fields. + +. Click _Save_. + +. The OPAC icon for the appropriate material type will display. + + +OPAC icons for text, moving pictures, and sound rely on correct MARC coding in the leader, 007, and 008, as do OPAC +search filters such as publication date, item type, or target audience. + +==== MARC Fixed Field Editor Right-Click Context Menu Options ==== + +The MARC Fixed Field Editor provides suggested values for select fixed fields based on the record type being edited. Users can right-click on the value control for a fixed field and choose the appropriate value from the menu options. +The Evergreen database contains information from the Library of Congress’s MARC 21 format standards that includes possible values for select fixed fields. The right-click context menu options are available for fixed fields whose values are already stored in the database. For fixed fields that do not have possible values stored in the database, the user will receive the default web browser menu (such as cut, copy, paste, etc.). + +*To Access the MARC Fixed Field Editor Right-Click Context Menu Options:* + +. Within the bibliographic record that needs to be edited, select *MARC Edit*. +. Make sure that the Flat-Text Editor checkbox is not selected and that you are not using the Flat-Text Editor interface. +. Right-click on the value control for the fixed field that needs to be edited. 
++ +image::media/ffrc1_2.12.jpg[Right click on the fixed field input labeled Form] ++ +. Select the appropriate value for the fixed field from the menu options. ++ +image::media/ffrc2_2.12.jpg[One of the options in the Form fixed field context menu is r - Regular print reproduction] ++ +. Continue editing the MARC record, as needed. Once you are finished editing the record, click *Save*. + +Changing the values in the fixed fields will also update the appropriate position in the Leader or 008 Field and other applicable fields (such as the 006 Field). + +image::media/ffrc3_2.12.jpg[Selecting r in the context menu resulted in an r being placed in the 008 field later in the MARC Record display] + +MARC Editor users retain the option of leaving the fixed field value blank or entering special values (such as # or | ). + +[NOTE] +It may be necessary for MARC Editor users to first correctly pad the fixed fields to their appropriate lengths before making further modifications to the fixed field values. + + +*Administration* +The Evergreen database already contains information from the Library of Congress’s MARC 21 format standards that includes possible values for select fixed fields. Users may also add values to these and other fixed fields through the MARC Coded Value Maps interface. Once new values are added, the right-click context menu for the selected fixed field will display those values in the MARC Editor for any Record Type that utilizes that fixed field. +There are three relevant tables that contain the values that display in the fixed field context menu options: + +. *config.marc21_ff_pos_map* describes, for the given record type, where a fixed field is located, its start position, and its length. +. *config.coded_value_map* defines the set of valid values for many of the fixed fields and the translatable, human-friendly labels for them. +. 
*config.record_attr_definition* links together the information from the config.marc21_ff_pos_map and config.coded_value_map tables. + +=== Deleting MARC Records === +You can delete MARC records using the MARC Editor. + +==== To Delete a MARC record ==== + +. Retrieve and display the appropriate record in the MARC editor. +. Click on the _MARC Edit_ tab. +. Click the *Delete* button. +. In the modal window, click the *OK/Continue* button to remove the MARC record. + +image::media/marc_delete_record_3_3.png[The Delete button is located in the Marc Edit tab] + +=== MARC Tag-table Service === +The tag tables for the web staff client MARC editor are +stored in the database. The tag-table +service has the following features: + +- specifies whether (sub)fields are optional or mandatory +- specifies whether (sub)fields are repeatable or not +- a coded value map can be associated with a subfield to + establish a controlled vocabulary for that subfield +- MARC field and subfield definitions can be overridden + by institutions further down in the organizational unit + hierarchy. This allows, for example, a library to specify + definitions for local MARC tags. +- values supplied by the tag-table service are used to + populate values in context menus in the web staff client + MARC editor. + +MARC Tag Tables can be found under Administration -> Server Administration -> MARC Tag Tables. + +MARC Tag Tables Grid: + +image::media/MARC_Tag_Tables_Grid.PNG[Grid view of MARC Tag Tables] + +MARC Tag Tables Detail: + +image::media/MARC_Tag_Tables_Detail.PNG[Detail view of MARC Tag Tables] + +The initial seed data for the in-database tag table is +derived from the current tooltips XML file. + +== MARC 007 Field Physical Characteristics Wizard == + +The MARC 007 Field Physical Characteristics Wizard enables catalogers to interact with a database wizard that leads the user step-by-step through the MARC 007 field positions. 
The wizard displays the significance of the current position and provides dropdown lists of possible values for the various components of the MARC 007 field in a more user-friendly way. + +*To Access the MARC 007 Field Physical Characteristics Wizard for a Record that Does Not Already Contain the 007 Field (i.e. Creating the 007 Field from Scratch):* + +. Within the bibliographic record that needs to be edited, select *MARC Edit*. +. Make sure that the Flat-Text Editor checkbox is not selected and that you are not using the Flat-Text Editor interface. +. Right-click in the MARC field column. ++ +image::media/pcw1_2.12.jpg[] ++ +. Click *Add/Replace 007*. The 007 row will appear in the record. +. Click the chain link icon to the right of the field. ++ +image::media/pcw2_2.12.jpg[] ++ +. Click *Physical Characteristics Wizard*. + +The *MARC 007 Field Physical Characteristics Wizard* will open. + +*Using the Physical Characteristics Wizard:* + +As the user navigates through the wizard, each position will display its corresponding label that describes the significance of that position. Each position contains a selection of dropdown choices that list the possible values for that particular position. When the user makes a selection from the dropdown options, the value for that position will also change. + +The first value defines the *Category of Material*. Users select the Category of Material for the given record by choosing an option from the *Category of Material?* dropdown menu. The choices within the remaining character positions will be appropriate for the Category of Material selected. + +Once the Category of Material is selected, click *Next*. + +Evergreen will display the result of each selection in the preview above. The affected character will be in red. + +image::media/pcw3_2.12.jpg[] + +By clicking either the *Previous* or *Next* buttons, the user may step forward and backward, as needed, through the various positions in the 007 field. 
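The wizard walks through the 007 field one position at a time; under the hood each position is simply a character at a fixed offset, and the first character (the Category of Material) determines how the remaining positions are read. A toy illustration of that positional coding (the mapping below is a small subset of the MARC 21 category codes, and this is not Evergreen code):

```python
# Toy illustration of MARC 007 positional coding; not Evergreen code.
# Position 0 holds the Category of Material. This mapping is a small
# illustrative subset of the MARC 21 code list.
CATEGORY_OF_MATERIAL = {
    "a": "Map",
    "c": "Electronic resource",
    "s": "Sound recording",
    "v": "Videorecording",
}

def category_of_material(field_007: str) -> str:
    """Return the Category of Material label for an 007 value."""
    return CATEGORY_OF_MATERIAL.get(field_007[:1], "Unknown")

print(category_of_material("vd cvaizq"))  # Videorecording
```

Each subsequent position is decoded the same way, against a value list that depends on the category chosen in position 0, which is why the wizard asks for the Category of Material first.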
+ +Once the user enters all of the applicable values for the 007 field and is ready to exit the wizard, click *Save*. + +image::media/pcw4_2.12.jpg[] + +All of the values selected will be stored and displayed within the 007 field of the bibliographic record. + +image::media/pcw5_2.12.jpg[] + +Continue editing the MARC record, as needed. Once the user is finished editing the record, click *Save*. + +image::media/pcw6_2.12.jpg[] + diff --git a/docs/modules/cataloging/pages/MARC_batch_edit.adoc b/docs/modules/cataloging/pages/MARC_batch_edit.adoc new file mode 100644 index 0000000000..b9435a6bab --- /dev/null +++ b/docs/modules/cataloging/pages/MARC_batch_edit.adoc @@ -0,0 +1,96 @@ += MARC Batch Edit = +:toc: + +== Introduction == + +This function is used to batch edit MARC records by adding a field, removing a field, or changing the contents of a field. + +.What MARC Batch Edit Can and Can't Do +************************************** +MARC Batch Edit is a powerful tool, but it also has some limitations. +This tool can do the following tasks to a group of MARC records: + +* Remove all instances of a specific tag (e.g. remove all 992 tags) +* Remove all instances of a specific tag _if_ a particular subfield +has a particular value (e.g. remove all 650 fields in which the $2 +is _fast_) +* Remove all instances of a specific subfield (e.g. remove all 245$h) +* Remove all instances of a specific set of subfields +* Add a field +* Add a subfield to an existing field +* Replace data in a specific field or subfield + +It cannot do more advanced tasks, such as: + +* Swapping data from one field to another +* Deduplicating MARC records +* Complex logic based on existing data + +For more advanced projects, you may wish to export your records and +use a free tool such as http://marcedit.reeset.net/[MARCEdit] or +https://github.com/edsu/pymarc[PyMarc]. 
+ +************************************** + +== Setting Up a Batch Edit Session == + +Record Source:: +Identifies the MARC records to batch edit: records in a record bucket, in a CSV file, or specified by record ID. + +Go! (button):: +This button runs the action defined by the rule template(s). + +=== Action (Rule Type) === +Replace:: +Replaces the value in a MARC field for a batch of records. +Delete:: +Removes a MARC field and its contents from the batch of records. +Add:: +Use this to add a field and its contents to a batch of records. + +=== Other Template Fields === +MARC Tag:: +This is used to identify the field for adding, replacing, or deleting. +Subfield (optional):: +Indicates which subfield is being edited. +MARC Data:: +Use this to indicate the data to add, or the data to use when replacing the existing data. + +=== Advanced Matching Restrictions (Optional) === +Subfield:: +The subfield code where you expect to see the matching data. +Regular Expression:: +A regular expression, using Perl syntax, that identifies the data to be removed or replaced. + +.Running a Template to Add, Delete, or Replace MARC data +. Click Cataloging -> MARC Batch Edit +. Select *Record source* +. Select the appropriate bucket, load the CSV file, or enter a record ID, depending on the *Record source* selected +. Select the *Action Rule* +. Enter the *MARC Tag* with no indicators (e.g. 245) +. Enter the *subfields* with no spaces. Subfields are optional. Multiple subfields can be entered, such as _auz_. +. Enter the *MARC Data*, which is the value in the field +. Enter optional *Advanced Matching Restrictions* +.. Subfield +.. Regular Expression (using Perl syntax) +. Click *Go!* +. A results page will display, indicating the number of records successfully edited + +== Examples == + +=== Adding a new field to all records === + +. In the _action_ menu, choose _Add_. +. In _MARC Tag_, type the MARC tag number. +. Leave the _Subfields_ field blank. +. In _MARC Data_, type the field you would like to add. + +=== Deleting a field if it contains a particular string === + +. 
In the _action_ menu, choose _Delete_. +. In _MARC Tag_, type the MARC tag number. +. Leave the _Subfields_ field blank. +. In _MARC Data_, type the contents of the field you would like to delete. +. In the _subfield_ field under _Advanced Matching Restriction_, type the subfield code where you expect to see the string. +. In _Regular Expression_, type the string you expect to see. + + diff --git a/docs/modules/cataloging/pages/_attributes.adoc b/docs/modules/cataloging/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/cataloging/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/cataloging/pages/authorities.adoc b/docs/modules/cataloging/pages/authorities.adoc new file mode 100644 index 0000000000..ec8bdff8d7 --- /dev/null +++ b/docs/modules/cataloging/pages/authorities.adoc @@ -0,0 +1,148 @@ += Managing Authorities = +:toc: + +== Introduction == + +This section describes how you can create, import, view, modify, merge, and delete authority records in Evergreen. + +== Creating Authorities == +Currently, to create a new authority record in Evergreen (as opposed to importing one), you +need to have a bib record open in the MARC editor. + +* For example, if you want to create a new author +authority, you need to have a bib record that has a bib 1xx or 7xx tag with the main entry filled out. +* Then you need to right-click on that 1xx or 7xx tag. In the context menu that shows up, select _Create +New Authority from this field_, then select either _Create Immediately_ or _Create and Edit..._. +* If you +choose _Create and Edit..._, after the authority MARC editor opens, you need to click on the _Save_ button +to finally add the new authority record to your system. + + +[[importing_authority_records_from_the_staff_client]] +== Importing Authorities == +. Click *Cataloging -> MARC Batch Import/Export.* +. 
You may create a queue to better track this import project. If you do not create a new queue, it will automatically put your records into a default queue named *-*. +. Don't set a value for Holdings Import Profile, because this doesn't apply to authority records. +. Select a file of authority data and put it in the *File to Upload* field. +. Make sure all the settings are correct, then press *Upload.* ++ +The screen displays "Uploading... Processing..." to show that the records +are being transferred to the server, then displays a progress bar to show +the actual import progress. When the staff client displays the progress +bar, you can disconnect your staff client safely. Very large batches of +records might time out at this stage. + +. Evergreen will automatically assign a thesaurus based on the *Subj* fixed field, which is character 11 in the 008 field. +. Evergreen will also try to determine who edited the record (based on the MARC 905u field or the user performing the import) and set the edit date, which you can view +when you examine the record in the future. + +. Once the import is finished, the staff client displays the results of +the import process. You can manually display the import progress by +selecting the _Inspect Queue_ tab of the _MARC Batch Import/Export_ +interface and selecting the queue name. By default, the staff client does +not display records that were imported successfully; it only shows records +that conflicted with existing entries in the database. The screen shows +the overall status of the import process in the top right-hand corner, +with the Total and Imported number of records for the queue. + + +[TIP] +================= +If you are importing authorities from an external vendor and want to track this, you may wish to set a unique Record Source. This source will be visible in the MARC +Editor and in the 901$s field of the imported authority records. +================= + + +=== Setting up Authority Record Match Sets === +. 
Click *Cataloging -> MARC Batch Import/Export.* +. Click *Record Match Sets.* +. If you have sufficient privileges, you will be able to click the *New Match Set* button. If you are unable to do so, check that you have the ADMIN_IMPORT_MATCH_SET permission. +. Give your new set a descriptive name, an owning library, and a match set type of *authority.* +. Click on the blue hyperlinked name of the match set you just created to add criteria. +. You can match against MARC tag/subfield entries or against a record's normalized heading. + +[NOTE] +================= +Evergreen's database stores normalized authority headings in a format that includes the thesaurus. This way, record match sets will not match terms from other thesauri, even if the term is very similar. +================= + +[TIP] +================= +Evergreen's internal identifier is in the 901c field. If you have previously exported authority records -- perhaps for an external vendor to do authority cleanup work -- and you want to import them back into your catalog, you may wish to include the 901c field in your match set. +================= + +== Viewing and Editing Authority Records by Database ID == + +The authority record retriever allows catalogers to retrieve a specific +authority record using its database ID. Catalogers can +find those IDs in subfield $0 of matching fields in +bibliographic records. + +To use the authority record retriever: + +. Click *Cataloging -> Retrieve Authority Record by ID*. +. Type in the ID number of the authority record you are +interested in. Don't include any prefixes, just the ID +number. +. Click *Submit*. +. View or edit the authority record as needed. + + +== Manage Authorities Interface == + +In Evergreen, you can view, edit, merge, and delete authority records using the *Manage Authorities* interface, +found in the *Cataloging* menu.
+ + + === Searching for authorities === + +To search for authorities in your system, first select the *Cataloging* menu and then select *Manage Authorities*. +Then proceed to fill out the search form. + +. Type in your _Search Term_ +. Select an _Authority type_; types currently include: Author, Subject, Title, Topic +. Click on the _Submit_ button + + +The authority search results will include the following elements from left to right: + +* _Actions_ menu, which can be used to select actions that affect the corresponding authority record. Actions include: +_Edit_, _Mark for Merge_, _Delete_ +* Count of how many bibs are linked to the corresponding authority +* Main entry of the authority, i.e. the authority tag 1xx value +* _Control set_ value, with LoC being the default, but others can be added +* Authority Subject heading system/thesaurus; for example, a value of "a" means the authority originated from the Library of Congress + (http://www.loc.gov/marc/authority/ad008.html) + + +*Library of Congress list of thesaurus values:* + +* '' = Alternate no attempt to code +* a = Library of Congress Subject Headings +* b = LC subject headings for children's literature +* c = Medical Subject Headings +* d = National Agricultural Library subject authority file +* k = Canadian Subject Headings +* n = Not applicable +* r = Art and Architecture Thesaurus +* s = Sears List of Subject Headings +* v = Repertoire de vedettes-matiere +* z = Other +* | = No attempt to code + + +==== Editing authority records ==== + +Editing an authority record (or merging two authority records) can cause its linked bibliographic records to also update. For example, +if you correct a spelling error in the 150 field of a subject authority record, the relevant 650 field in linked bibliographic records +will also be updated to reflect the correct spelling.
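The propagation just described can be modeled in miniature. The sketch below is illustrative Python only -- it is not Evergreen's implementation, and the record dictionaries and the parenthesized `$0` prefix form are simplified assumptions -- but it shows why correcting one 150 heading ripples out to every linked 650 field:

```python
# Toy model of authority-to-bib heading propagation (NOT Evergreen's code).
# A subject authority's 150 heading; bibs link to it via a $0 identifier.
authority = {"id": 7, "150": "Cookery"}
bibs = [{"650": [{"a": "Cookery", "0": "(CONS)7"}]},
        {"650": [{"a": "Dogs", "0": "(CONS)9"}]}]

def propagate(authority, bibs):
    """Rewrite $a of any 650 field whose $0 points at this authority record."""
    for bib in bibs:
        for field in bib["650"]:
            if field["0"].endswith(")%d" % authority["id"]):
                field["a"] = authority["150"]

authority["150"] = "Cooking"   # the cataloger corrects the heading
propagate(authority, bibs)
print([f["a"] for bib in bibs for f in bib["650"]])  # -> ['Cooking', 'Dogs']
```

Only the bib linked to authority 7 changes; the unrelated heading is untouched, which mirrors the behavior described above.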
+ +[TIP] +================= +When a bib record is automatically updated as a result of the modification of a linked authority record, the bib record's "Last Edit Date/ +Time" and "Last Editing User" fields will be updated to match the time of the update and the editor of the authority record. If you'd +prefer that these fields not be automatically updated, you can set the _ingest.disable_authority_auto_update_bib_meta_ setting to true in the +Library Settings Editor. +================= + diff --git a/docs/modules/cataloging/pages/batch_importing_MARC.adoc b/docs/modules/cataloging/pages/batch_importing_MARC.adoc new file mode 100644 index 0000000000..42211c4f58 --- /dev/null +++ b/docs/modules/cataloging/pages/batch_importing_MARC.adoc @@ -0,0 +1,396 @@ += Batch Importing MARC Records = +:toc: + +== Introduction == + +indexterm:[MARC records,importing,using the staff client] + +[[batchimport]] +The cataloging module includes an enhanced MARC Batch Import interface for +loading MARC (and MARCXML) records. In general, it can handle batches of up to 5,000 records +without a problem. This interface allows you to specify match points +between incoming and existing records, to specify MARC fields that should be +overlaid or preserved, and to only overlay records if the incoming record is +of higher quality than the existing record. Records are added to a queue where +you can apply filters to review any errors that may have +occurred during import. You can print, email, or export your queue as a CSV file. + +== Permissions == + +To use match sets to import records, you will need the following permission: + +ADMIN_IMPORT_MATCH_SET + + +== Record Display Attributes == + +This feature enables you to specify the tags and subfields that will display in +records that appear in the import queue.
+ + +[[matchsets]] +== Record Match Sets == + +This feature enables you to create custom match points that you can use to +accurately match incoming records with existing catalog records. + +=== Creating a Match Set === + +In this example, to demonstrate matching on record attributes and MARC tags and +subfields, we will create a record match set that defines a match based on the +title of the record, in either the 240 or 245, and the fixed field, Lang. You +can add multiple record attributes and MARC tags to customize a record match +set. + + +. Click *Cataloging -> MARC Batch Import/Export*. + +. Create a new record match set. Click *Record Match Sets -> New Match Set*. + +. Enter a name for the record match set. + +. Select an *Owning Library* from the drop down menu. Staff with permissions +at this location will be able to use this record match set. + +. Select a *Match Set Type* from the drop down menu. You can create a match +set for authority records or bibliographic records. + +. Click *Save*. ++ +image::media/Batch_Importing_MARC_Records1.jpg[Batch_Importing_MARC_Records1] + +. The screen will refresh to list the record match set that you created. Click +the link to the record match set. + +. Create an expression that will define the match points for the incoming +record. You can choose from two areas to create a match: *Record Attribute* or +*MARC Tag and Subfield*. You can use the Boolean operators AND and OR to +combine these elements to create a match set. + +. Select a *Record Attribute* from the drop-down menu. + +. Enter a *Match Score.* The *Match Score* indicates the relative importance +of that match point as Evergreen evaluates an incoming record against an +existing record. You can enter any integer into this field. The number that +you enter is only important as it relates to other match points. 
Recommended +practice is to assign a match score of one (1) to the least important +match point and to give progressively more important match points scores that +increase by powers of 2 (2, 4, 8, and so on). + +. Check the *Negate?* box if you want to negate the match point. Checking +this box would be the equivalent of applying a Boolean operator of NOT to the +match point. ++ +image::media/Batch_Importing_MARC_Records2.jpg[Batch_Importing_MARC_Records2] + +. Click *Ok.* + +. Drag the completed match point under the appropriately-named Boolean folder +in the Expression tree. ++ +image::media/Batch_Importing_MARC_Records3.jpg[Batch_Importing_MARC_Records3] ++ +The match point will nest underneath the folder in the Expression tree. ++ +image::media/Batch_Importing_MARC_Records4.jpg[Batch_Importing_MARC_Records4] + +. Enter another *Boolean Operator* to further refine your match set. + +. Click *Boolean Operator*. + +. Select the *OR* operator from the drop down menu. + +. Click *Ok*. + +. Drag the operator to the expression tree. ++ +image::media/Batch_Importing_MARC_Records5.jpg[Batch_Importing_MARC_Records5] + +. Click *MARC Tag and Subfield*. + +. Enter a *MARC tag* on which you want the records to match. + +. Enter a *subfield* on which you want the records to match. + +. Enter a *Match Score.* The *Match Score* indicates the relative importance +of that match point as Evergreen evaluates an incoming record against an +existing record. You can enter any integer into this field. The number that +you enter is only important as it relates to other match points. Recommended +practice is to assign a match score of one (1) to the least important +match point and to give progressively more important match points scores that +increase by powers of 2 (2, 4, 8, and so on). + +. Check the *Negate?* box if you want to negate the match point. Checking +this box would be the equivalent of applying a Boolean operator of NOT to the +match point. + +.
Click *Ok.* ++ +image::media/Batch_Importing_MARC_Records6.jpg[Batch_Importing_MARC_Records6] + +. Drag the completed match point under the appropriately-named Boolean folder +in the Expression tree. The Expression +will build across the top of the screen. + +. Add additional MARC tags or record attributes to build the expression tree. + +. Click *Save Changes to Expression*. ++ +image::media/Batch_Importing_MARC_Records7.jpg[Batch_Importing_MARC_Records7] + +=== Replace Mode === + +Replace Mode enables you to replace an existing part of the expression tree +with a new record attribute, MARC tag, or Boolean operator. For example, if +the top of the tree is AND, in Replace Mode, you could change that to an OR. + +. Create a working match point. + +. Click *Enter Replace Mode*. + +. Highlight the piece of the tree that you want to replace. + +. Drag the replacement piece over the highlighted piece. + +. Click *Exit Replace Mode*. + + +=== Quality Metrics === + +. Set the *Quality Metrics for this Match Set*. Quality metrics are used to +determine the overall quality of a record. Each metric is given a weight and +the total quality value for a record is equal to the sum of all metrics that +apply to that record. For example, a record that has been cataloged thoroughly +and contains accurate data would be more valuable than one of poor quality. You +may want to ensure that the incoming record is of the same or better quality +than the record that currently exists in your catalog; otherwise, you may want +the match to fail. The quality metric is optional. + +. You can create quality metrics based on the record attribute or the MARC Tag +and Subfield. + +. Click *Record Attribute.* + +. Select an attribute from the drop down menu. + +. Enter a value for the attribute. + +. Enter a match score. You can enter any integer into this field. The number +that you enter is only important as it relates to other quality values for the +current configuration.
Higher scores indicate higher-quality +incoming records. As with expression match scores, you can assign +quality values that increase by powers of 2 (two). + +. Click *Ok*. ++ +image::media/Batch_Importing_MARC_Records8.jpg[Batch_Importing_MARC_Records8] + +== Merge/Overlay Profiles == + +If Evergreen finds a match for an incoming record in the database, you need to identify which fields should be replaced, which should be preserved, and which should be added to the record. +Click the Merge/Overlay Profiles button to create a profile that contains this information. + +You can use these profiles when importing records through the MARC Batch Importer or Acquisitions Load MARC Order Records interface. + +You can create a new profile by clicking the New Merge Profile button. Available options for handling the fields include: + +. _Preserve specification_ - fields in the existing record that should be preserved. + +. _Replace specification_ - fields in the existing record that should be replaced by those in the incoming record. + +. _Add specification_ - fields from the incoming record that should be added to the existing record (in addition to any already there). + +. _Remove specification_ - fields that should be removed from the incoming record. + +. _Update bib source_ - If this value is false, just the bibliographic data will be updated when you overlay a new MARC record. If it is true, then Evergreen will also update +the record's bib source to the one you select on import; the last edit date to the date the new record is imported; and the last editor to the person who imported the new +record. + +You can add multiple tags to the specification options, separating each tag with a comma. + + +== Import Item Attributes == +If you are importing items with your records, you will need to map the data in +your holdings tag to fields in the item record. Click the *Holdings Import +Profile* button to map this information. + +.
Click the *New Definition* button to create a new mapping for the holdings tag. +. Add a *Name* for the definition. +. Use the *Tag* field to identify the MARC tag that contains your holdings + information. +. Add the subfields that contain specific item information to the appropriate + item field. +. At a minimum, you should add the subfields that identify the *Circulating +Library*, the *Owning Library*, the *Call Number* and the *Barcode*. + +NOTE: All fields (except for Name and Tag) can contain a MARC subfield code +(such as "a") or an XPATH query. You can also use the +related library settings to set defaults for some of these fields. + +image::media/batch_import_profile.png[Partial Screenshot of a Holdings Import Profile] + +.Holdings Import Profile Fields +[options="header"] +|============================= +|Field | Recommended | Description +|Name | Yes | Name you will choose from the MARC Batch Import screen +|Tag | Yes | MARC Holdings Tag/Field (e.g. 949). Use the Tag field to +identify the MARC tag that contains your holdings information. +|Barcode | Yes | +|Call Number | Yes | +|Circulating Library | Yes | +|Owning Library | Yes | +|Alert Message || +|Circulate || +|Circulate As MARC Type || +|Circulation Modifier || +|Copy Number || +|Deposit || +|Deposit Amount || +|Holdable || +|OPAC Visible || +|Overlay Match ID || The copy ID of an existing item to overlay +|Parts Data || Of the format `PART LABEL 1\|PART LABEL 2`. +|Price || +|Private Note || +|Public Note || +|Reference || +|Shelving Location || +|Stat Cat Data || Of the format `CATEGORY 1\|VALUE 1\|\|CATEGORY 2\|VALUE 2`. +If you are overlaying existing items which already have stat cats +attached to them, the overlay process will keep those values unless the +incoming items contain updated values for matching categories. 
+|Status || +|============================= + + +== Import Records == + +The *Import Records* interface incorporates record match sets, quality metrics, +more merging options, and improved ways to manage your queue. In this example, +we will import a batch of records. One of the records in the queue will +contain a matching record in the catalog that is of lower quality than the +incoming record. We will import the record according to the guidelines set by +our record match set, quality metrics, and merge/overlay choices that we will +select. + +. Select a *Record Type* from the drop down menu. + +. Create a queue to which you can upload your records, or add your records to +an existing queue. Queues are linked to match sets and a holdings import +profile. You cannot change the holdings import profile or record match set for a queue. + +. Select a *Record Match Set* from the drop down menu. + +. Select a *Holdings Import Profile* if you want to import holdings that are +attached to your records. + +. Select a *Record Source* from the drop down menu. + +. Select a *Merge Profile*. Merge profiles enable you to specify which tags +should be removed or preserved in incoming records. + +. Choose one of the following import options if you want to auto-import +records: + +.. *Merge on Single Match* - Using the Record Match Set, Evergreen will only +attempt to perform the merge/overlay action if only one match was found in the +catalog. + +.. *Merge on Best Match* - If more than one match is found in the catalog for a +given record, Evergreen will attempt to perform the merge/overlay action with +the best match as defined by the match score and quality metric. ++ +NOTE: Quality ratio affects only the *Merge on Single Match* and *Merge on Best +Match* options. + +. Enter a *Best/Single Match Minimum Quality Ratio.* Divide the incoming +record quality score by the record quality score of the best match that might +exist in the catalog.
By default, Evergreen will assign any record a quality +score of 1 (one). If you want to ensure that the inbound record is only +imported when it has a higher quality than the best match, then you must enter +a ratio that is higher than 1. For example, if you want the incoming record to +have twice the quality of an existing record, then you should enter a 2 (two) +in this field. If you want to bypass all quality restraints, enter a 0 (zero) +in this field. + +. Select an *Insufficient Quality Fall-Through Profile* if desired. This +field enables you to indicate that if the inbound record does not meet the +configured quality standards, then you may still import the record using an +alternate merge profile. This field is typically used for selecting a merge +profile that allows the user to import holdings attached to a lower quality +record without replacing the existing (target) record with the incoming record. +This field is optional. + +. Under *Copy Import Actions*, choose _Auto-overlay In-process Acquisitions +Copies_ if you want to overlay temporary copies that were created by the +Acquisitions module. The system will attempt to overlay copies that: + +* have associated lineitem details (that is, they were created by the acquisitions process), +* that lineitem detail has the same owning_lib as the incoming copy's owning_lib, and +* the current copy associated with that lineitem detail is _In process_. + +. *Browse* to find the appropriate file, and click *Upload*. The file will +be uploaded to a queue. The file can be in either MARC or MARCXML format. ++ +image::media/marc_batch_import_acq_overlay.png[Batch Importing MARC Records] + +. The screen will display records that have been uploaded to your queue. Above +the table there are three sections: + * *Queue Actions* lists common actions for this queue. 
_Export Non-Imported +Records_ will export a MARC file of records that failed to import, allowing +those records to be edited as needed and imported separately. (Those +records can be viewed by clicking the _Limit to Non-Imported Records_ +filter.) + * *Queue Summary* shows a brief summary of the records included in the queue. + * *Queue Filters* provides options for limiting which records display in the +table. ++ +image::media/Batch_Importing_MARC_Records15.jpg[Batch_Importing_MARC_Records15] + +. If Evergreen indicates that matching records exist, then click the +*Matches* link to view the matching records. Check the box adjacent to the +existing record that you want to merge with the incoming record. ++ +image::media/Batch_Importing_MARC_Records10.jpg[Batch_Importing_MARC_Records10] + +. Click *Back to Import Queue*. + +. Check the boxes of the records that you want to import, and click *Import +Selected Records*, or click *Import All Records*. + +. A pop up window will offer you the same import choices that were present on +the *Import Records* screen. You can choose one of the import options, or +click *Import*. ++ +image::media/marc_batch_import_popup.png[Batch Importing MARC Records Popup] + +. The screen will refresh. The *Queue Summary* indicates that the record was +imported. The *Import Time* column records the date that the record was +imported. Also, the *Imported As* column should now display the database ID (also known as the bib record number) for the imported record. ++ +image::media/Batch_Importing_MARC_Records12.jpg[Batch_Importing_MARC_Records12] + +. You can confirm that the record was imported by using the value of the *Imported As* column by selecting the menu *Cataloging* -> *Retrieve title by database ID* and using the supplied *Imported As* number. Alternatively, you can search the catalog to confirm that the record was imported. 
++ +image::media/Batch_Importing_MARC_Records14.jpg[Batch_Importing_MARC_Records14] + + +== Default Values for Item Import == + +Evergreen now supports additional functionality for importing items through *Cataloging* -> *MARC Batch Import/Export*. When items are imported via a *Holdings Import Profile* in *Cataloging* -> *MARC Batch Import/Export*, Evergreen will create an item-level record for each copy. If an item barcode, call number, shelving location, or circulation modifier is not set in the embedded holdings, Evergreen will apply a default value based on the configured Library Settings. A default prefix can be applied to the auto-generated call numbers and item barcodes. + +The following *Library Settings* can be configured to apply these default values to imported items: + +* *Vandelay: Generate Default Barcodes* - Auto-generate default item barcodes when no item barcode is present + +* *Vandelay: Default Barcode Prefix* - Apply this prefix to any auto-generated item barcodes + +* *Vandelay: Generate Default Call Numbers* - Auto-generate default item call numbers when no item call number is present + +* *Vandelay: Default Call Number Prefix* - Apply this prefix to any auto-generated item call numbers + +* *Vandelay: Default Copy Location* - Default copy location value for imported items + +* *Vandelay: Default Circulation Modifier* - Default circulation modifier value for imported items + diff --git a/docs/modules/cataloging/pages/cataloging_electronic_resources.adoc b/docs/modules/cataloging/pages/cataloging_electronic_resources.adoc new file mode 100644 index 0000000000..9228b51377 --- /dev/null +++ b/docs/modules/cataloging/pages/cataloging_electronic_resources.adoc @@ -0,0 +1,158 @@ += Cataloging Electronic Resources -- Finding Them in Catalog Searches = +:toc: +There are two ways to make electronic resources visible in the catalog without +adding items to the record: + +. Adding a Located URI to the record +.
Attaching the record to a bib source that is transcendent + +The Located URI approach is useful for Evergreen sites where libraries have +access to different electronic resources. The transcendent bib source approach +is useful if all libraries have access to the same electronic resources. + +Another difference between the two approaches is that electronic resources with +Located URIs never appear in results where the search is limited to one or more +specific shelving locations. In contrast, transcendent electronic resources will appear in +results limited to any shelving location. + +== Adding a Located URI to the Record == +A Located URI allows you to add the short name for the owning library to the 856 +field to indicate which organizational units should be able to find the +resource. The owning organizational unit can be a branch, system, or consortium. + +A global flag called _When enabled, Located URIs will provide visibility +behavior identical to copies_ will determine where these resources will appear +in search results. This flag is available through *Admin* -> *Server +Administration* -> *Global Flags*. + +If the _When enabled, Located URIs will provide visibility behavior identical +to copies_ flag is set to False (default behavior): + +* When the user's search scope is set at the owning organizational unit or to +a child of the owning organizational unit, the record will appear in search +results. +* When a logged-in user's preferred search library is set to the owning +organizational unit or to a child of that owning organizational unit, the record +will appear regardless of search scope. + +If the _When enabled, Located URIs will provide visibility behavior identical +to copies_ flag is set to True: + +* When the user's search scope is set at the owning organizational unit, at a +child of the owning organizational unit, or at a parent of the owning +organizational unit, the record will appear in search results.
+* When a logged-in user's preferred search library is set to the owning +organizational unit, to a child of the owning organizational unit, or to a +parent (with the exception of the consortium) of the owning organizational unit, +the record will appear regardless of search scope. + + +To add a located URI to the record: + +. Open the record in _MARC Edit_ +. Add a subfield 9 to the 856 field of the record and enter the short name of +the organizational unit for the value. Make sure there is a 4 entered as the +first indicator and a 0 entered as the second indicator. +For example: ++ +'856 40 $u http://lwn.net $y Linux Weekly News $9 BR1' ++ +would make this item visible to people searching in a library scope of BR1 or to +logged-in users who have set BR1 as their preferred search library. ++ +[NOTE] +If multiple organizational units own the resource, you can enter more than one +subfield 9 to the 856 field or you can enter multiple 856 fields with a subfield +9 to the record ++ +. Save the record + +[NOTE] +When troubleshooting located URIs, check to make sure there are no spaces either +before or after the organizational unit short name. + +=== Located URI Example 1 === + +The _When enabled, Located URIs will provide visibility behavior identical to +copies_ flag is set to False (default behavior) + +The Record has two 856 fields: one with SYS1 in subfield 9 and the other with +BR4 in subfield 9 + +* Any user searching SYS1 or any of its children (BR1, BR2, SL1) will find the +record. These users will only see the URL belonging to SYS1. +* Any user searching BR4 will find the record. These users will only see the +URL belonging to BR4. +* A user searching SYS2 will NOT find the record because SYS2 is a parent of +an owning org unit, not a child. The same thing happens if the user is searching +the consortium. In this case, the system assumes the user is unlikely to +have access to this resource and therefore does not retrieve it. 
+* A logged-in user with a preferred search library of BR4 will find the record +at any search scope. This user will see the URL belonging to BR4. Because this +user previously identified a preference for using this library, the system +assumes the user is likely to have access to this resource. +* A logged-in user with a preferred search library of BR4 who is searching SYS1 +or any of its children will also retrieve the record. In this case, the user +will see both URLs, the one belonging to SYS1 because the search library matches +or is a child of the owning organizational unit and the one belonging to BR4 +because it matches or is a child of the preferred search library. The URL +belonging to the search library (if it is an exact match, not a child) will sort +to the top. + +=== Located URI Example 2 === + +The _When enabled, Located URIs will provide visibility behavior identical to +copies_ flag is set to True + +The Record has two 856 fields: one with SYS1 in subfield 9 and the other with +BR4 in subfield 9 + +* Any user searching SYS1 or any of its children (BR1, BR2, SL1) will find the +record. These users will only see the URL belonging to SYS1. +* Any user searching BR4 will find the record. These users will only see the +URL belonging to BR4. +* Any user searching the consortium will find the record. These users will see +both URLs in the record. In this case, the system sees this user as a potential +user of SYS2 or BR4 and therefore offers them the option of accessing the +resource through either URL. +* A user searching SYS2 will find the record because SYS2 is a parent of +an owning org unit. The user will see the URL belonging to BR4. Once again, +the system sees this user as a potential user of BR4 and therefore offers +them the option of accessing this resource. +* A user searching BR3 will NOT find the record because BR3 is neither a child +nor a parent of an owning organizational unit. 
+ +* A logged-in user with a preferred search library of BR4 who is searching BR3 +will find the record. This user will see the URL belonging to BR4. Because this +user previously identified a preference for using this library, the system +assumes the user is likely to have access to this resource. +* A logged-in user with a preferred search library of BR4 who is searching SYS1 +or any of its children will also retrieve the record. In this case, the user +will see both URLs, the one belonging to SYS1 because the search library matches +or is a child of the owning organizational unit and the one belonging to BR4 +because it matches or is a child of the preferred search library. The URL +belonging to the search library (if it is an exact match, not a child) will sort +to the top. + +== Using Transcendent Bib Sources for Electronic Resources == +Connecting a bib record to a transcendent bib source will make the record +visible in search results regardless of the user's search scope. + +To start, you need to create a transcendent bib source by adding it to +'config.bib_source' in the Evergreen database and setting the _transcendant_ +field to true. For example: + ++# INSERT INTO config.bib_source(quality, source, transcendant, can_have_copies) +VALUES (50, 'ebooks', TRUE, FALSE);+ + +[NOTE] +If you want to allow libraries to add copies to these records, set the +_can_have_copies_ field to _TRUE_. If you want to prevent libraries from adding +copies to these records, set the _can_have_copies_ field to _FALSE_. + +When adding or uploading bib records for electronic resources, set the +bibliographic source for the record to the newly-created transcendent +bibliographic source. Using the staff client, the bibliographic source can be +selected in the _MARC Batch Import_ interface when importing new, non-matching +records or in the _MARC Edit_ interface when editing existing records.
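The Located URI visibility rules walked through in the two examples earlier can be condensed into a small decision function. The sketch below is illustrative Python only -- it is not Evergreen's implementation -- and it models just the flag-set-to-False case; the org tree is the example hierarchy (CONS above SYS1 and SYS2, with their branches), which is an assumption drawn from the examples:

```python
# Sketch of Located URI visibility with the global flag set to False
# (NOT Evergreen's code; org tree taken from the documentation's examples).
PARENTS = {"BR1": "SYS1", "BR2": "SYS1", "SL1": "SYS1",
           "BR3": "SYS2", "BR4": "SYS2",
           "SYS1": "CONS", "SYS2": "CONS", "CONS": None}

def within(org, ancestor):
    """True if org is the ancestor itself or one of its descendants."""
    while org is not None:
        if org == ancestor:
            return True
        org = PARENTS[org]
    return False

def visible(owning_orgs, search_scope, preferred_lib=None):
    """Visible when the search scope -- or the user's preferred search
    library -- is an owning org unit or a child of one."""
    scopes = [search_scope] + ([preferred_lib] if preferred_lib else [])
    return any(within(s, owner) for s in scopes for owner in owning_orgs)

# Example 1: the record's 856 fields carry $9 SYS1 and $9 BR4.
print(visible(["SYS1", "BR4"], "BR1"))          # child of SYS1 -> True
print(visible(["SYS1", "BR4"], "SYS2"))         # parent of BR4 -> False
print(visible(["SYS1", "BR4"], "CONS", "BR4"))  # preferred lib BR4 -> True
```

Note how the SYS2 search fails: with the flag off, parents of an owning org unit are never matched, which is exactly the behavior Example 1 describes.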
+ + diff --git a/docs/modules/cataloging/pages/conjoined_items.adoc b/docs/modules/cataloging/pages/conjoined_items.adoc new file mode 100644 index 0000000000..6119c4c9f0 --- /dev/null +++ b/docs/modules/cataloging/pages/conjoined_items.adoc @@ -0,0 +1,94 @@ += Conjoined Items = +:toc: + +Prior to Evergreen version 2.1, items could be attached to only one bibliographic record. The Conjoined Items feature in Evergreen 2.1 enables catalogers to link items to multiple bibliographic records. This feature will enable more precise cataloging. For example, catalogers will be able to indicate items that are printed back to back, are bilingual, are part of a bound volume, are part of a set, or are available as an e-reader pre-load. This feature will also help the user retrieve more relevant search results. For example, a librarian catalogs a multi-volume festschrift. She can create a bibliographic record for the festschrift and a record for each volume. She can link the items on each volume to the festschrift record so that a patron could search for a volume or the festschrift and retrieve information about both works. + +In the example below, a librarian has created a bibliographic record for two bestselling items. These books are available as physical copies in the library, and they are available as e-reader downloads. The librarian will link the copy of the Kindle to the bibliographic records that are available on the e-reader. + +== Using the Conjoined Items Feature == + +The Conjoined Items feature was designed so that you can link items between bibliographic records when you have the item in hand, or when the item is not physically present. Both processes are described here. The steps are fewer if you have the item in hand, but both processes accomplish the same task. This document also demonstrates the process to edit or delete links between items and bibliographic records. Finally, the permission a cataloger needs to use this feature is listed. 
+
+.Scenario 1: I want to link an item to another bibliographic record, but I do not have the item in hand
+
+1. Retrieve the bibliographic record to which you would like to link an item.
+
+2. Click *Actions for this Record -> Mark as Target for Conjoined Items.*
++
+image::media/conjoined_menu_markfor.png[Menu: Mark as Target for Conjoined Items]
+
+3. A confirmation message will appear. Click *OK.*
+
+4. In a new tab, retrieve the bibliographic record with the item that
+you want to link to the other record.
+
+5. Click *Actions for this Record -> Holdings Maintenance.*
+
+6. Select the copy that you want to link to the other bibliographic
+record. Right-click, or click *Actions for Selected Rows -> Link as
+Conjoined Items to Previously Marked Bib Record.*
++
+image::media/conj2.jpg[conj2]
+
+7. The *Manage Conjoined Items* interface opens in a new tab. This
+interface enables you to confirm the success of the link, and to change
+the peer type if desired. The *Result* column indicates that you
+created a successful link between the item and the bib record.
++
+image::media/conj3.jpg[conj3]
++
+The default peer type, *Back-to-back*, was set as the peer type for our item. To change a peer type after the link has been created, right-click or click *Actions for Selected Items -> Change Peer Type*. A drop down menu will appear. Select the desired peer type, and click *OK.*
++
+image::media/conj4.jpg[conj4]
+
+8. The *Result* column will indicate that the *Peer Type* has been *Updated.*
++
+image::media/conj5.jpg[conj5]
+
+9. To confirm the link between the item and the desired bib record,
+reload the tab containing the bib record to which you linked the item.
+You should now see the copy linked in the copies table.
++
+image::media/conjoined_opac.png[Catalog Record showing Conjoined Item link]
+
+
+.Scenario 2: I want to link an item to another bibliographic record, and I do have the item in hand
+
+1. 
Retrieve the bibliographic record to which you would like to add the item.
+
+2. Click *Actions for this Record -> Manage Conjoined Items.*
++
+image::media/conjoined_menu_markfor.png[Menu: Manage Conjoined Items]
+
+3. A note in the bottom left corner of the screen will confirm that the
+record was targeted for linkage with conjoined items, and the *Manage
+Conjoined Items* screen will appear.
+
+4. Select the peer type from the drop down menu, and scan in the barcode
+of the item that you want to link to this record.
+
+5. Click *Link to Bib (Submit).*
++
+image::media/conj10.jpg[conj10]
+
+6. The linked item will appear in the screen. The *Result* column indicates Success.
+
+7. To confirm the linkage, click *Actions for this Record -> OPAC View.*
+
+8. When the bibliographic record appears, click *Reload*. _Linked Titles_
+will show the linked title and item.
+
+
+.Scenario 3: I want to edit or break the link between a copy and a bibliographic record
+
+1. Retrieve the bibliographic record that has a copy linked to it.
+
+2. Click *Actions for this Record -> Manage Conjoined Items.*
+
+3. Select the copy that you want to edit, and right-click or click
+*Actions for Selected Items.*
+
+4. Make any changes, and click *OK.*
+
+
+The following permission is required to use this feature:
+
+* UPDATE_COPY - Link items to bibliographic records
diff --git a/docs/modules/cataloging/pages/copy-buckets_web_client.adoc b/docs/modules/cataloging/pages/copy-buckets_web_client.adoc
new file mode 100644
index 0000000000..2f448e3ced
--- /dev/null
+++ b/docs/modules/cataloging/pages/copy-buckets_web_client.adoc
@@ -0,0 +1,289 @@
+= Item Buckets =
+:toc:
+
+Item buckets are containers that copy records can be put into in order to easily perform batch actions on them. Copies stay in buckets until they are removed.
+
+The _Item Bucket_ interface is accessed by going to *Cataloguing* -> *Copy Buckets*.
+
+image::media/copy-bucket-2.png[Cataloguing Menu]
+
+NOTE: The words _copy_ and _item_ are used interchangeably in Evergreen.
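+Buckets themselves are ordinary database rows. The following is a hedged
+sketch for administrators with direct SQL access; the table and column names
+are as found in recent Evergreen versions, so verify them against your own
+schema first:
+
+[source,sql]
+----
+-- List each item bucket and how many copies it contains
+SELECT b.id, b.name, b.owner, COUNT(i.id) AS copies
+  FROM container.copy_bucket b
+  LEFT JOIN container.copy_bucket_item i ON i.bucket = b.id
+ GROUP BY b.id, b.name, b.owner
+ ORDER BY b.name;
+----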
+
+== Managing Item Buckets ==
+
+=== Creating Item Buckets ===
+
+Item buckets can be created in the _Item Bucket_ interface as well as on the fly when adding items to a bucket from
+a catalogue search or from within the _Item Status_ interface. For information on creating buckets on the fly, see _Adding Copies to a Bucket_ below.
+
+1. In the _Item Bucket_ interface, click *Buckets* in either the _Pending Copies_ or _Bucket View_ tab.
++
+image::media/copy-bucket-new-1.png[Item Bucket Interface]
++
+2. From the drop down menu select *New Bucket*.
++
+image::media/copy-bucket-new-2.png[Item Bucket Interface]
++
+3. Enter a _Name_ and a _Description_ (optional) for your bucket and click *Create Bucket*.
++
+image::media/copy-bucket-new-3.png[Item Bucket Interface]
++
+The bucket can also be set as _Publicly Visible_ at this time.
+
+NOTE: The functionality for making buckets publicly visible does not appear to be in place at this time.
+
+=== Editing Item Buckets ===
+
+1. In the _Item Bucket_ interface click *Buckets* in either the _Pending Copies_ or _Bucket View_ tab.
++
+image::media/copy-bucket-new-1.png[Item Bucket Interface]
++
+2. From the drop down menu select the bucket you would like to edit. The bucket will load in the interface.
+3. Click on *Buckets*.
+4. From the drop down menu select *Edit Bucket*.
++
+image::media/copy-bucket-edit-1.png[Item Bucket Interface]
++
+5. Update the desired information and click *Apply Changes*.
++
+image::media/copy-bucket-edit-2.png[Item Bucket Interface]
+
+NOTE: The functionality for making buckets publicly visible does not appear to be in place at this time.
+
+=== Sharing Item Buckets ===
+
+==== Finding the Bucket ID ====
+
+1. With the bucket open, look at the URL for the bucket ID. Share this ID with the staff member who needs access to this bucket.
+
+image::media/copy-bucket-share-1.png[Bucket ID URL]
+
+==== Opening a Shared Bucket ====
+
+. 
In the _Item Bucket_ interface click *Buckets* in either the _Pending Copies_ or _Bucket View_ tab.
++
+image::media/copy-bucket-new-1.png[Item Bucket Interface]
++
+. From the drop down menu select *Shared Bucket*.
++
+image::media/copy-bucket-share-2.png[Item Bucket Interface]
++
+. Enter the bucket ID and click *Load Bucket*.
++
+image::media/copy-bucket-share-3.png[Item Bucket Interface]
++
+. The shared bucket will display and can be worked with the same as any bucket you own.
++
+image::media/copy-bucket-share-4.png[Item Bucket Interface]
+
+=== Deleting Item Buckets ===
+
+1. In the _Item Bucket_ interface click *Buckets* in either the _Pending Copies_ or _Bucket View_ tab.
++
+image::media/copy-bucket-new-1.png[Item Bucket Interface]
++
+2. From the drop down menu select the bucket you would like to delete. The bucket will load in the interface.
+3. Click on *Buckets*.
+4. From the drop down menu select *Delete Bucket*.
++
+image::media/copy-bucket-delete-1.png[Item Bucket Interface]
++
+5. On the confirmation pop up click *Delete Bucket*.
+6. Refresh your screen.
+
+
+== Adding Copies to a Bucket ==
+
+=== From the Item Bucket Interface ===
+
+1. In the _Item Bucket_ interface click on the *Pending Copies* tab.
++
+image::media/copy-bucket-pending-1.png[Item Bucket Interface]
++
+2. Scan in all of the items you wish to add to the bucket.
++
+image::media/copy-bucket-pending-3.png[Item Bucket Interface]
++
+3. Click on *Buckets*.
+4. From the drop down menu select the bucket you wish to add the items to.
+Alternatively you can create a *New Bucket*.
++
+image::media/copy-bucket-pending-2.png[Item Bucket Interface]
++
+5. Use the check boxes to select the item(s) you wish to add to the bucket.
+6. Click *Actions*.
+7. From the drop down menu select *Add To Bucket*.
++
+image::media/copy-bucket-pending-4.png[Item Bucket Interface]
++
+8. 
The number of items in the bucket, displayed beside the bucket name, will update as will the number on the *Bucket View* tab.
++
+image::media/copy-bucket-pending-5.png[Item Bucket Interface]
+
+NOTE: Once you have added your selected items to a bucket you can deselect them, select other items on your pending list, and add those items to a different bucket.
+
+
+=== From a Catalogue Search ===
+
+1. Retrieve the title through a catalogue search.
+2. If it is not your default view click on the *Holdings View* tab.
++
+image::media/copy-bucket-cat-1.png[Holdings View]
++
+3. Use the check boxes to select the item(s) you would like to add to the bucket.
+4. Click *Actions*.
+5. From the drop down menu select *Add Items to Bucket*.
++
+image::media/copy-bucket-cat-2.png[Holdings View]
++
+6. Enter a name for your bucket or select an existing bucket from the drop down menu.
+7. Click *Add To New Bucket* or *Add To Selected Bucket*.
++
+image::media/copy-bucket-cat-3.png[Item Bucket Interface]
++
+8. Repeat steps 1 through 7 to add additional items.
+
+
+=== From the Scan Item Interface ===
+
+. Click on _Search_ -> _Search for Copies by Barcode_
+. Scan the barcode(s) of the item(s) you wish to add to the bucket.
+. Make sure that the items you want to add are selected (i.e. that the checkbox on the left
+side of the screen is checked).
+. Right click on one of the selected items.
+. Click _Add items to bucket_.
+. Choose the existing bucket that you'd like to add to, or create a new bucket.
+
+
+== Removing Copies from a Bucket ==
+
+. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab.
++
+image::media/copy-bucket-remove-1.png[Item Bucket Interface]
++
+. Click on *Buckets*.
+. From the drop down menu select the bucket containing the item(s) you would like to remove.
++
+image::media/copy-bucket-remove-2.png[Item Bucket Interface]
++
+. Use the check boxes to select the item(s) you wish to remove from the bucket.
+. Click *Actions*.
+. 
From the drop down menu select *Remove Selected Copies from Bucket*. ++ +image::media/copy-bucket-remove-3.png[Item Bucket Interface] ++ +. Your bucket will reload and the selected item(s) will no longer be in the bucket. + +== Editing Copies in a Bucket == + +. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab. ++ +image::media/copy-bucket-remove-1.png[Item Bucket Interface] ++ +. Click on *Buckets*. +. From the drop down menu select the bucket containing the item(s) you would like to edit. ++ +image::media/copy-bucket-remove-2.png[Item Bucket Interface] ++ +. Use the check boxes to select the item(s) you wish to edit. +. Click *Actions*. +. From the drop down menu select *Edit Selected Copies*. ++ +image::media/copy-bucket-edit-copy-1.png[Item Bucket Interface] ++ +. The _Copy Editor_ will open in a new tab. Make your edits and then click *Save and Exit*. ++ +image::media/copy-bucket-edit-copy-2.png[Item Bucket Interface] ++ +. Your items have been updated. ++ +image::media/copy-bucket-edit-copy-3.png[Item Bucket Interface] + +== Deleting Copies from the Catalogue == + +. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab. ++ +image::media/copy-bucket-remove-1.png[Item Bucket Interface] ++ +. Click on *Buckets*. +. From the drop down menu select the bucket containing the item(s) you would like to delete from the catalogue. ++ +image::media/copy-bucket-remove-2.png[Item Bucket Interface] ++ +. Use the check boxes to select the item(s) you wish to delete. +. Click *Actions*. +. From the drop down menu select *Delete Selected Copies from Catalog*. ++ +image::media/copy-bucket-delete-copy-1.png[Item Bucket Interface] ++ +. On the confirmation pop up click *OK/Continue*. ++ +image::media/copy-bucket-delete-copy-2.png[Item Bucket Interface] ++ +. The items have been deleted from the catalogue. + + +== Placing Holds on Copies in a Bucket == + +. Open the _Item Bucket_ interface. 
By default you are on the *Bucket View* tab.
++
+image::media/copy-bucket-remove-1.png[Item Bucket Interface]
++
+. Click on *Buckets*.
+. From the drop down menu select the bucket containing the item(s) you would like to place a hold on.
++
+image::media/copy-bucket-remove-2.png[Item Bucket Interface]
++
+. Use the check boxes to select the item(s) you wish to place a hold on.
+. Click *Actions*.
+. From the drop down menu select *Request Selected Copies*.
++
+image::media/copy-bucket-request-1.png[Item Bucket Interface]
++
+. Enter the barcode of the patron for whom the hold is being placed. By default the system enters the barcode of the account logged into the client.
++
+image::media/copy-bucket-request-2.png[Item Bucket Interface]
++
+. Select the correct _Pickup Library_.
+. Select the correct _Hold Type_.
+. Click *OK*.
+. The hold has been placed.
+
+
+== Transferring Copies to Volumes ==
+
+1. Retrieve the title through a catalogue search.
+2. If it is not your default view click on the *Holdings View* tab.
++
+image::media/copy-bucket-cat-1.png[Holdings View]
++
+3. Use the check boxes to select the volume you would like to transfer the item(s) to.
+4. Click *Actions*.
+5. From the drop down menu select *Volume as Item Transfer Destination*.
++
+image::media/copy-bucket-transfer-1.png[Holdings View]
++
+6. Open the _Item Bucket_ interface. By default you are on the *Bucket View* tab.
++
+image::media/copy-bucket-remove-1.png[Item Bucket Interface]
++
+7. Click on *Buckets*.
+8. From the drop down menu select the bucket containing the item(s) you would like to transfer to the volume.
++
+image::media/copy-bucket-remove-2.png[Item Bucket Interface]
++
+9. Use the check boxes to select the item(s) you wish to transfer.
+10. Click *Actions*.
+11. From the drop down menu select *Transfer Selected Copies to Marked Volume*.
++
+image::media/copy-bucket-transfer-2.png[Item Bucket Interface]
++
+12. The item(s) are transferred.
++ +image::media/copy-bucket-transfer-3.png[Item Bucket Interface] + + + + + + diff --git a/docs/modules/cataloging/pages/holdings_templates.adoc b/docs/modules/cataloging/pages/holdings_templates.adoc new file mode 100644 index 0000000000..377fe94c74 --- /dev/null +++ b/docs/modules/cataloging/pages/holdings_templates.adoc @@ -0,0 +1,27 @@ += Working with holdings templates = +:toc: + +Setting up holdings templates can save a lot of time when creating items, and they +also improve consistency and accuracy. Any time you find yourself creating multiple +items with the same item-level data, you may wish to create a holdings template +to automate that process. + +== Creating a new holdings template == + +* Open _Administration_ -> _Local Administration_ -> _Holdings Template Editor_. +* Select the desired template attributes by moving through the fields in the +editor. The attributes you've changed will appear in green. If you want to +start this process over, you can click the _Clear_ button in the top right +corner of the screen. +* Type a name for your template into the box labeled _Template_ at the top +of the screen. +* Press the _Save_ button. + +== Using a holdings template == + +Whenever you see the holdings editor, you can use data from your templates. + +* In the _Template_ menu, choose the template you wish to use. +* Click _Apply_. +* Make any other necessary changes. + diff --git a/docs/modules/cataloging/pages/introduction.adoc b/docs/modules/cataloging/pages/introduction.adoc new file mode 100644 index 0000000000..e2feb18836 --- /dev/null +++ b/docs/modules/cataloging/pages/introduction.adoc @@ -0,0 +1,3 @@ += Introduction = +:toc: +This part describes cataloging in Evergreen. 
diff --git a/docs/modules/cataloging/pages/item_status.adoc b/docs/modules/cataloging/pages/item_status.adoc
new file mode 100644
index 0000000000..2bebee8b76
--- /dev/null
+++ b/docs/modules/cataloging/pages/item_status.adoc
@@ -0,0 +1,89 @@
+= Using the Item Status interface =
+:toc:
+indexterm:[copies]
+indexterm:[items]
+
+The Item Status interface is a powerful tool that can give you a lot of information
+about specific items in your catalog.
+
+== Accessing the Item Status interface ==
+
+There are three ways to access the Item Status interface:
+
+=== Through the Search menu ===
+
+. Click *Search -> Search for Copies by Barcode*.
+. Scan your barcode.
+
+=== Through the Circulation menu ===
+
+. Click *Circulation -> Item Status*.
+. Scan your barcode.
+
+=== From the OPAC view ===
+
+. Click *Search -> Search the Catalog*.
+. Find a bibliographic record that you are interested in.
+. Make sure you are on the _OPAC View_ tab of that record.
+. Locate the _BARCODE_ column in the holdings section.
+. Click _view_ next to the barcode of the item you're interested
+in.
+
+
+== Specific fields ==
+
+=== Active date ===
+indexterm:[active date]
+indexterm:[copies,activating]
+indexterm:[items,activating]
+
+This date is automatically added by Evergreen the first time
+an item receives a status that is considered active (i.e. the
+first date on which patrons could access the copy). While your
+consortium may customize which statuses are considered active
+and which are not, statuses like _Available_ and _On holds shelf_
+are typically considered active, and statuses like _In process_ or
+_On order_ are typically not.
+
+== Printing spine labels ==
+
+indexterm:[spine labels]
+indexterm:[printing, spine labels]
+indexterm:[item labels]
+indexterm:[printing, item labels]
+indexterm:[pocket labels]
+
+Before printing spine labels, you will want to install Hatch
+or turn off print headers and footers in your browser.
+
+include::admin:partial$turn-off-print-headers-firefox.adoc[]
+
+include::admin:partial$turn-off-print-headers-chrome.adoc[]
+
+=== Creating spine labels ===
+
+To create spine and item labels for an item (or group of items):
+
+. Click *Circulation -> Item Status*.
+. Scan your barcode(s).
+. Select all the items you'd like to print labels for.
+. Right-click on the items, or click the Actions drop-down menu.
+. Under _Show_, click on _Print Labels_.
+. Take a look at the Label Preview area.
+. When you are satisfied with your labels, click the _Print_ button.
+
+== Request Items Action ==
+
+To place requests from the Item Status interface, select one or more items in List View and select *Actions -> Request Items*. This action can also be invoked for a single item from Item Status Detail View.
+
+Starting in 3.4, this action has an Honor User Preferences checkbox which does the following for the selected user when checked:
+
+* Changes the Pickup Library selection to match the user's Default Hold Pickup Location
+* Honors the user's Holds Notification settings (including Default Phone Number, etc.)
+
+Also beginning with 3.4, a Title Hold option has been added to the Hold Type menu. This will create one title-level hold request for each unique title associated with the items that were selected when Request Items was invoked.
+
+image::media/request_from_item_status.png[Request from Item Status]
+
+Success and Failure toasts have also been added based on what happens after the Request Items interface has closed.
+
diff --git a/docs/modules/cataloging/pages/item_tags_cataloging.adoc b/docs/modules/cataloging/pages/item_tags_cataloging.adoc
new file mode 100644
index 0000000000..f44c553ef7
--- /dev/null
+++ b/docs/modules/cataloging/pages/item_tags_cataloging.adoc
@@ -0,0 +1,89 @@
+= Item Tags =
+:toc:
+
+indexterm:[copy tags]
+
+Item Tags allow staff to apply custom, pre-defined labels or tags to items.
Item tags are visible in the public catalog and are searchable in both the staff client and public catalog based on configuration. This feature was designed to be used for Digital Bookplates to attach donation or memorial information to items, but may be used for broader purposes to tag items.
+
+Item tags can be created ahead of time in the Administration module (See the Administration section of this documentation for more information.) and then applied to items, or they can be created on the fly during the cataloging process.
+
+== Adding Existing Item Tags to Items ==
+
+Item Tags can be added to existing items or to new items as they are cataloged. To add an item tag:
+
+. In the _Holdings Editor_, click on *Item Tags*. A dialog box called _Manage Item Tags_ will appear.
+
+image::media/item_tag_button.png[Location of Item Tag Button]
+
+. Select the *Tag Type* from the drop down menu and start typing in the Tag field to bring up tag suggestions from the existing item tags. Select the tag and click *Add Tag*, then click *OK*.
+.. If you are cataloging a new item, make any other changes to the item record.
+. Click *Save & Exit*. The item tag will now appear in the catalog.
+
+image::media/manage_item_tags.png[Assigning an Item Tag]
+
+image::media/copytags7.PNG[Item Tags in the OPAC]
+
+== Creating and Applying an Item Tag During Cataloging ==
+
+Item tags can be created in the Holdings Editor on the fly while cataloging or viewing an item:
+
+. In the _Holdings Editor_, click on *Item Tags*. A dialog box called _Manage Item Tags_ will appear.
+. Select the *Tag Type* from the drop down menu and type in the new Tag you want to apply to the item. Click *Add Tag*, then click *OK*. The new tag will be created and attached to the item. It will be owned by the organization unit your workstation is registered to. The tag can be modified under *Admin->Local Administration->Item Tags*.
+
+
+== Removing Item Tags from Items ==
+
+To remove an item tag from an item:
+
+. 
In the Holdings Editor, click on *Item Tags*. A dialog box called _Manage Item Tags_ will appear.
+. Click *Remove* next to the tag you would like to remove, and click *OK*.
+. Click *Save & Exit*. The item tag will now be removed from the catalog.
+
+image::media/remove_item_tag.png[Removing an Item Tag]
+
+
+== Adding Item Tags to Items in Batch ==
+
+Item tags can be added to multiple items in batch using _Item Buckets_. After adding the items to an Item Bucket:
+
+. Go to *Cataloging->Item Buckets->Bucket View* and select the bucket from the Buckets drop down menu.
+. Select the items to which you want to add the item tag and go to *Actions->Apply Tags* or right-click and select *Apply Tags*. The _Apply Item Tags_ dialog box will appear.
+. Select the *Tag Type* and enter the *Tag*. Click *Add Tag*, then click *OK*. The item tag will now be attached to the items.
+
+image::media/copytags9.PNG[Item Bucket View]
+
+NOTE: It is not possible to remove tags using the Item Bucket interface.
+
+== Searching Item Tags ==
+
+Item Tags can be searched in the public catalog if searching has been enabled via Library Settings. Item Tags can be searched in the Basic and Advanced Search interfaces by selecting Digital Bookplate as the search field. Specific item tags can also be searched using a Keyword search and a specific search syntax.
+
+=== Digital Bookplate Search Field ===
+
+*Basic Search*
+
+image::media/copytags10.png[Digital Bookplates Search Field Location in Basic Search]
+
+*Advanced Search*
+
+image::media/copytags11.png[Digital Bookplates Search Field Location in Advanced Search]
+
+
+=== Keyword Search ===
+
+Item Tags can also be searched by using a Keyword search in the Basic and Advanced search interfaces.
Searches need to be constructed using the following syntax: + + +copy_tag(item tag type code, search term) + + +For example: + + +copy_tag(bookplate, friends of the library) + + +It is also possible to conduct a wildcard search across all item tag types: + +copy_tag(*, smith) + diff --git a/docs/modules/cataloging/pages/link_checker.adoc b/docs/modules/cataloging/pages/link_checker.adoc new file mode 100644 index 0000000000..e229c988bd --- /dev/null +++ b/docs/modules/cataloging/pages/link_checker.adoc @@ -0,0 +1,78 @@ += Link Checker = +:toc: + +The Link Checker enables you to verify the validity of URLs stored in MARC records. +The ability to verify URLs would benefit locations with large electronic resource collections. + +== Search for URLs == + +Search for MARC records that contain URLs that you want to verify. + +. Click *Cataloging* -> *Link Checker*. +. Click *New Link Checker Session*. +. Create a session name. Note that each session must have a unique name. +. Select a search scope from the drop down menu. Records that would be retrieved by searching +Example Branch 1 (BR1) in an OPAC search would also be retrieved here. For example, +a record that describes an electronic resource with a URL in the 856 $u and an org unit code, +such as BR1, in the 856 $9, would be retrieved by a search of relevant keywords. Also, records +that contain a URL without the $9 subfield, but also have physical copies at BR1, would be +retrieved. Note that you can skip this step if you enter the org unit code of the location +that you want to search in the *Search* field. +. Enter search terms to retrieve records with URLs that you want to verify. You can also add +a location filter, such as BR1. +. You may further limit your search by selecting a saved search. Saved searches are filters made +up of specific criteria, such as shelving location or audience. Adding a saved search to your +keyword search will narrow your search for records with URLs. This step is optional. +. 
Enter tags and subfields that contain URLs in the appropriate boxes. Click *Add* after you enter
+the data in the fields. You can add multiple tags and subfields by repeating this process. Evergreen
+will search for records that match your search terms, and then, from the set that it retrieves, it
+will extract any URLs from all of the tag/subfield locations you have specified for the session.
+. To view and manually verify the URLs that Evergreen retrieves, leave the *Process Immediately* checkbox
+unchecked. If you want Evergreen to automatically verify the URLs that it retrieves, then check the *Process Immediately* checkbox.
+. Click *Begin* to process your search.
+
+image::media/Link_Checker1.jpg[Link_Checker1]
+
+
+== View Your Results ==
+
+If you do not click *Process Immediately*, then you must select the links that you want to verify, and click
+*Verify Selected URLs*. If you click *Process Immediately*, then you skip this step, and Evergreen
+jumps directly to the results of the verification attempts as seen in the next step.
+
+image::media/Link_Checker2.jpg[Link_Checker2]
+
+Evergreen displays the results of the verification attempts, including the tags that you searched,
+the URLs that Evergreen retrieved, the Bib Record ID, the request and result time, and the result code and text.
+
+image::media/Link_Checker6.jpg[Link_Checker6]
+
+== Manage Your Sessions ==
+
+=== Edit Columns ===
+
+You can use the *Column Picker* to add and remove columns on any of the *Link Checker* interfaces.
+To access the *Column Picker*, right click on any of the column headings. The columns are saved to your user account.
+
+
+=== Clone Sessions ===
+
+You can clone sessions that you run frequently or that have frequently-used parameters that
+need only minor adjustments to create new searches. To clone a session:
+
+. Click *Cataloging* -> *Link Checker*.
+. In the Session ID column, click *Clone*. A copy of the parameters of that search will appear.
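+The verification results shown above are also recorded in the database. The
+following is a hedged SQL sketch for pulling a session's results directly;
+the `url_verify` schema exists in recent Evergreen versions, but verify the
+exact table and column names against your own database first, and note that
+the session ID used here is purely a placeholder:
+
+[source,sql]
+----
+-- Result codes and text for every URL checked in one session
+SELECT u.url, v.req_time, v.res_time, v.res_code, v.res_text
+  FROM url_verify.url u
+  JOIN url_verify.url_verification v ON v.url = u.id
+ WHERE u.session = 1;  -- placeholder session ID
+----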
+ + +=== View Verification Attempts === + +To view the results of a verification attempt after you have closed the session, click *Cataloging* -> *Link Checker*. +Your link checker sessions appear in a list. To view the results of a session, click the *Open* link in the Session ID column. + +Click *Filter* to refine the results on this page. To add a filter: + +. Select a column from the first drop down menu. +. Select an operator from the second drop down menu. +. A third field will appear. Enter the appropriate text. +. Click *Apply* to apply the filter to your current results. Click *Save Filters* to save the filter to your user account for later use. + diff --git a/docs/modules/cataloging/pages/monograph_parts.adoc b/docs/modules/cataloging/pages/monograph_parts.adoc new file mode 100644 index 0000000000..1b341c9b94 --- /dev/null +++ b/docs/modules/cataloging/pages/monograph_parts.adoc @@ -0,0 +1,97 @@ += Monograph Parts = +:toc: + +*Monograph Parts* enables you to differentiate between parts of +monographs or other multi-part items. This feature enables catalogers +to describe items more precisely by labeling the parts of an item. For +example, catalogers might identify the parts of a monograph or the discs +of a DVD set. This feature also allows patrons more flexibility when +placing holds on multi-part items. A patron could place a hold on a +specific disc of a DVD set if they want to access a specific season or +episode rather than an entire series. + +Four new permissions are used by this functionality: + +* CREATE_MONOGRAPH_PART +* UPDATE_MONOGRAPH_PART +* DELETE_MONOGRAPH_PART +* MAP_MONOGRAPH_PART + +These permissions should be assigned at the consortial level to those +groups or users that will make use of the features described below. + + +== Add a Monograph Part to an Existing Record == + +To add a monograph part to an existing record in the catalog: + +. Retrieve a record. + +. Click the *Manage Parts* tab. 
++
+image::media/manage_parts_menu.jpg[Menu: Manage Parts]
+
+. Click the *New Monograph Part* button.
+
+. Enter the *label* that you want to appear to the user in the catalog,
+and click *Save*. This will create a list of monograph parts from which
+you can choose when you create holdings.
++
+image::media/monograph_parts2.jpg[monograph_parts2]
+
+. Add holdings. To add holdings to your workstation
+library, click the *Add Holdings* button in the *Record Summary* area above the tabs.
++
+To add holdings to your workstation library or other libraries,
+click the *Holdings View* tab, right-click the appropriate
+library, and choose *Add -> Call numbers and Items*.
++
+image::media/monograph_parts3.jpg[monograph_parts3]
+
+. The Holdings Editor opens. Enter the number of call numbers
+that you want to add to the catalog and the call number description.
+
+. Enter the number of items and barcode(s) of each item.
+
+. Choose the part label from the *Part* drop down menu.
++
+image::media/monograph_parts4.jpg[monograph_parts4]
+
+. Apply a template to the items, or edit fields in the *Working Items* section below.
++
+image::media/monograph_parts5.jpg[monograph_parts5]
+
+. Click *Store Selected* when those items are ready.
+
+. Review your completed items on the "Completed Items" tab.
+
+. When all items have been stored and reviewed, click "Save & Exit".
++
+NOTE: If you are only making one set of changes, you can simply click
+*Save & Exit* and skip the *Store Selected* stage.
+
+. The *Holdings View* tab now shows the new part information. These fields
+also appear in the OPAC View.
++
+image::media/manage_parts_opac.png[Catalog Record showing items with part details]
+
+== Monograph Part Merging ==
+
+The monograph part list for a bibliographic record may, over time, diverge from
+the prescribed format, resulting in multiple labels for what are essentially the
+same item. 
For instance, ++Vol.{nbsp}1++ may have variants
+like ++V.1++, ++Vol{nbsp}1++, or ++{nbsp}Vol.{nbsp}1++ (leading
+space). Merging parts will allow cataloging staff to collapse the variants into
+one value.
+
+In the Monograph Parts display:
+
+. Click the checkbox for all items you wish to merge including the one you wish
+to prevail when done.
+. Click on the ``Merge Selected'' button. A pop-up window will list the selected
+items in a monospaced font, with blanks represented by a middle-dot character
+for more visibility.
+. Click on the item you wish to prevail.
+
+The undesired part labels will be deleted, and any items that previously used
+those labels will now use the prevailing label.
diff --git a/docs/modules/cataloging/pages/overlay_record_3950_import.adoc b/docs/modules/cataloging/pages/overlay_record_3950_import.adoc
new file mode 100644
index 0000000000..44a384de1b
--- /dev/null
+++ b/docs/modules/cataloging/pages/overlay_record_3950_import.adoc
@@ -0,0 +1,55 @@
+= Overlay Existing Catalog Record via Z39.50 Import =
+:toc:
+
+This feature enables you to replace a catalog record with a record obtained through a Z39.50 search. No new permissions or administrative settings are needed to use this feature.
+
+*To Overlay an Existing Record via Z39.50 Import:*
+
+1) Click *Cataloging -> Import Record from Z39.50*
+
+2) Select at least one *Service* in addition to the *Local Catalog* in the *Service and Credentials* window in the top right panel.
+
+3) Enter search terms in the *Query* window in the top left panel.
+
+4) Click *Search*.
+
+image::media/Overlay_Existing_Record_via_Z39_50_Import1.jpg[]
+
+5) The results will appear in the lower window.
+
+6) Select the record in the local catalog that you wish to overlay.
+
+7) Click *Mark Local Result as Overlay Target*
+
+
+image::media/Overlay_Existing_Record_via_Z39_50_Import2.jpg[]
+
+
+8) A confirmation message appears. Click *OK*.
+ +9) Select the record that you want to use to replace the existing catalog record. + +10) Click *Overlay*. + + +image::media/Overlay_Existing_Record_via_Z39_50_Import3.jpg[] + + +11) The record that you selected will open in the MARC Editor. Make any desired changes to the record, and click *Overlay Record*. + +image::media/Overlay_Existing_Record_via_Z39_50_Import4.jpg[] + + +12) The catalog record that you want to overlay will appear in a new window. Review the MARC record to verify that you are overlaying the correct catalog record. + +13) If the correct record appears, click *Overlay*. + + +image::media/Overlay_Existing_Record_via_Z39_50_Import5.jpg[] + +14) A confirmation message will appear to confirm that you have overlaid the record. Click *OK*. + +15) The screen will refresh in the OPAC View to show that the record has been overlaid. + + +image::media/Overlay_Existing_Record_via_Z39_50_Import6.jpg[] diff --git a/docs/modules/cataloging/pages/record_buckets.adoc b/docs/modules/cataloging/pages/record_buckets.adoc new file mode 100644 index 0000000000..8b46717cd6 --- /dev/null +++ b/docs/modules/cataloging/pages/record_buckets.adoc @@ -0,0 +1,129 @@ += Record Buckets = +:toc: + +== Introduction == + +Record buckets are containers for MARC records. Once records are in a bucket, you can take +various types of actions, including: + +* Editing all the records at once using the MARC Batch Editor. +* Deleting all the records in the bucket. +* Merging all the records in the bucket. +* Downloading the MARC files for all records in the bucket, so you can edit them in another +program like http://marcedit.reeset.net[MARCEdit]. + +== Creating Record Buckets == + +. Click on _Cataloging_ -> _Record Buckets_. +. On the _Buckets_ menu, click _New Bucket_. +. Give the bucket a name and (optionally) a description. + +== Adding Records to a Bucket == + +=== From the Record Bucket Interface === +. Click on _Cataloging_ -> _Record Buckets_. +. 
On the _Buckets_ menu, choose the bucket that you'd like to add records to. +. Go to the _Record Query_ tab. +. Enter your query into the _Record Query_ box. +. Select the records you would like to add. +. On the _Actions_ menu, click _Add to Bucket_. + +.Advanced record queries +**** + +The _Record Query_ tab allows some advanced search functionality through the use of search keys, +which can be combined with one another. + +.Record Bucket search keys +[options="header"] +|=================== +|Search key |Abbreviated version |Usage example |Description +|author: |au: |au:Anzaldua |An author, creator, or contributor +|available: | |available:yes |Limits to available items. There is no way to limit to _unavailable_ items +|keyword: |kw: |kw:Schirmer |A keyword +|lang: | |lang:Spanish |A language +|series: |se: |se:avatar last airbender |A series title +|site: | |site:LIB3 |The shortname of the library/system/consortium you'd like to search +|subject: |su: |su:open source software |A subject +|subject\|geographic: | |subject\|geographic:Uruguay |A geographic subject +|title: |ti: |ti:Harry Potter |Title proper or alternate title +|title\|proper: | |title\|proper:Harry Potter |Title proper taken from 245 +|=================== + +You can combine these in the same query, e.g. `ti:borderlands au:anzaldua available:yes`. However, with the exception of the _lang_ search key, +you should not use the same search key twice in one query. + +**** + +[TIP] +You can use the same boolean operator symbols that are used in the OPAC (_||_ for boolean OR, _&&_ for boolean AND, and _-_ for boolean NOT). + + +== Bibliographic Record Merging and Overlay == + +Catalogers can merge or overlay records in record buckets or using records obtained from a Z39.50 service. + +=== Merge Records in Record Buckets === + +1. Click *Cataloging>Record Buckets*. +2. Create and/or select a record bucket. +3. Select the records that you want to merge, and click *Actions>Merge Selected Records*. 
+ +image::media/marcoverlay1.png[] + +4. The Merge Selected Records interface appears. +5. The records to be merged appear on the right side of the screen. Click *Use as Lead Record* to select a lead record from those that need to be merged. + +image::media/marcoverlay2.png[] + +6. Select a merge profile from the drop down box. + +image::media/marcoverlay3.png[] + +7. After you select the profile, you can preview the changes that will be made to the record. + +image::media/marcoverlay4.png[] + +8. You can change the merge profile at any time; after doing so, the result of the merge will be recalculated. The merge result will also be recalculated after editing the lead record, changing which record is to be used as lead, or removing a record from consideration. +9. When you are satisfied that you have selected the correct merge profile, click the *Merge* button in the bottom right corner. +10. Note that merge profiles that contain a preserve field specification are not available to be chosen in this interface, as they would have the effect of reversing which bibliographic record is considered the target of the merge. + +=== Track Record Merges === + +When two or more bib records are merged in a record bucket, all records involved are stamped with a new merge_date value. For any bib record, this field indicates the last time it was involved in a merge. At the same time, all subordinate records (i.e. those deleted as a product of the merge) are stamped with a merged_to value indicating which bib record the source record was merged with. + +In the browser client bib record display, a warning alert now appears along the top of the page (below the Deleted alert) indicating when a record was used in a merge, when it was merged, and which record it was merged with, rendered as a link to the target record. + +image::media/merge_tracking.png[merge message with date] + +=== Merge Records Using Z39.50 === + +1. Search for a record in the catalog that you want to overlay. +2. 
Select the record, and click *MARC View*. +3. Select *Mark for: Overlay Target*. + +image::media/marcoverlay5.png[] + +4. Click *Cataloging>Import Record from Z39.50*. +5. Search for the lead record that you want to overlay within the Z39.50 interface. +6. Select the desired record, and click *Overlay*. + +image::media/marcoverlay6.png[] + +7. The record that you have targeted to be overlaid, and the new record, appear side by side. + +image::media/marcoverlay7.png[] + +8. You can edit the lead record before you overlay the target. To edit the record, click the *Edit Z39.50 Record* button above the lead record. +9. The MARC editor will appear. You can make your changes in the MARC editor, or you can select the *Flat Text Editor* to make changes. After you have edited the record, click *Modify* in the top right corner, and then *Use Edits* in the bottom right corner. Note that the record you are editing is the version from the Z39.50 server, not including any changes that would be made as a result of applying the selected merge profile. +10. You will return to the side-by-side comparison of the records and then can proceed with the overlay. +11. Once you are satisfied with the record that you want to overlay, select a merge profile from the drop down box, *Choose merge profile*. +12. Click *Overlay*. The overlay will occur, and you will be taken back to the Z39.50 interface. +13. Note that the staff client remembers the last merge overlay profile that you selected, so the next time that you open the interface, it will default to that profile. Simply change the profile to make a different selection. +14. Also note that when the merge profile is applied, the Z39.50 record acts as the target of the merge. For example, if your merge profile adds 650 fields, those 650 fields are brought over from the record that already exists in the Evergreen database (i.e., the one that you are overlaying from Z39.50). +15. 
Also note that merge profiles that contain a preserve field specification are not available to be chosen in this interface, as they would have the effect of reversing which bibliographic record is considered the target of the merge. + +=== New Admin Settings === + +1. Go to *Admin>Local Administration>Library Settings Editor>Upload Default Merge Profile (Z39.50 and Record Buckets)*. +2. Select a default merge profile, and click *Update Setting*. The merge profiles that appear in this drop down box are those that are created in *MARC Batch Import/Export*. Note that catalogers will only see merge profiles that are allowed by their org unit and permissions. diff --git a/docs/modules/cataloging/pages/specific_variable_fields.adoc b/docs/modules/cataloging/pages/specific_variable_fields.adoc new file mode 100644 index 0000000000..d059fec61b --- /dev/null +++ b/docs/modules/cataloging/pages/specific_variable_fields.adoc @@ -0,0 +1,7 @@ += Specific fields = +:toc: + +== 264 == + +The Public Catalog displays tag 264 information for Publisher, Producer, Distributor, Manufacturer, +and Copyright within a full bib record's summary. diff --git a/docs/modules/cataloging/pages/volcopy_editor.adoc b/docs/modules/cataloging/pages/volcopy_editor.adoc new file mode 100644 index 0000000000..261833335f --- /dev/null +++ b/docs/modules/cataloging/pages/volcopy_editor.adoc @@ -0,0 +1,82 @@ += Using the Holdings Editor = +:toc: +indexterm:[copies,editing] +indexterm:[items,editing] +indexterm:[call numbers,editing] +indexterm:[volumes,editing] +indexterm:[holdings editor] +[[holdings_editor]] + +The Holdings Editor is the tool where you can edit all holdings data. + +== Specific fields == + +=== Acquisitions Cost === +indexterm:[acquisitions cost] + +This field is populated with the invoiced cost of the originating acquisition. +This field will be empty until its originating acquisition is connected to an +invoice. 
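The Acquisitions Cost rule above (the field stays empty until the originating acquisition is connected to an invoice) can be sketched as a tiny model. This is a hypothetical illustration only; the class and field names are invented and do not reflect Evergreen's actual schema or API.

```python
# Simplified, hypothetical model of the Acquisitions Cost rule described
# above. None of these names come from Evergreen itself.
from dataclasses import dataclass
from decimal import Decimal
from typing import Optional

@dataclass
class Acquisition:
    invoiced_cost: Optional[Decimal] = None  # set once an invoice is entered

@dataclass
class Item:
    acquisition: Optional[Acquisition] = None

    @property
    def acquisitions_cost(self) -> Optional[Decimal]:
        # Empty until the originating acquisition has an invoiced cost.
        if self.acquisition is None:
            return None
        return self.acquisition.invoiced_cost

item = Item(acquisition=Acquisition())
print(item.acquisitions_cost)   # -> None (not yet invoiced)

item.acquisition.invoiced_cost = Decimal("24.99")
print(item.acquisitions_cost)   # -> 24.99
```

The point of the sketch is the one-way dependency: the item never stores a cost of its own, so the field only reads a value once the invoice side supplies one.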
+ +=== Item Number === +indexterm:[copy number] +indexterm:[item number] + +If you have multiple copies of the same item, you may want to +assign them item numbers to help distinguish them. If you do +not include an item number in this field, Evergreen will assign your +item a default item number of 1. + +== Accessing the Holdings Editor by barcode == + +. Click *Search -> Search for Items by Barcode* +. Scan your barcode. +. Right click on the entry in the grid. +. Click *Edit -> Call Numbers and Items* on the actions menu that appears. + +== Accessing the holdings editor from a catalog record == + +The bibliographic record detail page displays library holdings, including the call number, shelving location, and item barcode. Within the +staff client, the holdings list displays a column next to the item barcode(s) containing two links, *view* and *edit*. + +image::media/copy_edit_link_1.jpg[Copy Edit Link] + +Clicking on the *view* link opens the *Item Status* screen for that specific item. + +Clicking on the *edit* link opens the *Holdings Editor* screen for that specific item. + +The *edit* link will only be exposed next to copies when the user has the *UPDATE_COPY* permission at the copy owning or circulating library. + +== Hiding Fields in the Holdings Editor == + + +A user may hide specific fields in the holdings editor if these fields are not used for cataloging in their organization. Hiding fields that are not used by your organization helps to reduce confusion among staff and also declutters the holdings editor screen. + +To hide one or more fields from the holdings editor: + +. Retrieve the record. ++ +[NOTE] +=================================================================================== +You can retrieve records in many ways, including: + +* If you know its database ID, enter it into Cataloging > Retrieve Bib Record by ID. + +* If you know its control number, enter it into Cataloging > Retrieve Bib Record by TCN. + +* Searching in the catalog. 
+ +* Clicking on a link from the Acquisitions or Serials modules. +=================================================================================== ++ +. Select the *Add Holdings* button. The *Holdings Editor* will display. + +. In the Holdings Editor, select the *Defaults* tab. ++ +image::media/Holdings_Editor_Defaults_Tab.png[Holdings editor defaults tab] ++ +. On the Defaults tab, uncheck the boxes for the field(s) that you wish to hide. It is not necessary to save this screen; changes are saved automatically. ++ +image::media/Holdings_Editor_Hide_Display_Defaults.png[Holdings editor display defaults with deselected fields] ++ +. Select the *Edit* tab; the de-selected fields no longer appear on the holdings editor. diff --git a/docs/modules/cataloging/pages/z39.50_search_enhancements.adoc b/docs/modules/cataloging/pages/z39.50_search_enhancements.adoc new file mode 100644 index 0000000000..97a139c1e4 --- /dev/null +++ b/docs/modules/cataloging/pages/z39.50_search_enhancements.adoc @@ -0,0 +1,102 @@ += Z39.50 Search Enhancements = +:toc: + +*Abstract* + +As of Evergreen version 2.5, you can search multiple Z39.50 sources simultaneously from record buckets. Using this feature, you can match records from Z39.50 sources to catalog records in your bucket and import the Z39.50 records via Vandelay. + + +*Administration* + +The following administrative interfaces enable you to configure Z39.50 search parameters. + + + +*Z39.50 Index Field Maps* + +Click *Administration* -> *Server Administration* -> *Z39.50 Index Field Maps* to map bib record indexes (metabib fields and record attributes) in your catalog records to Z39.50 search attributes. Metabib fields are typically free-form fields found in the body of a catalog record, while record attributes typically have only one value and are often found in the leader. + +You can map a metabib field or a record attribute to a Z39.50 attribute or a Z39.50 attribute type. 
To map a specific field in your catalog record to a specific field in a chosen Z39.50 source, you should map to a Z39.50 attribute. For example, if you want the Personal Author in your catalog record to map to the Author field when searching the Library of Congress, then you should do the following: + +. Click *New* or double-click to edit an existing map. + +. Select the *Metabib Field* from the drop down menu. + +. Select the appropriate source and field from the *Z39.50 Attribute* drop down menu. + +. Click *Save*. + + +Alternatively, if you want the Personal Author in your catalog record to map to the generic author field of any Z39.50 source, then you should do the following: + +. Click *New* or double-click to edit an existing map. + +. Select the *Metabib Field* from the drop down menu. + +. Select the appropriate heading from the *Z39.50 Attribute Type* drop down menu. + +. Click *Save*. + + + +*Z39.50 Servers* + +Click *Admin* -> *Server Admin* -> *Z39.50 Servers* to input your Z39.50 server. Click the hyperlinked name of any server to view the Z39.50 search attribute types and settings. These settings describe how the search values (from a metabib field or record attribute) are translated into Z39.50 searches. + + + + +*Apply Quality Sets to Z39.50 Sources* + +From this interface, you can rank the quality of incoming search results according to the match set that you have established and their Z39.50 point of origin. By applying a quality score, you tell Evergreen to merge the highest quality records into the catalog. + +. Click *Cataloging* -> *MARC Batch Import/Export*. + +. Click *Record Match Sets*. Match Sets specify the MARC attributes, tags, and subfields that you want Evergreen to use to identify matches between catalog and incoming records. + +. Rank the quality of the records from Z39.50 sources by adding quality metrics for the match set. 
Click *MARC Tag and Subfield*, and enter the 901z tag and subfield, specify the Z39.50 source, and enter a quality metric. Source quality increases as the numeric quality increases. + +image::media/Locate_Z39_50_Matches4.jpg[Locate_Z39.50_Matches4] + + + +*Org Unit Settings* + +Org Unit settings can be set for your local branch, your system, or your consortium. To access these settings, click *Administration* -> *Local Administration* -> *Library Settings Editor* -> *Maximum Parallel Z39.50 Batch Searches*. + +Two new settings control the Z39.50 search enhancements. + +. Maximum Parallel Z39.50 Batch Searches - This setting enables you to set the maximum number of Z39.50 searches that can be in-flight at any given time when performing batch Z39.50 searches. The default value is five (5), which means that Evergreen will perform 5 searches at a given time regardless of the number of sources selected. The searches will be divided between the sources selected. Thus, if you maintain this default and perform a search using two Z39.50 sources, Evergreen will conduct five searches, shared between the two sources. + +. Maximum Z39.50 Batch Search Results - This setting enables you to set the maximum number of search results to retrieve and queue for each record and Z39.50 source combination during batch Z39.50 searches. The default value is five (5). + + + +*Matching Records in Buckets with Records from Z39.50 Sources* + +. Add records to a bucket. + +. Click *Bucket Actions* -> *Locate Z39.50 Matches*. A pop-up window will appear. + +. Select a *Z39.50 Server(s)*. + +. Select a *Z39.50 Search Index(es)*. Note that selecting multiple checkboxes will AND the search indexes. + +. Select a Vandelay queue from the drop down menu to which you will add your results, or create a queue by typing its name in the empty field. + +. Select a *Match Set*. The Match Set is configured in Vandelay and, in this instance, will only be used to compare the Z39.50 results with the records in your bucket. + +. 
Click *Perform Search*. + +image::media/Locate_Z39_50_Matches1.jpg[Locate_Z39.50_Matches1] + +. Status information will appear, including the number of records in the bucket that were searched, the matches that were found, and the progress of the search. When the search is complete, click *Open Queue*. + +image::media/Locate_Z39_50_Matches2.jpg[Locate_Z39.50_Matches2] + +. The Vandelay Queue will display. Matching records are identified in the *Matches* column. From this interface, import records according to your normal procedure. To merge the incoming records with the existing catalog records, choose one of the merge options from the drop down menu, click *Merge on Best Match*, and then click *Import*. + +image::media/Locate_Z39_50_Matches3.jpg[Locate_Z39.50_Matches3] + +. The records from the Z39.50 search will merge with the catalog records. NOTE: A new column has been added to this interface to identify the Z39.50 source. When records are imported to the Vandelay queue via a record bucket, Evergreen tags the Z39.50 source and enters the data into the $901z. 
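The $901z tagging described above can be illustrated with a short sketch. This is hypothetical Python: the sample 901 field below (in MARC "breaker" plain-text form) is invented for illustration, and the only detail taken from this section is the convention that subfield $z carries the Z39.50 source name.

```python
# Minimal sketch of pulling a Z39.50 source name back out of a 901 field
# written in MARC "breaker" (plain-text) form. The sample field content
# is invented; only the "$z holds the Z39.50 source" convention comes
# from the documentation above.
import re

def z3950_source(breaker_field: str):
    """Return the $z subfield value from a 901 breaker line, or None."""
    match = re.search(r"\$z([^$]+)", breaker_field)
    return match.group(1) if match else None

sample = "=901  \\\\$aocn123456789$zloc"   # hypothetical queued-record field
print(z3950_source(sample))   # -> loc
```

A queue record without a $z subfield simply yields `None`, which is one way a report script could separate bucket-sourced Z39.50 imports from other records.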
+ diff --git a/docs/modules/circulation/_attributes.adoc b/docs/modules/circulation/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/circulation/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/circulation/assets/images/advanced_holds/Display_Hold_Types_on_Pull_Lists1.jpg b/docs/modules/circulation/assets/images/advanced_holds/Display_Hold_Types_on_Pull_Lists1.jpg new file mode 100644 index 0000000000..26d7952b47 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/Display_Hold_Types_on_Pull_Lists1.jpg differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location1.png b/docs/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location1.png new file mode 100644 index 0000000000..a76b1b2a99 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location1.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location2.jpg b/docs/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location2.jpg new file mode 100644 index 0000000000..605dd896e5 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/custom_hold_pickup_location2.jpg differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-1.png b/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-1.png new file mode 100644 index 0000000000..34c2a0ad55 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-1.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-2.png b/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-2.png new file 
mode 100644 index 0000000000..9fb2a77726 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-2.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-3.png b/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-3.png new file mode 100644 index 0000000000..1f75803846 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-3.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-4.png b/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-4.png new file mode 100644 index 0000000000..eef6c646e5 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-clearing-4.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-1.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-1.png new file mode 100644 index 0000000000..76eb32fc8d Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-1.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-10.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-10.JPG new file mode 100644 index 0000000000..f2fffc8136 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-10.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-10.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-10.png new file mode 100644 index 0000000000..fab2a8be54 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-10.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-11.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-11.JPG new file mode 100644 index 
0000000000..8479b79cda Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-11.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-11.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-11.png new file mode 100644 index 0000000000..89beec91a0 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-11.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-12.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-12.JPG new file mode 100644 index 0000000000..130c370ae1 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-12.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-12.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-12.png new file mode 100644 index 0000000000..9265a57634 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-12.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-13.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-13.JPG new file mode 100644 index 0000000000..030ceee289 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-13.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-13.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-13.png new file mode 100644 index 0000000000..2765f756e3 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-13.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-14.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-14.JPG new file mode 100644 index 0000000000..78fde73166 
Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-14.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-14.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-14.png new file mode 100644 index 0000000000..ee8649480d Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-14.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-15.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-15.JPG new file mode 100644 index 0000000000..e7e5865f3c Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-15.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-15.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-15.png new file mode 100644 index 0000000000..20dd0b454b Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-15.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-16.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-16.png new file mode 100644 index 0000000000..2db6c18230 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-16.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-17.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-17.png new file mode 100644 index 0000000000..5d0e175b55 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-17.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-18.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-18.png new file mode 100644 index 0000000000..2ffe43a89d Binary files /dev/null 
and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-18.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-19.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-19.png new file mode 100644 index 0000000000..c74c47ed6e Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-19.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-2.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-2.JPG new file mode 100644 index 0000000000..382d799e54 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-2.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-2.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-2.png new file mode 100644 index 0000000000..4b7af26bcb Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-2.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-3.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-3.png new file mode 100644 index 0000000000..16ca2674f6 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-3.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-4.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-4.JPG new file mode 100644 index 0000000000..e0fdf1fedb Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-4.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-4.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-4.png new file mode 100644 index 0000000000..34e758d6eb Binary files /dev/null and 
b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-4.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-5.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-5.png new file mode 100644 index 0000000000..96b836f3fe Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-5.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-5_and_6.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-5_and_6.JPG new file mode 100644 index 0000000000..4522ab6fe0 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-5_and_6.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-6.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-6.png new file mode 100644 index 0000000000..f7ce7ebd7f Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-6.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-7.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-7.JPG new file mode 100644 index 0000000000..e8237f3886 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-7.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-7.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-7.png new file mode 100644 index 0000000000..12f7ffef38 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-7.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-8.JPG b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-8.JPG new file mode 100644 index 0000000000..738cfd41e0 Binary files /dev/null and 
b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-8.JPG differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-8.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-8.png new file mode 100644 index 0000000000..bbbff642ba Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-8.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-managing-9.png b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-9.png new file mode 100644 index 0000000000..006b105c88 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-managing-9.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-1.png b/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-1.png new file mode 100644 index 0000000000..6d6b147fc5 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-1.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-2.png b/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-2.png new file mode 100644 index 0000000000..c64fcdb83a Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-2.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-3.png b/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-3.png new file mode 100644 index 0000000000..bfb8ee201d Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-3.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-4.png b/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-4.png new file mode 100644 index 0000000000..fd9201b872 Binary files 
/dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-notifications-4.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-1.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-1.png new file mode 100644 index 0000000000..5f60630e8b Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-1.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-10.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-10.png new file mode 100644 index 0000000000..46fe88a112 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-10.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-11.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-11.png new file mode 100644 index 0000000000..cd1a10201e Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-11.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-2.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-2.png new file mode 100644 index 0000000000..71daec5c6c Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-2.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-3.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-3.png new file mode 100644 index 0000000000..16ca2674f6 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-3.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-4.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-4.png new file mode 100644 index 0000000000..7b0c2e7423 Binary files /dev/null and 
b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-4.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-5.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-5.png new file mode 100644 index 0000000000..7ef50cb46f Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-5.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-6.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-6.png new file mode 100644 index 0000000000..d8e7c3d5f9 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-6.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-7.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-7.png new file mode 100644 index 0000000000..aa0ec912a7 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-7.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-8.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-8.png new file mode 100644 index 0000000000..c354a9d4d9 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-8.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-placing-9.png b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-9.png new file mode 100644 index 0000000000..41fe57efda Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-placing-9.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-pull-1.png b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-1.png new file mode 100644 index 0000000000..1f325e2713 Binary files /dev/null and 
b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-1.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-pull-2.png b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-2.png new file mode 100644 index 0000000000..c4e385115d Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-2.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-pull-3.png b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-3.png new file mode 100644 index 0000000000..494febb84e Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-3.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-pull-4.png b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-4.png new file mode 100644 index 0000000000..5db02f7f17 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-4.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-pull-5.png b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-5.png new file mode 100644 index 0000000000..74829a091b Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-5.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-pull-6.png b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-6.png new file mode 100644 index 0000000000..b022065760 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-6.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds-pull-7.png b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-7.png new file mode 100644 index 0000000000..37651dceb9 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-7.png differ diff --git 
a/docs/modules/circulation/assets/images/advanced_holds/holds-pull-9.png b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-9.png new file mode 100644 index 0000000000..365db4eb99 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds-pull-9.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds_title_options.png b/docs/modules/circulation/assets/images/advanced_holds/holds_title_options.png new file mode 100644 index 0000000000..cd79a155f1 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds_title_options.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds_title_options_adv.png b/docs/modules/circulation/assets/images/advanced_holds/holds_title_options_adv.png new file mode 100644 index 0000000000..2f6dbac2e6 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds_title_options_adv.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds_title_searchresults.png b/docs/modules/circulation/assets/images/advanced_holds/holds_title_searchresults.png new file mode 100644 index 0000000000..5bbae023ba Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds_title_searchresults.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/holds_title_success.png b/docs/modules/circulation/assets/images/advanced_holds/holds_title_success.png new file mode 100644 index 0000000000..0db924f696 Binary files /dev/null and b/docs/modules/circulation/assets/images/advanced_holds/holds_title_success.png differ diff --git a/docs/modules/circulation/assets/images/advanced_holds/place-another-hold-1.png b/docs/modules/circulation/assets/images/advanced_holds/place-another-hold-1.png new file mode 100644 index 0000000000..130ec10964 Binary files /dev/null and 
b/docs/modules/circulation/assets/images/advanced_holds/place-another-hold-1.png differ diff --git a/docs/modules/circulation/assets/images/media/Check_In-Cancel_Transit.png b/docs/modules/circulation/assets/images/media/Check_In-Cancel_Transit.png new file mode 100644 index 0000000000..cf9cdfa404 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/Check_In-Cancel_Transit.png differ diff --git a/docs/modules/circulation/assets/images/media/Display_Hold_Types_on_Pull_Lists1.jpg b/docs/modules/circulation/assets/images/media/Display_Hold_Types_on_Pull_Lists1.jpg new file mode 100644 index 0000000000..26d7952b47 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/Display_Hold_Types_on_Pull_Lists1.jpg differ diff --git a/docs/modules/circulation/assets/images/media/PlaceHold-0.JPG b/docs/modules/circulation/assets/images/media/PlaceHold-0.JPG new file mode 100644 index 0000000000..dc11dbd92d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/PlaceHold-0.JPG differ diff --git a/docs/modules/circulation/assets/images/media/PlaceHold-1.JPG b/docs/modules/circulation/assets/images/media/PlaceHold-1.JPG new file mode 100644 index 0000000000..efe099354d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/PlaceHold-1.JPG differ diff --git a/docs/modules/circulation/assets/images/media/PlaceHold-2.JPG b/docs/modules/circulation/assets/images/media/PlaceHold-2.JPG new file mode 100644 index 0000000000..44d70374de Binary files /dev/null and b/docs/modules/circulation/assets/images/media/PlaceHold-2.JPG differ diff --git a/docs/modules/circulation/assets/images/media/PlaceHold-3.JPG b/docs/modules/circulation/assets/images/media/PlaceHold-3.JPG new file mode 100644 index 0000000000..a49ffc2ba1 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/PlaceHold-3.JPG differ diff --git a/docs/modules/circulation/assets/images/media/PlaceHold-4.JPG 
b/docs/modules/circulation/assets/images/media/PlaceHold-4.JPG new file mode 100644 index 0000000000..23d63a69bd Binary files /dev/null and b/docs/modules/circulation/assets/images/media/PlaceHold-4.JPG differ diff --git a/docs/modules/circulation/assets/images/media/PlaceHold-5.JPG b/docs/modules/circulation/assets/images/media/PlaceHold-5.JPG new file mode 100644 index 0000000000..f1a48d22fc Binary files /dev/null and b/docs/modules/circulation/assets/images/media/PlaceHold-5.JPG differ diff --git a/docs/modules/circulation/assets/images/media/PlaceHold-6.JPG b/docs/modules/circulation/assets/images/media/PlaceHold-6.JPG new file mode 100644 index 0000000000..a5ae4f55fe Binary files /dev/null and b/docs/modules/circulation/assets/images/media/PlaceHold-6.JPG differ diff --git a/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices1.jpg b/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices1.jpg new file mode 100644 index 0000000000..07392f60e0 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices1.jpg differ diff --git a/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices2.jpg b/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices2.jpg new file mode 100644 index 0000000000..8ff0d82ce8 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices2.jpg differ diff --git a/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices3.jpg b/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices3.jpg new file mode 100644 index 0000000000..69d9ab0c7b Binary files /dev/null and b/docs/modules/circulation/assets/images/media/Triggered_Events_and_Notices3.jpg differ diff --git a/docs/modules/circulation/assets/images/media/backdate_checkin_web_client.png b/docs/modules/circulation/assets/images/media/backdate_checkin_web_client.png new file mode 100644 index 
0000000000..6784318ec1 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/backdate_checkin_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/backdate_post_checkin_web_client.png b/docs/modules/circulation/assets/images/media/backdate_post_checkin_web_client.png new file mode 100644 index 0000000000..ca98a613bd Binary files /dev/null and b/docs/modules/circulation/assets/images/media/backdate_post_checkin_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/backdate_post_date_web_client.png b/docs/modules/circulation/assets/images/media/backdate_post_date_web_client.png new file mode 100644 index 0000000000..aea169d1ca Binary files /dev/null and b/docs/modules/circulation/assets/images/media/backdate_post_date_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/backdate_red_web_client.png b/docs/modules/circulation/assets/images/media/backdate_red_web_client.png new file mode 100644 index 0000000000..dfa4bd01ea Binary files /dev/null and b/docs/modules/circulation/assets/images/media/backdate_red_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/booking-capture-1_web_client.png b/docs/modules/circulation/assets/images/media/booking-capture-1_web_client.png new file mode 100644 index 0000000000..032f440da1 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/booking-capture-1_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/booking-capture-2_web_client.png b/docs/modules/circulation/assets/images/media/booking-capture-2_web_client.png new file mode 100644 index 0000000000..3288f96937 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/booking-capture-2_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/booking-capture-3.png b/docs/modules/circulation/assets/images/media/booking-capture-3.png new file mode 100644 index 
0000000000..2de6c8a5eb Binary files /dev/null and b/docs/modules/circulation/assets/images/media/booking-capture-3.png differ diff --git a/docs/modules/circulation/assets/images/media/check_in_menu_web_client.png b/docs/modules/circulation/assets/images/media/check_in_menu_web_client.png new file mode 100644 index 0000000000..0b52f67b61 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/check_in_menu_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/checkin_barcode_web_client.png b/docs/modules/circulation/assets/images/media/checkin_barcode_web_client.png new file mode 100644 index 0000000000..e2156ae538 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/checkin_barcode_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/checkinmodifiers-with-inventory2.png b/docs/modules/circulation/assets/images/media/checkinmodifiers-with-inventory2.png new file mode 100644 index 0000000000..753010d8d6 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/checkinmodifiers-with-inventory2.png differ diff --git a/docs/modules/circulation/assets/images/media/checkout_item_barcode_web_client.png b/docs/modules/circulation/assets/images/media/checkout_item_barcode_web_client.png new file mode 100644 index 0000000000..358844e1dc Binary files /dev/null and b/docs/modules/circulation/assets/images/media/checkout_item_barcode_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/checkout_menu_web_client.png b/docs/modules/circulation/assets/images/media/checkout_menu_web_client.png new file mode 100644 index 0000000000..0b52f67b61 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/checkout_menu_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-11_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-11_web_client.png new file 
mode 100644 index 0000000000..02d07f0bfb Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-11_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-12.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records-12.JPG new file mode 100644 index 0000000000..7f690425fb Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-12.JPG differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-16.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records-16.JPG new file mode 100644 index 0000000000..6762ac4563 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-16.JPG differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-18_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-18_web_client.png new file mode 100644 index 0000000000..bb5dcdf299 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-18_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-19_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-19_web_client.png new file mode 100644 index 0000000000..4c27f0122b Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-19_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-1a_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-1a_web_client.png new file mode 100644 index 0000000000..944dc0eb4d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-1a_web_client.png differ diff --git 
a/docs/modules/circulation/assets/images/media/circulation_patron_records-1b_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-1b_web_client.png new file mode 100644 index 0000000000..bc24f1327c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-1b_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-20.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-20.png new file mode 100644 index 0000000000..dfdd9161c5 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-20.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-23_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-23_web_client.png new file mode 100644 index 0000000000..8b113e5969 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-23_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-24_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-24_web_client.png new file mode 100644 index 0000000000..e8c4199a1c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-24_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-25_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-25_web_client.png new file mode 100644 index 0000000000..88786c23ed Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-25_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-26_web_client.png 
b/docs/modules/circulation/assets/images/media/circulation_patron_records-26_web_client.png new file mode 100644 index 0000000000..45aa347ae3 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-26_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-2_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-2_web_client.png new file mode 100644 index 0000000000..e47c0f0022 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-2_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-4.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records-4.JPG new file mode 100644 index 0000000000..ef38851158 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-4.JPG differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-5.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records-5.JPG new file mode 100644 index 0000000000..da9e3d7204 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-5.JPG differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-6.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records-6.JPG new file mode 100644 index 0000000000..326fa707a0 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-6.JPG differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records-8.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records-8.JPG new file mode 100644 index 0000000000..9fe45c782e Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-8.JPG differ diff --git 
a/docs/modules/circulation/assets/images/media/circulation_patron_records-9_web_client.png b/docs/modules/circulation/assets/images/media/circulation_patron_records-9_web_client.png new file mode 100644 index 0000000000..78844b983e Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records-9_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records_13.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records_13.JPG new file mode 100644 index 0000000000..7ef41992c6 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records_13.JPG differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records_14.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records_14.JPG new file mode 100644 index 0000000000..6a1b64a9c4 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records_14.JPG differ diff --git a/docs/modules/circulation/assets/images/media/circulation_patron_records_15.JPG b/docs/modules/circulation/assets/images/media/circulation_patron_records_15.JPG new file mode 100644 index 0000000000..f5e0dd659b Binary files /dev/null and b/docs/modules/circulation/assets/images/media/circulation_patron_records_15.JPG differ diff --git a/docs/modules/circulation/assets/images/media/claimed_date_web_client.png b/docs/modules/circulation/assets/images/media/claimed_date_web_client.png new file mode 100644 index 0000000000..0de9b1bdc4 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/claimed_date_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/cr_section_web_client.png b/docs/modules/circulation/assets/images/media/cr_section_web_client.png new file mode 100644 index 0000000000..bdf80b9f38 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/cr_section_web_client.png 
differ diff --git a/docs/modules/circulation/assets/images/media/custom_hold_pickup_location1.png b/docs/modules/circulation/assets/images/media/custom_hold_pickup_location1.png new file mode 100644 index 0000000000..a76b1b2a99 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/custom_hold_pickup_location1.png differ diff --git a/docs/modules/circulation/assets/images/media/custom_hold_pickup_location2.jpg b/docs/modules/circulation/assets/images/media/custom_hold_pickup_location2.jpg new file mode 100644 index 0000000000..605dd896e5 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/custom_hold_pickup_location2.jpg differ diff --git a/docs/modules/circulation/assets/images/media/due_date_display_web_client.png b/docs/modules/circulation/assets/images/media/due_date_display_web_client.png new file mode 100644 index 0000000000..e0e2eff76f Binary files /dev/null and b/docs/modules/circulation/assets/images/media/due_date_display_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/edit_due_date_action_web_client.png b/docs/modules/circulation/assets/images/media/edit_due_date_action_web_client.png new file mode 100644 index 0000000000..5f622efe4c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/edit_due_date_action_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/ereceipts1_web_client.PNG b/docs/modules/circulation/assets/images/media/ereceipts1_web_client.PNG new file mode 100644 index 0000000000..6db93af0f7 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/ereceipts1_web_client.PNG differ diff --git a/docs/modules/circulation/assets/images/media/ereceipts2_web_client.PNG b/docs/modules/circulation/assets/images/media/ereceipts2_web_client.PNG new file mode 100644 index 0000000000..981b1c8d8d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/ereceipts2_web_client.PNG differ diff --git 
a/docs/modules/circulation/assets/images/media/ereceipts3_web_client.PNG b/docs/modules/circulation/assets/images/media/ereceipts3_web_client.PNG new file mode 100644 index 0000000000..84e70f6e5c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/ereceipts3_web_client.PNG differ diff --git a/docs/modules/circulation/assets/images/media/ereceipts4_web_client.PNG b/docs/modules/circulation/assets/images/media/ereceipts4_web_client.PNG new file mode 100644 index 0000000000..8f94d97cf5 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/ereceipts4_web_client.PNG differ diff --git a/docs/modules/circulation/assets/images/media/ereceipts5_web_client.PNG b/docs/modules/circulation/assets/images/media/ereceipts5_web_client.PNG new file mode 100644 index 0000000000..fd2ea05e88 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/ereceipts5_web_client.PNG differ diff --git a/docs/modules/circulation/assets/images/media/ereceipts6_web_client.PNG b/docs/modules/circulation/assets/images/media/ereceipts6_web_client.PNG new file mode 100644 index 0000000000..74de1e59d5 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/ereceipts6_web_client.PNG differ diff --git a/docs/modules/circulation/assets/images/media/holds-clearing-1.png b/docs/modules/circulation/assets/images/media/holds-clearing-1.png new file mode 100644 index 0000000000..34c2a0ad55 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-clearing-1.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-clearing-2.png b/docs/modules/circulation/assets/images/media/holds-clearing-2.png new file mode 100644 index 0000000000..9fb2a77726 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-clearing-2.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-clearing-3.png b/docs/modules/circulation/assets/images/media/holds-clearing-3.png new file 
mode 100644 index 0000000000..1f75803846 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-clearing-3.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-clearing-4.png b/docs/modules/circulation/assets/images/media/holds-clearing-4.png new file mode 100644 index 0000000000..eef6c646e5 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-clearing-4.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-1.png b/docs/modules/circulation/assets/images/media/holds-managing-1.png new file mode 100644 index 0000000000..76eb32fc8d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-1.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-10.JPG b/docs/modules/circulation/assets/images/media/holds-managing-10.JPG new file mode 100644 index 0000000000..f2fffc8136 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-10.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-11.JPG b/docs/modules/circulation/assets/images/media/holds-managing-11.JPG new file mode 100644 index 0000000000..8479b79cda Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-11.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-12.JPG b/docs/modules/circulation/assets/images/media/holds-managing-12.JPG new file mode 100644 index 0000000000..130c370ae1 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-12.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-13.JPG b/docs/modules/circulation/assets/images/media/holds-managing-13.JPG new file mode 100644 index 0000000000..030ceee289 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-13.JPG differ diff --git 
a/docs/modules/circulation/assets/images/media/holds-managing-14.JPG b/docs/modules/circulation/assets/images/media/holds-managing-14.JPG new file mode 100644 index 0000000000..78fde73166 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-14.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-15.JPG b/docs/modules/circulation/assets/images/media/holds-managing-15.JPG new file mode 100644 index 0000000000..e7e5865f3c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-15.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-16.png b/docs/modules/circulation/assets/images/media/holds-managing-16.png new file mode 100644 index 0000000000..2db6c18230 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-16.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-17.png b/docs/modules/circulation/assets/images/media/holds-managing-17.png new file mode 100644 index 0000000000..5d0e175b55 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-17.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-18.png b/docs/modules/circulation/assets/images/media/holds-managing-18.png new file mode 100644 index 0000000000..2ffe43a89d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-18.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-19.png b/docs/modules/circulation/assets/images/media/holds-managing-19.png new file mode 100644 index 0000000000..c74c47ed6e Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-19.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-2.JPG b/docs/modules/circulation/assets/images/media/holds-managing-2.JPG new file mode 100644 index 0000000000..382d799e54 
Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-2.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-4.JPG b/docs/modules/circulation/assets/images/media/holds-managing-4.JPG new file mode 100644 index 0000000000..e0fdf1fedb Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-4.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-5_and_6.JPG b/docs/modules/circulation/assets/images/media/holds-managing-5_and_6.JPG new file mode 100644 index 0000000000..4522ab6fe0 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-5_and_6.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-7.JPG b/docs/modules/circulation/assets/images/media/holds-managing-7.JPG new file mode 100644 index 0000000000..e8237f3886 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-7.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-8.JPG b/docs/modules/circulation/assets/images/media/holds-managing-8.JPG new file mode 100644 index 0000000000..738cfd41e0 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-8.JPG differ diff --git a/docs/modules/circulation/assets/images/media/holds-managing-9.png b/docs/modules/circulation/assets/images/media/holds-managing-9.png new file mode 100644 index 0000000000..006b105c88 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-managing-9.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-notifications-1.png b/docs/modules/circulation/assets/images/media/holds-notifications-1.png new file mode 100644 index 0000000000..6d6b147fc5 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-notifications-1.png differ diff --git 
a/docs/modules/circulation/assets/images/media/holds-notifications-2.png b/docs/modules/circulation/assets/images/media/holds-notifications-2.png new file mode 100644 index 0000000000..c64fcdb83a Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-notifications-2.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-pull-1.png b/docs/modules/circulation/assets/images/media/holds-pull-1.png new file mode 100644 index 0000000000..1f325e2713 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-pull-1.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-pull-2.png b/docs/modules/circulation/assets/images/media/holds-pull-2.png new file mode 100644 index 0000000000..c4e385115d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-pull-2.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-pull-4.png b/docs/modules/circulation/assets/images/media/holds-pull-4.png new file mode 100644 index 0000000000..5db02f7f17 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-pull-4.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-pull-5.png b/docs/modules/circulation/assets/images/media/holds-pull-5.png new file mode 100644 index 0000000000..74829a091b Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-pull-5.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-pull-6.png b/docs/modules/circulation/assets/images/media/holds-pull-6.png new file mode 100644 index 0000000000..b022065760 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-pull-6.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-pull-7.png b/docs/modules/circulation/assets/images/media/holds-pull-7.png new file mode 100644 index 0000000000..37651dceb9 Binary files /dev/null and 
b/docs/modules/circulation/assets/images/media/holds-pull-7.png differ diff --git a/docs/modules/circulation/assets/images/media/holds-pull-9.png b/docs/modules/circulation/assets/images/media/holds-pull-9.png new file mode 100644 index 0000000000..365db4eb99 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds-pull-9.png differ diff --git a/docs/modules/circulation/assets/images/media/holds_title_options.png b/docs/modules/circulation/assets/images/media/holds_title_options.png new file mode 100644 index 0000000000..cd79a155f1 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds_title_options.png differ diff --git a/docs/modules/circulation/assets/images/media/holds_title_options_adv.png b/docs/modules/circulation/assets/images/media/holds_title_options_adv.png new file mode 100644 index 0000000000..2f6dbac2e6 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds_title_options_adv.png differ diff --git a/docs/modules/circulation/assets/images/media/holds_title_searchresults.png b/docs/modules/circulation/assets/images/media/holds_title_searchresults.png new file mode 100644 index 0000000000..5bbae023ba Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds_title_searchresults.png differ diff --git a/docs/modules/circulation/assets/images/media/holds_title_success.png b/docs/modules/circulation/assets/images/media/holds_title_success.png new file mode 100644 index 0000000000..0db924f696 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/holds_title_success.png differ diff --git a/docs/modules/circulation/assets/images/media/in_house_use_non_cat.png b/docs/modules/circulation/assets/images/media/in_house_use_non_cat.png new file mode 100644 index 0000000000..fd2b2cea60 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/in_house_use_non_cat.png differ diff --git 
a/docs/modules/circulation/assets/images/media/in_house_use_web_client.png b/docs/modules/circulation/assets/images/media/in_house_use_web_client.png new file mode 100644 index 0000000000..0851df52d1 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/in_house_use_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/item_status_altview_web_client.png b/docs/modules/circulation/assets/images/media/item_status_altview_web_client.png new file mode 100644 index 0000000000..624810c9cd Binary files /dev/null and b/docs/modules/circulation/assets/images/media/item_status_altview_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/item_status_barcode_web_client.png b/docs/modules/circulation/assets/images/media/item_status_barcode_web_client.png new file mode 100644 index 0000000000..1dced5021f Binary files /dev/null and b/docs/modules/circulation/assets/images/media/item_status_barcode_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/item_status_list_view_web_client.png b/docs/modules/circulation/assets/images/media/item_status_list_view_web_client.png new file mode 100644 index 0000000000..7ddb694653 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/item_status_list_view_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/item_status_menu_web_client.png b/docs/modules/circulation/assets/images/media/item_status_menu_web_client.png new file mode 100644 index 0000000000..109c331f4d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/item_status_menu_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/items_out_click_web_client.png b/docs/modules/circulation/assets/images/media/items_out_click_web_client.png new file mode 100644 index 0000000000..0320f9d159 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/items_out_click_web_client.png differ 
diff --git a/docs/modules/circulation/assets/images/media/last_few_circs_action_web_client.png b/docs/modules/circulation/assets/images/media/last_few_circs_action_web_client.png new file mode 100644 index 0000000000..b4f698129d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/last_few_circs_action_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/last_few_circs_display_web_client.png b/docs/modules/circulation/assets/images/media/last_few_circs_display_web_client.png new file mode 100644 index 0000000000..4c6e8b3475 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/last_few_circs_display_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/long_overdue1.png b/docs/modules/circulation/assets/images/media/long_overdue1.png new file mode 100644 index 0000000000..3e66d054e1 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/long_overdue1.png differ diff --git a/docs/modules/circulation/assets/images/media/long_overdue2.png b/docs/modules/circulation/assets/images/media/long_overdue2.png new file mode 100644 index 0000000000..fe770a418e Binary files /dev/null and b/docs/modules/circulation/assets/images/media/long_overdue2.png differ diff --git a/docs/modules/circulation/assets/images/media/lost_section_web_client.png b/docs/modules/circulation/assets/images/media/lost_section_web_client.png new file mode 100644 index 0000000000..a21edae3c8 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/lost_section_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/mark_claims_returned_web_client.png b/docs/modules/circulation/assets/images/media/mark_claims_returned_web_client.png new file mode 100644 index 0000000000..1fd625c00c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/mark_claims_returned_web_client.png differ diff --git 
a/docs/modules/circulation/assets/images/media/mark_lost_web_client.png b/docs/modules/circulation/assets/images/media/mark_lost_web_client.png new file mode 100644 index 0000000000..dc4fa0bf17 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/mark_lost_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_checkin.png b/docs/modules/circulation/assets/images/media/offline_checkin.png new file mode 100644 index 0000000000..152b41c752 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_checkin.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_checkout.png b/docs/modules/circulation/assets/images/media/offline_checkout.png new file mode 100644 index 0000000000..a8d2d4b2bd Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_checkout.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_clear_pending.png b/docs/modules/circulation/assets/images/media/offline_clear_pending.png new file mode 100644 index 0000000000..f06014d13b Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_clear_pending.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_exceptions.png b/docs/modules/circulation/assets/images/media/offline_exceptions.png new file mode 100644 index 0000000000..006cfcfc4f Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_exceptions.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_homepage_loggedin.png b/docs/modules/circulation/assets/images/media/offline_homepage_loggedin.png new file mode 100644 index 0000000000..d961918ef9 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_homepage_loggedin.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_homepage_loggedout.png 
b/docs/modules/circulation/assets/images/media/offline_homepage_loggedout.png new file mode 100644 index 0000000000..29ef316270 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_homepage_loggedout.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_inhouse.png b/docs/modules/circulation/assets/images/media/offline_inhouse.png new file mode 100644 index 0000000000..c2958ba3d0 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_inhouse.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_logout_warning.png b/docs/modules/circulation/assets/images/media/offline_logout_warning.png new file mode 100644 index 0000000000..482bde6cdc Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_logout_warning.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_patron_blocked.png b/docs/modules/circulation/assets/images/media/offline_patron_blocked.png new file mode 100644 index 0000000000..627bedcae4 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_patron_blocked.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_patron_registration.png b/docs/modules/circulation/assets/images/media/offline_patron_registration.png new file mode 100644 index 0000000000..f1d46b98d8 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_patron_registration.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_pending_xacts.png b/docs/modules/circulation/assets/images/media/offline_pending_xacts.png new file mode 100644 index 0000000000..61610325be Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_pending_xacts.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_processing_complete.png b/docs/modules/circulation/assets/images/media/offline_processing_complete.png new file mode 
100644 index 0000000000..9cbc24d755 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_processing_complete.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_renew.png b/docs/modules/circulation/assets/images/media/offline_renew.png new file mode 100644 index 0000000000..b0f6a717f8 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_renew.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_session_list.png b/docs/modules/circulation/assets/images/media/offline_session_list.png new file mode 100644 index 0000000000..5caaceab37 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_session_list.png differ diff --git a/docs/modules/circulation/assets/images/media/offline_unprocessed.png b/docs/modules/circulation/assets/images/media/offline_unprocessed.png new file mode 100644 index 0000000000..6cd479d8e3 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/offline_unprocessed.png differ diff --git a/docs/modules/circulation/assets/images/media/overdue_checkin_web_client.png b/docs/modules/circulation/assets/images/media/overdue_checkin_web_client.png new file mode 100644 index 0000000000..aaa2f6352c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/overdue_checkin_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/patron_self_registration2.jpg b/docs/modules/circulation/assets/images/media/patron_self_registration2.jpg new file mode 100644 index 0000000000..51da802c7e Binary files /dev/null and b/docs/modules/circulation/assets/images/media/patron_self_registration2.jpg differ diff --git a/docs/modules/circulation/assets/images/media/patron_summary_checkouts_web_client.png b/docs/modules/circulation/assets/images/media/patron_summary_checkouts_web_client.png new file mode 100644 index 0000000000..ea5f6dc6aa Binary files /dev/null and 
b/docs/modules/circulation/assets/images/media/patron_summary_checkouts_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/place-another-hold-1.png b/docs/modules/circulation/assets/images/media/place-another-hold-1.png new file mode 100644 index 0000000000..130ec10964 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/place-another-hold-1.png differ diff --git a/docs/modules/circulation/assets/images/media/precat_web_client.png b/docs/modules/circulation/assets/images/media/precat_web_client.png new file mode 100644 index 0000000000..24628c61e1 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/precat_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/record_in_house_action_web_client.png b/docs/modules/circulation/assets/images/media/record_in_house_action_web_client.png new file mode 100644 index 0000000000..30605db899 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/record_in_house_action_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/renew_action_web_client.png b/docs/modules/circulation/assets/images/media/renew_action_web_client.png new file mode 100644 index 0000000000..0d177f20ad Binary files /dev/null and b/docs/modules/circulation/assets/images/media/renew_action_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/renew_item_calendar_web_client.png b/docs/modules/circulation/assets/images/media/renew_item_calendar_web_client.png new file mode 100644 index 0000000000..5a2e06fdc4 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/renew_item_calendar_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/renew_item_web_client.png b/docs/modules/circulation/assets/images/media/renew_item_web_client.png new file mode 100644 index 0000000000..a81d2d682d Binary files /dev/null and 
b/docs/modules/circulation/assets/images/media/renew_item_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/retrieve_patron_web_client.png b/docs/modules/circulation/assets/images/media/retrieve_patron_web_client.png new file mode 100644 index 0000000000..d1ed320ec4 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/retrieve_patron_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/self-check-admin-login.png b/docs/modules/circulation/assets/images/media/self-check-admin-login.png new file mode 100644 index 0000000000..ed0ec9f587 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self-check-admin-login.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_check_out_1.png b/docs/modules/circulation/assets/images/media/self_check_check_out_1.png new file mode 100644 index 0000000000..ed2220cdb8 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_check_out_1.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_check_out_2.png b/docs/modules/circulation/assets/images/media/self_check_check_out_2.png new file mode 100644 index 0000000000..40fd9b8537 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_check_out_2.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_check_out_3.png b/docs/modules/circulation/assets/images/media/self_check_check_out_3.png new file mode 100644 index 0000000000..79a418af0e Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_check_out_3.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_check_out_4.png b/docs/modules/circulation/assets/images/media/self_check_check_out_4.png new file mode 100644 index 0000000000..4092994542 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_check_out_4.png differ diff 
--git a/docs/modules/circulation/assets/images/media/self_check_check_out_5.png b/docs/modules/circulation/assets/images/media/self_check_check_out_5.png new file mode 100644 index 0000000000..01b50c134a Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_check_out_5.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_check_out_6.png b/docs/modules/circulation/assets/images/media/self_check_check_out_6.png new file mode 100644 index 0000000000..230ed0da1f Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_check_out_6.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_error_1.png b/docs/modules/circulation/assets/images/media/self_check_error_1.png new file mode 100644 index 0000000000..e6df645953 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_error_1.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_view_fines_1.png b/docs/modules/circulation/assets/images/media/self_check_view_fines_1.png new file mode 100644 index 0000000000..106b392ef2 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_view_fines_1.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_view_fines_2.png b/docs/modules/circulation/assets/images/media/self_check_view_fines_2.png new file mode 100644 index 0000000000..17201dc0e6 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_view_fines_2.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_view_holds_1.png b/docs/modules/circulation/assets/images/media/self_check_view_holds_1.png new file mode 100644 index 0000000000..3ad4c354b9 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_view_holds_1.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_view_holds_2.png 
b/docs/modules/circulation/assets/images/media/self_check_view_holds_2.png new file mode 100644 index 0000000000..41b89738c8 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_view_holds_2.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_view_items_out_1.png b/docs/modules/circulation/assets/images/media/self_check_view_items_out_1.png new file mode 100644 index 0000000000..d239f4d1c9 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_view_items_out_1.png differ diff --git a/docs/modules/circulation/assets/images/media/self_check_view_items_out_2.png b/docs/modules/circulation/assets/images/media/self_check_view_items_out_2.png new file mode 100644 index 0000000000..5323ba360b Binary files /dev/null and b/docs/modules/circulation/assets/images/media/self_check_view_items_out_2.png differ diff --git a/docs/modules/circulation/assets/images/media/specify_due_date1_web_client.png b/docs/modules/circulation/assets/images/media/specify_due_date1_web_client.png new file mode 100644 index 0000000000..f28b921c1b Binary files /dev/null and b/docs/modules/circulation/assets/images/media/specify_due_date1_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/staff-penalties-1_web_client.png b/docs/modules/circulation/assets/images/media/staff-penalties-1_web_client.png new file mode 100644 index 0000000000..35b11cc8eb Binary files /dev/null and b/docs/modules/circulation/assets/images/media/staff-penalties-1_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/staff-penalties-2_web_client.png b/docs/modules/circulation/assets/images/media/staff-penalties-2_web_client.png new file mode 100644 index 0000000000..cd13b6f463 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/staff-penalties-2_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/staff-penalties-3_web_client.png 
b/docs/modules/circulation/assets/images/media/staff-penalties-3_web_client.png new file mode 100644 index 0000000000..3214e2550c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/staff-penalties-3_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/staff-penalties-4_web_client.png b/docs/modules/circulation/assets/images/media/staff-penalties-4_web_client.png new file mode 100644 index 0000000000..f9e7520746 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/staff-penalties-4_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/staff-penalties-5_web_client.png b/docs/modules/circulation/assets/images/media/staff-penalties-5_web_client.png new file mode 100644 index 0000000000..c1079c7568 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/staff-penalties-5_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/staff-penalties-6_web_client.png b/docs/modules/circulation/assets/images/media/staff-penalties-6_web_client.png new file mode 100644 index 0000000000..e90b4f9b52 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/staff-penalties-6_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/staff-penalties-7_web_client.png b/docs/modules/circulation/assets/images/media/staff-penalties-7_web_client.png new file mode 100644 index 0000000000..9fbc019a8d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/staff-penalties-7_web_client.png differ diff --git a/docs/modules/circulation/assets/images/media/userbucket1.PNG b/docs/modules/circulation/assets/images/media/userbucket1.PNG new file mode 100644 index 0000000000..39bb4a214c Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket1.PNG differ diff --git a/docs/modules/circulation/assets/images/media/userbucket10.png 
b/docs/modules/circulation/assets/images/media/userbucket10.png new file mode 100644 index 0000000000..da96d10ff3 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket10.png differ diff --git a/docs/modules/circulation/assets/images/media/userbucket11.PNG b/docs/modules/circulation/assets/images/media/userbucket11.PNG new file mode 100644 index 0000000000..b2120937ac Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket11.PNG differ diff --git a/docs/modules/circulation/assets/images/media/userbucket12.PNG b/docs/modules/circulation/assets/images/media/userbucket12.PNG new file mode 100644 index 0000000000..33b3a077ce Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket12.PNG differ diff --git a/docs/modules/circulation/assets/images/media/userbucket2.PNG b/docs/modules/circulation/assets/images/media/userbucket2.PNG new file mode 100644 index 0000000000..54d5dc7334 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket2.PNG differ diff --git a/docs/modules/circulation/assets/images/media/userbucket3.PNG b/docs/modules/circulation/assets/images/media/userbucket3.PNG new file mode 100644 index 0000000000..033cc397cb Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket3.PNG differ diff --git a/docs/modules/circulation/assets/images/media/userbucket4.PNG b/docs/modules/circulation/assets/images/media/userbucket4.PNG new file mode 100644 index 0000000000..dd0a893625 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket4.PNG differ diff --git a/docs/modules/circulation/assets/images/media/userbucket7.PNG b/docs/modules/circulation/assets/images/media/userbucket7.PNG new file mode 100644 index 0000000000..8770491fb5 Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket7.PNG differ diff --git a/docs/modules/circulation/assets/images/media/userbucket8.PNG 
b/docs/modules/circulation/assets/images/media/userbucket8.PNG new file mode 100644 index 0000000000..e2e7bc787d Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket8.PNG differ
diff --git a/docs/modules/circulation/assets/images/media/userbucket9.PNG b/docs/modules/circulation/assets/images/media/userbucket9.PNG
new file mode 100644
index 0000000000..32d0d34603
Binary files /dev/null and b/docs/modules/circulation/assets/images/media/userbucket9.PNG differ
diff --git a/docs/modules/circulation/nav.adoc b/docs/modules/circulation/nav.adoc
new file mode 100644
index 0000000000..92b03a56e3
--- /dev/null
+++ b/docs/modules/circulation/nav.adoc
@@ -0,0 +1,10 @@
+* xref:circulation:introduction.adoc[Circulation]
+** xref:circulation:circulating_items_web_client.adoc[Circulating Items]
+** xref:circulation:basic_holds.adoc[Holds Management]
+** xref:circulation:booking.adoc[Booking Module]
+** xref:circulation:circulation_patron_records_web_client.adoc[Circulation - Patron Record]
+** xref:admin:patron_self_registration.adoc[Patron Self-Registration Administration]
+** xref:circulation:triggered_events.adoc[Triggered Events and Notices]
+** xref:circulation:offline_circ_webclient.adoc[Offline Circulation]
+** xref:circulation:self_check.adoc[Self Checkout]
+
diff --git a/docs/modules/circulation/pages/README b/docs/modules/circulation/pages/README
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/modules/circulation/pages/_attributes.adoc b/docs/modules/circulation/pages/_attributes.adoc
new file mode 100644
index 0000000000..fb982443d7
--- /dev/null
+++ b/docs/modules/circulation/pages/_attributes.adoc
@@ -0,0 +1,2 @@
+:moduledir: ..
+include::{moduledir}/_attributes.adoc[]
diff --git a/docs/modules/circulation/pages/basic_holds.adoc b/docs/modules/circulation/pages/basic_holds.adoc
new file mode 100644
index 0000000000..7e24a89b92
--- /dev/null
+++ b/docs/modules/circulation/pages/basic_holds.adoc
@@ -0,0 +1,485 @@
+= Holds Management =
+:toc:
+
+== Placing Holds ==
+
+Holds can be placed by staff in the _Staff Client_ and by patrons in the OPAC. In this chapter we demonstrate placing holds in the _Staff Client_.
+
+== Holds Levels ==
+
+Evergreen has different levels of holds. Library staff can place holds at all levels, while patrons can only place title-level and parts-level holds. The chart below summarizes the levels of holds.
+
+|==============================
+|*Hold level* |*Abbreviation* |*When to use* |*How to use* |*Who can use* |*Hold tied to*
+|Title |T |Patron wants first available copy of a title | Staff or patron click on _Place Hold_ next to the title. | Patron or staff | Holdings attached to a single MARC (title) record
+|Parts |P |Patron wants a particular part of a title (e.g. volume or disk number) | Staff or patron selects the part on the create/edit hold screen when setting holds notification options. |Patron or staff |Holdings with identical parts attached to a single MARC (title) record.
+|Volume |V |Patron or staff want any copy associated with a particular call number | In the staff client, click on _Volume Hold_ under _Holdable?_ |Staff only |Holdings attached to a single call number (volume)
+|Copy |C |Patron or staff want a specific copy of an item |In the staff client, click on _Copy Hold_ under _Holdable?_ |Staff only |A specific copy (barcode)
+|==============================
+
+
+== Title Level Hold ==
+
+[TIP]
+====================
+A default hold expiration date will be displayed if the library has set up a default holds expiration period in their library settings. Uncaptured holds will not be targeted after the expiration date.
+
+If you select the _Suspend this Hold_ checkbox, the hold will be suspended and will not be captured until you activate it.
+====================
+
+. To place a title level hold, retrieve the title record on the catalog and click the _Place Hold_ link beside the title on the search results list, or click the _Place Hold_ link on the title summary screen.
++
+image::media/holds_title_searchresults.png[Search Results with Place Hold link]
++
+. Scan or type the patron's barcode into the _Place hold for patron by
+barcode_ box, or choose _Place this hold for me_.
+. If this title contains multiple parts, you can specify which part to
+request. If you do not select a part, the hold will target any of the
+other copies on this record, that is, those with no parts attached.
+Those copies are usually the complete set, containing all the parts.
+. Edit the patron hold notification and expiration date fields as required.
+Be sure to choose a valid _Pickup location_.
+. Click _Submit_.
++
+image::media/holds_title_options.png[Place Holds screen with Basic Options]
++
+. A confirmation screen appears with the message "Hold was successfully placed".
++
+image::media/holds_title_success.png[Place Holds confirmation screen]
+
+*Advanced Hold Options*
+
+Clicking the *Advanced Hold Options* link will take you into the
+metarecord level hold feature, where you can select multiple formats
+and/or languages, if available.
+
+Selecting multiple formats will not place all of these formats on hold.
+For example, selecting CD Audiobook and Book implies that either the CD
+format or the book format is the acceptable format to fill the hold. If
+no format is selected, then any of the available formats may be used to
+fill the hold. The same holds true for selecting multiple languages.
+
+image::media/holds_title_options_adv.png[Place Hold screen with Advanced Options]
+
+
+== Patron Search from Place Hold ==
+Patron Search from Place Hold allows staff members, when placing a hold on behalf of a patron in the web staff client, to search for patrons by name and other searchable patron information, rather than relying on the barcode alone.
+
+
+=== To use Patron Search from Place Hold ===
+1. After performing a search in the catalog, staff will retrieve a bibliographic record.
+2. Click *Place Hold* either in the search results or within the detailed bibliographic record. The Place Hold screen will appear. Note: this feature also appears when placing volume level holds and copy level holds.
++
+image::media/PlaceHold-0.JPG[]
++
+3. Next to _Place Hold for patron by barcode_, click on *Patron Search*. Please note that Patron Search will only appear in this interface when using the web-based staff client. It will not appear in the patron-facing OPAC.
++
+image::media/PlaceHold-1.JPG[]
++
+4. A dialog box will appear with the patron search interface used elsewhere in the staff client. By default, the search scopes to your workstation org unit, and you can search by patron last name, first name, and middle name.
++
+image::media/PlaceHold-2.JPG[]
++
+Clicking the *arrow icon* to the right of _Clear Form_ expands or condenses the searchable fields display, which includes other patron information.
++
+image::media/PlaceHold-3.JPG[]
++
+5. To search for a patron, fill out the relevant search fields, and click *Search* or press ENTER on your keyboard. Results will appear under Patron Search Results in the lower half of the screen.
++
+image::media/PlaceHold-4.JPG[]
++
+6. Click the row of the desired patron account, and click *Select*.
++
+image::media/PlaceHold-5.JPG[]
++
+7. The dialog box will close and the selected patron's barcode will appear next to _Place Hold for patron by barcode_.
This will cause the patron's hold notification preferences to appear in the relevant fields in the bottom half of the screen. Changes to the Hold Notification preferences can be made before clicking *Submit* to finish placing a hold for the patron. ++ +image::media/PlaceHold-6.JPG[] + +== Parts Level Hold == + +. To place a parts level hold, retrieve a record with parts-level items +attached to the title, such as a multi-disc DVD, an annual travel guide, +or a multi-volume book set. +. Place the hold as you would for a title-level hold, including patron +barcode, notification details, and a valid pickup location. +. Select the applicable part from the _Select a Part_ dropdown menu. +. Click _Submit_. ++ +image::media/holds_title_options.png[Place Holds screen with Basic Options] ++ +[TIP] +=============== +Requested parts are listed in the _Holdable Part_ column in hold records. Use the _Column Picker_ to display it when the hold record is displayed. +=============== + +== Placing Holds in Patron Records == + +. Holds can be placed from patron records too. In the patron record on the _Holds_ screen, click the _Place Hold_ button in the top left corner. + +. The catalog is displayed in the _Holds_ screen to search for the title on which you want to place a hold. + +. Search for the title and click the _Place Hold_ link. + +. The patron’s account information is retrieved automatically. Set up the notification and expiration date fields. Click _Place Hold_ and confirm your action in the pop-up window. + +. You may continue to search for more titles. Once you are done, click the _Holds_ button at the top to go back to the _Holds_ screen. Click the _Refresh_ button to display your newly placed holds. + +=== Placing Multiple Holds on Same Title === + +After a successful hold placement, staff have the option to place another hold on the same title by clicking the link _Place another hold for this title_. 
This returns to the hold screen, where a different patron's information can be entered. + +image::media/place-another-hold-1.png[place-another-hold-1] + +This feature can be useful for book groups or new items where a list of waiting patrons needs to be transferred into the system. + + +== Managing Holds == + +Holds can be cancelled at any time by staff or patrons. Before a hold is captured, staff or patrons can suspend it or set it as inactive for a period of time without losing its hold queue position, activate a suspended hold, and change the +notification method, phone number, pick-up location (for multi-branch libraries only), expiration date, activation date for inactive holds, etc. Once a hold is captured, staff can change the pickup location and extend the hold shelf +time if required. + +Staff can edit holds in either the patron’s record or the title record. Patrons can edit their holds in their account on the OPAC. + +[TIP] +============== +If you use the column picker to change the holds display from one area of the staff client (e.g. the patron record), it will change the display for all parts of the staff client that deal with holds, including the title record holds +display, the holds shelf display, and the pull list display. +============== + + +[#actions_for_selected_holds] +=== Actions for Selected Holds === + +. Retrieve the patron record and go to the _Holds_ screen. +. Highlight the hold record, then select _Actions_. ++ +image::media/holds-managing-1.png[holds-managing-1] ++ +. Manage the hold by choosing an action on the list. +.. If you want to cancel the hold, click _Cancel Hold_ from the menu. You are prompted to select a reason and put in a note if required. To finish, click _Apply_. ++ +image::media/holds-managing-2.JPG[holds-managing-2] ++ +[NOTE] +============= +A captured hold with a status of _On Hold Shelf_ can be cancelled by either staff or patrons, but the status of the item will not change until staff check it in. +============= +.. 
If you want to suspend a hold or activate a suspended hold, click the appropriate action on the list. You will be prompted to confirm your action. Suspended holds have a _No_ value in the _Active?_ column. ++ +[NOTE] +=============== +Suspended holds will not be filled, but their hold positions will be kept. They will automatically become active on the activation date if there is an activation date in the record. Without an activation date, the holds will remain inactive until staff or a patron activates them manually. +=============== + +.. You may edit the _Activation Date_ and _Expiration Date_ by using the corresponding action on the _Actions_ dropdown menu. You will be prompted to enter the new date. Use the calendar widget to choose a date, then click _Apply_. Use the _Clear_ button to unset the date. ++ +image::media/holds-managing-4.JPG[holds-managing-4] ++ + +.. Hold shelf expire time is automatically recorded in the hold record when a hold is filled. You may edit this time by using _Edit Shelf Expire Time_ on the _Actions_ dropdown menu. You will be prompted to enter the new date. Use the calendar widget to choose a date, then click _Apply_. + +.. If you want to enable or disable phone notification or change the phone number, click _Edit Notification Settings_. You will be prompted to enter the new phone number. Make sure you enter a valid and complete phone number. The phone number is used for this hold only and can be different from the one in the patron account. It has no impact on the patron account. If you leave it blank, no phone number will be printed on the hold slip. If you want to enable or disable email notification for the hold, check _Send Emails_ on the prompt screen. ++ +image::media/holds-managing-5_and_6.JPG[holds-managing-5_and_6] ++ + +.. The pickup location can be changed by clicking _Edit Pickup Library_. Click the dropdown list of all libraries and choose the new pickup location. Click _Submit_. 
++ +image::media/holds-managing-7.JPG[holds-managing-7] ++ +[NOTE] +============== +Staff can change the pickup location for holds with in-transit status. The item will be sent in transit to the new destination. Staff cannot change the pickup location once an item is on the holds shelf. +============== + +.. The item’s physical condition is recorded in the copy record as _Good_ or _Mediocre_ in the _Quality_ field. You may request that your holds be filled with copies of good quality only. Click _Set Desired Copy Quality_ on the +_Actions_ list. Make your choice in the pop-up window. ++ +image::media/holds-managing-8.JPG[holds-managing-8] + + +=== Transferring Holds === + +. Holds on one title can be transferred to another with the hold request +time preserved. To do so, you need to find the destination title and +click _Mark for:_ -> _Title Hold Transfer_. ++ +image::media/holds-managing-9.png[holds-managing-9] ++ +. Select the hold you want to transfer. Click _Actions_ -> _Transfer to Marked Title_. ++ +image::media/holds-managing-10.JPG[holds-managing-10] + +=== Cancelled Holds === + +. Cancelled holds can be displayed. Click the _Recently Cancelled Holds_ button on the _Holds_ screen. ++ +image::media/holds-managing-11.JPG[holds-managing-11] ++ +. You can un-cancel holds. ++ +image::media/holds-managing-12.JPG[holds-managing-12] ++ +Depending on your library’s settings, the hold request time may be reset when a hold is un-cancelled. + + +=== Viewing Details & Adding Notes to Holds === + +. You can view details of a hold by selecting a hold then clicking the _Detail View_ button on the _Holds_ screen. ++ +image::media/holds-managing-13.JPG[holds-managing-13] ++ +. You may add a note to a hold in the _Detail View_. ++ +image::media/holds-managing-14.JPG[holds-managing-14] ++ +. Notes can be printed on the hold slip if the _Print on slip?_ checkbox +is selected. Enter the message, then click _OK_. 
++ +image::media/holds-managing-15.JPG[holds-managing-15] + + +=== Displaying Queue Position === + +Using the Column Picker, you can display _Queue Position_ and _Total number of Holds_. + +image::media/holds-managing-16.png[holds-managing-16] + + +=== Managing Holds in Title Records === + +. Retrieve and display the title record in the catalog. +. Click _Actions_ -> _View Holds_. ++ +image::media/holds-managing-17.png[holds-managing-17] ++ +. All holds on this title to be picked up at your library are displayed. Use the _Pickup Library_ to view holds to be picked up at other libraries. ++ +image::media/holds-managing-18.png[holds-managing-18] ++ +. Highlight the hold you want to edit. Choose an action from the +_Actions_ menu. For more information see the +xref:#actions_for_selected_holds[Actions for Selected Holds] section. For +example, you can retrieve the hold requestor’s account by selecting +_Retrieve Patron_ from this menu. ++ +image::media/holds-managing-19.png[holds-managing-19] + + +=== Retargeting Holds === + +Holds need to be retargeted whenever a new item is added to a record, or after some types of item status changes, for instance when an item is changed from _On Order_ to _In Process_. The system does not automatically recognize the newly added items as available to fill holds. + +. View the holds for the item. + +. Highlight all the holds for the record, which have a status of _Waiting for Copy_. If there are a lot of holds, it may be helpful to sort the holds by _Status_. + +. Click on the head of the status column. + +. Under _Actions_, select _Find Another Target_. + +. A window will open asking if you are sure you would like to reset the holds for these items. + +. Click _Yes_. Nothing may appear to happen, or if you are retargeting a lot of holds at once, your screen may go blank or seem to freeze for a moment while the holds are retargeted. + +. When the screen refreshes, the holds will be retargeted. 
The system will now recognize the new items as available for holds. + + +=== Pulling & Capturing Holds === + +==== Holds Pull List ==== + +There are usually four statuses a hold may have: _Waiting for Copy_, _Waiting for Capture_, _In Transit_ and _Ready for Pickup_. + +. *Waiting for Copy*: all holdable copies are checked out or not available. + +. *Waiting for Capture*: an available copy is assigned to the hold. The item shows up on the _Holds Pull List_ waiting for staff to search the shelf and capture the hold. + +. *In Transit*: holds are captured at a non-pickup branch and on the way to the pick-up location. + +. *Ready for Pickup*: holds are captured and items are on the _Hold Shelf_ waiting for patrons to pick up. + +Besides capturing holds when checking in items, Evergreen matches holds with available items in your library at regular +intervals. Once a matching copy is found, the item’s barcode number is assigned to the hold and the item is put on the _Holds Pull List_. Staff can print the _Holds Pull List_ and search for the items on shelves. + +. To retrieve your _Holds Pull List_, select _Circulation_ -> _Pull List for Hold Requests_. ++ +image::media/holds-pull-1.png[holds-pull-1] ++ +. The _Holds Pull List_ is displayed. You may re-sort it by clicking the column labels, e.g. _Title_. You can also add fields to the display by using the column picker. ++ +image::media/holds-pull-2.png[holds-pull-2] ++ +[NOTE] +=========== +Column adjustments will only affect the screen display and the CSV download for the holds pull list. They will not affect the printable holds pull list. +=========== + +. The following options are available for printing the pull list: + +* _Print Full Pull List_ prints _Title_, _Author_, _Shelving Location_, _Call Number_ and _Item Barcode_. This method uses less paper than the alternate strategy. + +* _Print Full Pull List (Alternate Strategy)_ prints the same fields as the above option but also includes a patron barcode. 
This list will also first sort by copy location, as ordered under _Admin_ -> _Local Administration_ -> _Copy Location Order_. + +* _Download CSV_ – This option is available from the _List Actions_ button (adjacent to the _Page "#"_ button) and saves all fields in the screen display to a CSV file. This file can then be opened in Excel or another spreadsheet program. This option provides more flexibility in identifying fields that should be printed. ++ +image::media/holds-pull-4.png[holds-pull-4] ++ +With the CSV option, if you are including barcodes in the holds pull list, you will need to take the following steps to make the barcode display properly: in Excel, select the entire barcode column, right-click and select _Format Cells_, click _Number_ as the category and then reduce the number of decimal places to 0. + +. You may perform hold management tasks by using the _Actions_ dropdown list. + +The _Holds Pull List_ is updated constantly. Once an item on the list is no longer available or a hold on the list is captured, the item will disappear from the list. The _Holds Pull List_ should be printed at least once a day. + +==== Capturing Holds ==== + +Holds can be captured when a checked-out item is returned (checked in) or an item on the _Holds Pull List_ is retrieved and captured. When a hold is captured, the hold slip will be printed and, if the patron has chosen to be notified by email, the email notification will be sent out. The item should be put on the hold shelf. + +. To capture a hold, select _Circulation_ -> _Capture Holds_ (or press +_Shift-F2_). ++ +image::media/holds-pull-5.png[holds-pull-5] ++ +. Scan or type the item barcode and click _Submit_. ++ +image::media/holds-pull-6.png[holds-pull-6] ++ +. The following hold slip is automatically printed. If your workstation +is not set up for silent printing (via Hatch), then a print window will appear. ++ +image::media/holds-pull-7.png[holds-pull-7] ++ +. 
If the item should be sent to another location, a hold transit slip +will be printed. If your workstation is not set up for silent printing +(via Hatch), then another print window will appear. ++ +[TIP] +=============== +If a patron has an _OPAC/Staff Client Holds Alias_ in his/her account, it will be used on the hold slip instead of the patron’s name. Holds can also be captured on the _Circulation_ -> _Check In Items_ screen, where you have more control over automatic slip printing. +=============== + + +=== Handling Missing and Damaged Items === + +If an item on the holds pull list is missing or damaged, you can change its status directly from the holds pull list. + +. From the _Holds Pull List_, right-click on the item and either select _Mark Item Missing_ or _Mark Item Damaged_. ++ +image::media/holds-pull-9.png[holds-pull-9] ++ +. Evergreen will update the status of the item and will immediately retarget the hold. + + +=== Holds Notification Methods === + +. In Evergreen, patrons can set up their default holds notification method in the _Account Preferences_ area of _My Account_. Staff cannot set these preferences for patrons; the patrons must do it when they are logged into the public catalog. ++ +image::media/holds-notifications-1.png[holds-notifications-1] ++ +. Patrons with a default notification preference for phone will see their phone number at the time they place a hold. The checkboxes for email and phone notification will also automatically be checked (if an email or phone number has been assigned to the account). ++ +image::media/holds-notifications-2.png[holds-notifications-2] ++ +. The patron can remove these checkmarks at the time they place the hold or they can enter a different phone number if they prefer to be contacted at a different number. The patron cannot change their e-mail address at this time. + +. 
When the hold becomes available, the holds slip will display the patron’s e-mail address only if the patron selected the _Notify by Email by default when a hold is ready for pickup?_ checkbox. It will display a phone number only if the patron selected the _Notify by Phone by default when a hold is ready for pickup?_ checkbox. + +[NOTE] +If the patron changes their contact telephone number when placing the hold, this phone number will display on the holds slip. It will not necessarily be the same phone number contained in the patron’s record. + + +=== Clearing Shelf-Expired Holds === + +. Items with _Ready-for-Pickup_ status are on the _Holds Shelf_. The _Holds Shelf_ can help you manage items on the holds shelf. To see the holds shelf list, select _Circulation_ -> _Holds Shelf_. ++ +image::media/holds-clearing-1.png[holds-clearing-1] ++ +. The _Holds Shelf_ is displayed. Note the _Actions_ menu is available, as in the patron record. ++ +You can cancel stale holds here. ++ +image::media/holds-clearing-2.png[holds-clearing-2] ++ +. Use the column picker to add and remove fields from this display. Two fields you may want to display are _Shelf Expire Time_ and _Shelf Time_. ++ +image::media/holds-clearing-3.png[holds-clearing-3] ++ +. Click the _Show Clearable Holds_ button to list expired holds, wrong-shelf holds and canceled holds only. Expired holds are holds that expired before today's date. ++ +image::media/holds-clearing-4.png[holds-clearing-4] ++ +. Click the _Print Full List_ button if you need a printed list. To format the printout customize the *Holds Shelf* receipt template. This can be done in _Administration_ -> _Workstation_ -> _Print Templates_. + +. The _Clear These Holds_ button becomes enabled when viewing clearable +holds. Click it and the expired holds will be canceled. + +. Bring items down from the hold shelf and check them in. 
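
The clearable-hold filtering described above treats a hold as expired when its shelf expire time fell before today's date. The following is an illustrative sketch only, not Evergreen's server-side implementation: the interval value and function names are hypothetical stand-ins for the library-configured shelf expire interval.

```python
from datetime import datetime, timedelta

# Illustrative sketch only; Evergreen computes this server-side from a
# library setting. The 7-day interval below is a hypothetical example.
def shelf_expire_time(captured_at: datetime, interval_days: int) -> datetime:
    """Shelf expire time = capture time plus the configured interval."""
    return captured_at + timedelta(days=interval_days)

def is_shelf_expired(expire_time: datetime, today: datetime) -> bool:
    """A hold is clearable once its shelf expire time fell before today."""
    return expire_time.date() < today.date()

captured = datetime(2020, 9, 1, 10, 30)
expires = shelf_expire_time(captured, interval_days=7)
print(expires.date())                                     # 2020-09-08
print(is_shelf_expired(expires, datetime(2020, 9, 10)))   # True
```

Note that a hold whose expire time is today is not yet clearable under this rule; only holds that expired before today are listed.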
+ +[IMPORTANT] +============= +If you cancel a ready-for-pickup hold, you must check in the item to make it available for circulation or trigger the next hold in line. +============= + +Hold shelf expire time is inserted when a hold achieves on-hold-shelf status. It is calculated based on the interval entered in _Local Admin_ -> _Library Settings_ -> _Default hold shelf expire interval_. + +[NOTE] +=========== +The clear-hold-shelf function cancels shelf-expired holds only. It does not include holds cancelled by patrons. Staff need to trace these items manually according to the hold slip date. +=========== + + +== Alternate Hold Pick up Location == + +*Abstract* + +This feature enables libraries to configure an alternate hold pick up +location. The alternate pick up location will appear in the staff +client to inform library staff that a patron has a hold waiting at that +location. In the stock Evergreen code, the default alternate location +is called "Behind Desk". + +*Configuration* + +The alternate pick up location is disabled in Evergreen by default. It +can be enabled by setting *Holds: Behind Desk Pickup Supported* to +'True' in the Library Settings Editor. + +Libraries can also choose to give patrons the ability to opt in to pick up holds at the alternate location through their OPAC account. To add this option, set the *OPAC/Patron Visible* field in the User Setting Type *Hold is behind Circ Desk* to 'True'. The User Setting Types can be found under *Administration -> Server Administration -> User Setting Types*. + +*Display* + +When enabled, the alternate pick up location will be displayed under the +Holds button in the patron account. + +image::media/custom_hold_pickup_location1.png[Custom Hold Pickup Location] + + +If configured, patrons will see the option to opt in to the alternate location in the _Account Preferences_ section of their OPAC Account. 
+ +image::media/custom_hold_pickup_location2.jpg[OPAC Account] + + +== Display Hold Types on Pull Lists == + +This feature ensures that the hold type can be displayed on all hold interfaces. + +You will find the following changes to the hold type indicator: + +. The hold type indicator will display by default on all XUL-based hold +interfaces. XUL-based hold interfaces are those that number the items on the +interface. This can be overridden by saving column configurations that remove +the _Type_ column. +. The hold type indicator will display by default on the HTML-based pull list. +To access, click _Circulation_ -> _Pull List for Hold Requests_ -> _Print Full +Pull List (Alternate Strategy)_. +. The hold type indicator can be added to the Simplified Pull List. To access, +click _Circulation_ -> _Pull List for Hold Requests_ -> _Simplified Pull List +Interface_. + +To add the hold type indicator to the simplified pull list, click _Simplified +Pull List Interface_, and right click on any of the column headers. The Column +Picker appears in a pop up window. Click the box adjacent to _Hold Type_, and +Click _Save_. The _Simplified Pull List Interface_ will now include the hold +type each time that you log into the staff client. + +image::media/Display_Hold_Types_on_Pull_Lists1.jpg[Display_Hold_Types_on_Pull_Lists1] diff --git a/docs/modules/circulation/pages/booking.adoc b/docs/modules/circulation/pages/booking.adoc new file mode 100644 index 0000000000..c3a2453e4d --- /dev/null +++ b/docs/modules/circulation/pages/booking.adoc @@ -0,0 +1,180 @@ += Booking Module = +:toc: + +== Creating a Booking Reservation == + +indexterm:[scheduling,resources using the booking module] +indexterm:[booking,reserving a resource] +indexterm:[booking,creating a reservation] +indexterm:[reserving a bookable resource] + +[NOTE] +The "Create a booking reservation" screen uses your library's timezone. 
If you create a reservation at a library +in a different timezone, Evergreen will alert you and provide the time in both your timezone and the other library's +timezone. + +Only staff members may create reservations. A reservation can be started from a patron record or a booking resource. +To reserve catalogued items, you may start by searching the catalogue if you do not know the booking item's barcode. + +=== To create a reservation from a patron record === + +. Retrieve the patron's record. +. Select Other -> Booking -> Create Reservations. This takes you to the Create Reservations Screen. +. If you want to create a reservation that lasts less than a day (such as for a study room), select _Single-day reservation_ +as the reservation type. If your reservation will last several days (such as for a video camera needed for a class project), +select _Multiple-day reservation_. +. In the area labeled "Reservation details", select the _Choose resource by barcode_ tab if you know the specific barcode +of a resource you'd like to reserve. Otherwise, select the _Choose resource by type_ tab. +. A schedule grid will display on the bottom part of the screen. +. If necessary, adjust the day or days that are displayed. You can also make other adjustments using the _Schedule settings_ +tab. +. For non-catalogued resources, patrons may wish to specify certain attributes. The _Attributes_ tab allows you to do this. +For example, if a patron is booking a laptop, they can choose between PC and Mac laptops if they need to. +. When you have found the days or times that work best, you can proceed with creating the reservation by doing one +of the following: +** Double click the appropriate row in the grid. +** Use the tab and space keys to select the appropriate rows, +then press Shift+F10 to open the actions menu. Select +"Create Reservation". +** Select the appropriate rows in the grid, then right click +to open the actions menu. Select "Create Reservation". 
+** Select the appropriate rows in the grid, then select the +actions button. Select "Create Reservation". +. Adjust the values in this screen as necessary. +. Select the "Confirm reservation" button. +. The screen will refresh, and the new reservation will appear in the schedule. + + +=== Search the catalogue to create a reservation === + +If you would like to reserve a catalogued item but do not know the item barcode, you may start with a catalogue search. + +. Select Cataloguing -> Search the Catalogue to search for the item you wish to reserve. You may search by any +bibliographic information. +. Select the _Holdings View_ tab. +. Right-click on the row that you want to reserve. Select _Book Item Now_. This takes you to the Create Reservations Screen. +. If you want to create a reservation that lasts less than a day (such as for a study room), select _Single-day reservation_ +as the reservation type. If your reservation will last several days (such as for a video camera needed for a class project), +select _Multiple-day reservation_. +. A schedule grid will display on the bottom part of the screen. +. If necessary, adjust the day or days that are displayed. You can also make other adjustments using the _Schedule settings_ +tab. +. When you have found the days or times that work best, you can proceed with creating the reservation by doing one +of the following: +** Double click the appropriate row in the grid. +** Use the tab and space keys to select the appropriate rows, +then press Shift+F10 to open the actions menu. Select +"Create Reservation". +** Select the appropriate rows in the grid, then right click +to open the actions menu. Select "Create Reservation". +** Select the appropriate rows in the grid, then select the +actions button. Select "Create Reservation". +. Enter the patron's barcode. +. Adjust the values in this screen as necessary. +. Select the "Confirm reservation" button. +. 
The screen will refresh, and the new reservation will appear in the schedule. + + +[NOTE] +Reservations on catalogued items can also be created on the Item Status (F5) screen. Select the item, then Actions -> Book Item Now. + +== Reservation Pull List == + +indexterm:[booking,pull list] +indexterm:[pull list,booking] + +The reservation pull list can be generated dynamically in the Staff Client. + +. To create a pull list, select Booking -> Pull List. + +. You can decide how many days in advance you would like to pull reserved items. Enter the number of days in the box +adjacent to Generate list for this many days hence. For example, if you would like to pull items that are needed today, +you can enter 1 in the box, and you will retrieve items that need to be pulled today. + +. The pull list will appear. Select the actions button, then _Print_ to print the pull list. + +== Capturing Items for Reservations == + +indexterm:[booking,capturing reservations] + +Depending on your library's workflow, reservations may need to be captured before they are ready to be picked up by the patron. + +[CAUTION] +Always capture reservations in the Booking Module. The Check In function in Circulation does not work the same way as Capture Resources. + +1) In the staff client, select Booking -> Capture Resources. + +image::media/booking-capture-1_web_client.png[] + +2) Scan the item barcode, or type the barcode and click Capture. + +image::media/booking-capture-2_web_client.png[] + +3) The message Capture succeeded will appear to the right. Information about the item will appear below the message. Click the Print button to print a slip for the reservation. + +image::media/booking-capture-3.png[] + + +== Picking Up Reservations == + +indexterm:[booking,picking up reservations] +indexterm:[booking,checkout] +indexterm:[checkout,booking resources] + +[CAUTION] +Always use the dedicated Booking Module interfaces for tasks related to reservations. 
Items that have been captured for a +reservation cannot be checked out using the Check Out interface, even if the patron is the reservation recipient. + +1) Ready-for-pickup reservations can be listed from Other -> Booking -> Pick Up Reservations within a patron record or Booking -> Pick Up Reservations. + +2) Scan the patron barcode if using Booking -> Pick Up Reservations. + +3) The reservation(s) available for pickup will display. Select those you want to pick up and double click them. + +4) The screen will refresh to show that the patron has picked up the reservation(s). + + +== Returning Reservations == + +indexterm:[booking,returning reservations] +indexterm:[booking,checkin] +indexterm:[checkin,booking resources] + +[CAUTION] +When a reserved item is brought back, staff must use the Booking Module to return the reservation. + +1) To return reservations, select Booking -> Return Reservations + +2) You can return the item by patron or item barcode. Here we choose Resource to return by item barcode. Scan or enter the barcode, and click Go. + +3) A pop up box will tell you that the item was returned. Click OK on the prompt. + +4) If we select Patron on the above screen, after scanning the patron's barcode, reservations currently out to that patron are displayed. Highlight the reservations you want to return, and double click them. + +5) The screen will refresh to show any resources that remain out and the reservations that have been returned. + +[NOTE] +Reservations can be returned from within patron records by selecting Other -> Booking -> Return Reservations + +== Cancelling a Reservation == + +indexterm:[booking,canceling reservations] + +A reservation can be cancelled in a patron's record or reservation creation screen. + +=== Cancel a reservation from the patron record === + +1) Retrieve the patron's record. + +2) Select Other -> Booking -> Manage Reservations. + +3) The existing reservations will appear at the bottom of the screen. 
+ +4) Highlight the reservation that you want to cancel. Select the Actions menu, then select _Cancel Selected_. + +5) A pop-up window will confirm the cancellation. Click OK on the prompt. + +6) The screen will refresh, and the cancelled reservation(s) will disappear. + + + diff --git a/docs/modules/circulation/pages/circulating_items_web_client.adoc b/docs/modules/circulation/pages/circulating_items_web_client.adoc new file mode 100644 index 0000000000..d1d7e0eeab --- /dev/null +++ b/docs/modules/circulation/pages/circulating_items_web_client.adoc @@ -0,0 +1,472 @@ += Circulating Items = +:toc: + +== Check Out == + +=== Regular Items === + +1) To check out an item, click *Check Out Items* from the Circulation and Patrons toolbar, or select *Circulation* -> *Check Out*. + +image::media/checkout_menu_web_client.png[] + +2) Scan or enter the patron's barcode, clicking *Submit* if entering the barcode manually. If scanning, the number is submitted automatically. + +image::media/retrieve_patron_web_client.png[] + +3) Scan the item barcode, or enter it manually and click *Submit*. + +image::media/checkout_item_barcode_web_client.png[] + +4) The due date is now displayed. + +image::media/due_date_display_web_client.png[] + +5) When all items are scanned, click the *Done* button to generate a slip receipt, or to exit the patron record if not printing slip receipts. + +=== Pre-cataloged Items === + +1) Go to the patron's *Check Out* screen by clicking *Circulation* -> *Check Out Items*. + +2) Scan the item barcode. + +3) At the prompt, enter the required information and click *Precat Checkout*. + +image::media/precat_web_client.png[] + +[TIP] +On check-in, Evergreen will prompt staff to re-route the item to cataloging. + +[NOTE] +This screen does not respond to the enter key or carriage return provided +by a barcode scanner when the cursor is in the ISBN field. This behavior +prevents pre-cataloged items from being checked out before you are done +entering all the desired information. 
+ +[NOTE] +This requires the _CREATE_PRECAT_ permission. All form elements in the +dialog other than the Cancel button will be disabled if the current user +lacks the CREATE_PRECAT permission. + +=== Due Dates === + +Circulation periods are pre-set. When items are checked out, due dates are automatically calculated and inserted into circulation records if the *Specific Due Date* checkbox is not selected on the Check Out screen. The *Specific Due Date* checkbox allows you to set a different due date to override the pre-set loan period. + +Before you scan the item, select the *Specific Due Date* checkbox. Enter the date in yyyy-mm-dd format. This date applies to all items until you change the date, de-select the *Specific Due Date* checkbox, or quit the patron record. + +image::media/specify_due_date1_web_client.png[] + + +=== Email Checkout Receipts === + +This feature allows patrons to receive checkout receipts through email at the circulation desk and in the Evergreen self-checkout interface. Patrons need to opt in to receive email receipts by default and must have an email address associated with their account. Opt in can be staff mediated at the time of account creation or in existing accounts. Patrons can also opt in directly in their OPAC account or through patron self-registration. This feature does not affect the behavior of checkouts from SIP2 devices. + +==== Staff Client Check Out ==== + +When a patron has opted to receive email checkout receipts by default, an envelope icon representing email will appear next to the receipt options in the Check Out screen. A printer icon representing a physical receipt appears if the patron has not opted in to the default email receipts. + +image::media/ereceipts5_web_client.PNG[] + +Staff can click *Quick Receipt* and the default checkout receipt option will be triggered—an email will be sent or the receipt will print out. 
The Quick Receipt option allows staff to stay in the patron account after completing the transaction. Alternatively, staff can click *Done* to trigger the default checkout receipt and close out the patron account. By clicking on the arrow next to the Quick Receipt or Done buttons, staff can select which receipt option to use, regardless of the selected default. The email receipt option will be disabled if the patron account does not have an email address. + +==== Self Checkout ==== + +In the Self Checkout interface, patrons will have the option to select a print or email checkout receipt, or no receipt. The radio button for the patron's default receipt option will be selected automatically in the interface. Patrons can select a different receipt option if desired. The email receipt radio button will be disabled if there is no email address associated with the patron's account. + +image::media/ereceipts6_web_client.PNG[] + +==== Opt In ==== + +*Staff Mediated Opt In At Registration* + +Patrons can be opted in to receive email checkout receipts by default by library staff upon the creation of their library account. Within the patron registration form, there is a new option below the Email Address field to select _Email checkout receipts by default?_. Select this option if the patron wants email checkout receipts to be their default. Save any changes. + +image::media/ereceipts1_web_client.PNG[] + +*Staff Mediated Opt In After Registration* + +Staff can also select email checkout receipts as the default option in a patron account after initial registration. Within the patron account go to *Edit* and select _Email checkout receipts by default?_. Make sure the patron also has an email address associated with their account. Save any changes. 
+ +image::media/ereceipts2_web_client.PNG[] + +*Patron Opt In – Self-Registration Form* + +If your library offers patrons the ability to request a library card through the patron self-registration form, they can select email checkout receipts by default in the initial self-registration form: + +image::media/ereceipts3_web_client.PNG[] + +*Patron Opt In - OPAC Account* + +Patrons can also opt in to receive email checkout receipts by default directly in their OPAC account. After logging in, patrons can go to *Account Preferences->Notification Preferences* and enable _Email checkout receipts by default?_ and click *Save*. + +image::media/ereceipts4_web_client.PNG[] + + +==== Email Checkout Receipt Configuration ==== + +Email checkout receipts will be sent out through a Notifications/Action Trigger called Email Checkout Receipt. The email template and action trigger can be customized by going to *Administration->Local Administration->Notifications/Action Trigger->Email Checkout Receipt*. + + +== Check In == + +=== Regular check in === + +1) To check in an item click *Check In Items* from the Circulation and Patrons toolbar, or select *Circulation* -> *Check In*. + +image::media/check_in_menu_web_client.png[] + +2) Scan item barcode or enter manually and click *Submit*. + +image::media/checkin_barcode_web_client.png[] + +3) If there is an overdue fine associated with the checkin, an alert will appear at the top of the screen with a fine tally for the current checkin session. To immediately handle fine payment, click the alert to jump to the patron's bill record. + +image::media/overdue_checkin_web_client.png[] + +4) If the checkin is an item that can fill a hold, a pop-up box will appear with patron contact information or routing information for the hold. + +5) Print out the hold or transit slip and place the item on the hold shelf or route it to the proper library. 
+ +6) If the item is not in a state acceptable for hold/transit (for instance, it is damaged), select the line of the item, and choose *Actions* -> *Cancel Transit*. The item will then have a status of _Canceled Transit_ rather than _In Transit_. + +image::media/Check_In-Cancel_Transit.png[Actions Menu - Cancel Transit] + +=== Backdated check in === + +This is useful for clearing a book drop. + +1) To change the effective check-in date, select *Circulation* -> *Check In Items*. In the *Effective Date* field, enter the date in yyyy-mm-dd format. + +image::media/backdate_checkin_web_client.png[] + +2) The new effective date is now displayed in the red bar above the Barcode field. + +image::media/backdate_red_web_client.png[] + +3) Move the cursor to the *Barcode* field. Scan the items. When finished with backdated check-ins, change the *Effective Date* back to today's date. + +=== Backdate Post-Checkin === + +After an item has been checked in, you may use the Backdate Post-Checkin function to backdate the check-in date. + +1) Select the item on the Check In screen, then click *Actions* -> *Backdate Post-Checkin*. + +image::media/backdate_post_checkin_web_client.png[] + +2) In the *Effective Date* field, enter the date in yyyy-mm-dd format. The check-in date will be adjusted according to the new effective check-in date. + +image::media/backdate_post_date_web_client.png[] + +.Checkin Modifiers +[TIP] +=================================================== +At the bottom right corner there is a *Checkin Modifiers* pop-up list. The options are: + +- *Ignore Pre-cat Items*: No prompt when checking in a pre-cat item. The item will be routed to Cataloguing with Cataloguing status. + +- *Suppress Holds and Transit*: The item will not be used to fill holds or sent in transit. The item has Reshelving status. + +- *Amnesty Mode/Forgive Fines*: Overdue fines will be voided if already created, or not inserted if not yet created (e.g. for hourly loans).
+ +- *Auto-Print Hold and Transit Slips*: Slips will be printed automatically without a prompt for confirmation. + +- *Clear Holds Shelf*: Checking in hold-shelf-expired items will clear the items from the hold shelf (holds to be cancelled). + +- *Retarget Local Holds*: When checking in in-process items that are owned by the library, attempt to find a local hold to retarget. This is intended to help with proper targeting of newly-catalogued items. + +- *Retarget All Statuses*: Similar to Retarget Local Holds, this modifier will attempt to find a local hold to retarget, regardless of the status of the item being checked in. This modifier must be used in conjunction with the Retarget Local Holds modifier. + +- *Capture Local Holds as Transits*: With this checkin modifier, any local holds will be given an In Transit status instead of On Holds Shelf. The intent is to stop the system from sending hold notifications before the item is ready to be placed on the holds shelf; the item will have a status of In Transit until it is checked in again. If you wish to simply delay notification and allow time for staff to process the item to the holds shelf, you may wish to use the Hold Shelf Status Delay setting in the Library Settings Editor instead. See the Local Administration section for more information. + +- *Manual Floating Active*: Floating Groups must be configured for this modifier to function. The manual flag in Floating Groups dictates whether or not the "Manual Floating Active" checkin modifier needs to be active for a copy to float. This allows for greater control over when items float. + +- *Update Inventory*: When this checkin modifier is selected, scanned barcodes will have the current date/time added as the inventory date while the item is checked in. + +These options may be selected simultaneously. The selected options are displayed in the header area.
+ +image::media/checkinmodifiers-with-inventory2.png[Web client check-in modifiers] +=================================================== + +== Renewal and Editing the Item's Due Date == + +Checked-out items can be renewed if your library's policy allows it. The new due date is calculated from the renewal date. Existing loans can also be extended to a specific date by editing the due date or renewing with a specific due date. + +=== Renewing via a Patron's Account === + +1) Retrieve the patron record and go to the *Items Out* screen. + +image::media/items_out_click_web_client.png[] + +2) Select the item you want to renew. Click on *Actions* -> *Renew*. If you want to renew all items in the account, click *Renew All* instead. + +image::media/renew_action_web_client.png[] + +3) If you want to specify the due date, click *Renew with Specific Due Date*. You will be prompted to select a due date. Once done, click *Apply*. + +//image::media/renew_specific_date_web_client.png[] + + +=== Renewing by Item Barcode === +1) To renew items by barcode, select *Circulation* -> *Renew Items*. + +2) Scan or manually enter the item barcode. + +image::media/renew_item_web_client.png[] + +3) If you want to specify the due date, click *Specific Due Date* and enter a new due date in yyyy-mm-dd format. + +image::media/renew_item_calendar_web_client.png[] + +=== Editing Due Date === + +1) Retrieve the patron record and go to the *Items Out* screen. + +2) Select the item whose due date you want to edit. Click on *Actions* -> *Edit Due Date*. + +image::media/edit_due_date_action_web_client.png[] + +3) Enter a new due date in yyyy-mm-dd format in the pop-up window, then click *OK*. + +[NOTE] +Editing a due date is not included in the renewal count. + +== Marking Items Lost and Claimed Returned == + +=== Lost Items === +1) To mark items Lost, retrieve the patron record and click *Items Out*. + +2) Select the item. Click on *Actions* -> *Mark Lost (by Patron)*.
+ +image::media/mark_lost_web_client.png[] + +3) The lost item now displays as lost in the *Items Checked Out* section of the patron record. + +image::media/lost_section_web_client.png[] + +4) The lost item also adds to the count of *Lost* items in the patron summary on the left (or top) of the screen. + +image::media/patron_summary_checkouts_web_client.png[] + +[NOTE] +Lost Item Billing +======================== +- Marking an item Lost will automatically bill the patron the replacement cost of the item as recorded in the price field in the item record, and a processing fee as determined by your local policy. If the lost item has overdue charges, the overdue charges may be voided or retained based on local policy. +- A lost-then-returned item will disappear from the Items Out screen only when all bills linked to this particular circulation have been resolved. Bills may include replacement charges, processing fees, and manual charges added to the existing bills. +- The replacement fee and processing fee for lost-then-returned items may be voided if set by local policy. Overdue fines may be reinstated on lost-then-returned items if set by local policy. +======================== + +=== Refunds for Lost Items === + +If an item is returned after a lost bill has been paid and the library's policy is to void the replacement fee for lost-then-returned items, there will be a negative balance in the bill. A refund needs to be made to close the bill and the circulation record. Once the outstanding amount has been refunded, the bill and circulation record will be closed and the item will disappear from the Items Out screen. + +If you need to balance a bill with a negative amount, you need to add two dummy bills to the existing bills. The first one can be of any amount (e.g. $0.01), while the second should be of the absolute value of the negative amount. Then you need to void the first dummy bill. 
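The dummy-bill arithmetic can be sketched as follows (a minimal illustration; the -$25.00 starting balance is a hypothetical figure, not a value from Evergreen):

```python
# Hypothetical worked example of the dummy-bill workaround for a
# negative (credit) balance on a lost-then-returned item.

balance = -25.00  # balance after the replacement fee is voided

first_dummy = 0.01            # first dummy bill: any small amount
second_dummy = abs(balance)   # second dummy bill: absolute value of the balance

balance += first_dummy + second_dummy  # add both dummy bills
balance -= first_dummy                 # void the first dummy bill

# The balance is now zero, allowing Evergreen to close the bill
# and the circulation record.
print(f"{balance:.2f}")
```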
The reason for using a dummy bill is that Evergreen will check and close the circulation record only when payment is applied or bills are voided. + +=== Claimed Returned Items === + +1) To mark an item Claimed Returned, retrieve the patron record and go to the *Items Out* screen. + +2) Select the item, then select *Actions* -> *Mark Claimed Returned* from the dropdown menu. + +image::media/mark_claims_returned_web_client.png[] + +3) Enter the date in yyyy-mm-dd format and click *Submit*. + +image::media/claimed_date_web_client.png[] + +4) The Claimed Returned item now displays in the *Other/Special Circulations* section of the patron record. + +image::media/cr_section_web_client.png[] + +5) The Claimed Returned item adds to the count of items that are Claimed Returned in the patron summary on the left (or top) of the screen. It also adds to the total *Other/Special Circulations* that is displayed when editing the patron's record. + +image::media/patron_summary_checkouts_web_client.png[] + +[NOTE] +More on Claimed Returned Items +==================================== +- The date entered for a Claimed Returned item establishes the fine. If the date given has passed, bills will be adjusted accordingly. +- When a Claimed Returned item is returned, if there is an outstanding bill associated with it, the item will not disappear from the *Items Out* screen. It will disappear when the outstanding bills are resolved. +- When an item is marked Claimed Returned, the value in the *Claims-returned Count* field in the patron record is automatically increased. Staff can manually adjust this count by editing the patron record. +==================================== + +== In-house Use (F6) == +1) To record in-house use, select *Circulation* -> *Record In-House Use*, click *Check Out* -> *Record In-House Use* on the circulation toolbar, or press *F6*.
+ +image::media/record_in_house_action_web_client.png[] + +2) To record in-house use for cataloged items, enter the number of uses, scan or type the barcode, and click *Submit*. + +image::media/in_house_use_web_client.png[] + +[NOTE] +==================================== +There are two independent library settings that allow copy alerts to display when items are scanned in In-house Use: +If *Display copy alert for in-house-use* is set to true, the copy's alert message (if it has one) will appear when recording in-house use for the copy. +If *Display copy location check in alert for in-house-use* is set to true, an alert message will indicate that the item needs to be routed to its location if the location has its check in alert set to true. +==================================== + +3) To record in-house use for non-cataloged items, enter the number of uses, choose the non-cataloged type from the drop-down menu, and click *Submit*. + +image::media/in_house_use_non_cat.png[] + +[NOTE] +In-house use statistics are kept separately from circulation statistics. The in-house use count of cataloged items is not included in the items' total use count. + +[[itemstatus_web_client]] +== Item Status == + +The Item Status screen is very useful. Many actions can be taken by either circulation staff or catalogers on this screen. Here we will cover some circulation-related functions, namely checking item status, viewing past circulations, inserting item alert messages, and marking items missing or damaged. + +=== Checking item status === + +1) To check the status of an item, select *Search* -> *Search for copies by Barcode*. + +image::media/item_status_menu_web_client.png[] + +2) Scan the barcode or type it and click *Submit*. The current status of the item is displayed along with selected other fields. You can use the column picker to select more fields to view.
+ +image::media/item_status_barcode_web_client.png[] + +3) Click the *Detail View* button and the item summary and circulation history will be displayed. + +image::media/item_status_altview_web_client.png[] + +4) Click *List View* to go back. + +image::media/item_status_list_view_web_client.png[] + +[NOTE] +If the item's status is "Available", the displayed due date refers to the previous circulation's due date. + +[TIP] +Upload From File allows you to load multiple items saved in a file on your local computer. The file contains a list of the barcodes in text format. To ensure smooth uploading and further processing of the items, it is recommended that the list contain no more than 100 items. + +=== Viewing past circulations === +1) To view past circulations, retrieve the item on the *Item Status* screen as described above. + +2) Select *Detail View*. + +image::media/last_few_circs_action_web_client.png[] + +3) Choose *Recent Circ History*. The item’s recent circulation history is displayed. + +image::media/last_few_circs_display_web_client.png[] + +4) To retrieve the patron(s) of the last circulations, click on the name of the patron. The patron record will be displayed. + +[TIP] +The number of items that display in the circulation history can be set in *Administration* -> *Local Administration* -> *Library Settings Editor*. + +[NOTE] +You can also retrieve past circulations on the patron's Items Out screen and from the Check In screen. + +=== Marking items damaged or missing and other functions === +1) To mark items damaged or missing, retrieve the item on the *Item Status* screen. + +2) Select the item. Click on *Actions for Selected Items* -> *Mark Item Damaged* or *Mark Item Missing*. + +// image::media/mark_missing_damaged_web_client.png[] + +[NOTE] +Depending on the library's policy, when marking an item damaged, bills (cost and/or processing fee) may be inserted into the last borrower's account.
+ +3) Following the above procedure, you can check in and renew items by using the *Check in Items* and *Renew Items* options on the dropdown menu. + +=== Item alerts === + +The *Edit Item Attributes* function on the *Actions for Selected Items* dropdown list allows you to edit item records. Here, we will show you how to insert item alert messages with this function. See cataloging instructions for more information on item editing. +1) Retrieve the record on the *Item Status* screen. + +2) Once the item is displayed, highlight it and select *Actions for Selected Items* -> *Edit Item Attributes*. + +3) The item record is displayed in the *Copy Editor*. + +//image::media/copy_edit_alert_web_client.png[] + +4) Click *Alert Message* in the *Miscellaneous* column. The background color of the box changes. Type in the message, then click *Apply*. + +//image::media/copy_alert_message_web_client.png[] + +5) Click *Modify Copies*, then confirm the action. + + +== Long Overdue Items == + +*Items Marked Long Overdue* + +Once an item has been overdue for a configurable amount of time, Evergreen will mark the item long overdue in the borrowing patron’s account. This will be done automatically through a Notification/Action Trigger. When the item is marked long overdue, several actions will take place: + +. The item will go into the status of “Long Overdue” + +. The accrual of overdue fines will be stopped + +Optionally the patron can be billed for the item price, a long overdue +processing fee, and any overdue fines can be voided from the account. Patrons +can also be sent a notification that the item was marked long overdue. And +long-overdue items can be included on the "Items Checked Out" or "Other/Special +Circulations" tabs of the "Items Out" view of a patron's record. These are all +controlled by <<longoverdue_library_settings,library settings>>.
+ +image::media/long_overdue1.png[Patron Account-Long Overdue] + + +*Checking in a Long Overdue item* + +If an item that has been marked long overdue is checked in, an alert will appear on the screen informing the staff member that the item was long overdue. Once checked in, the item will go into the status of “In process”. Optionally, the item price and long overdue processing fee can be voided and overdue fines can be reinstated on the patron’s account. If the item is checked in at a library other than its home library, a library setting controls whether the item can immediately fill a hold or circulate, or if it needs to be sent to its home library for processing. + +image::media/long_overdue2.png[Long Overdue Checkin] + +*Notification/Action Triggers* + +Evergreen has two sample Notification/Action Triggers that are related to marking items long overdue. The sample triggers are configured for 6 months. These triggers can be configured for any amount of time according to library policy and will need to be activated for use. + +* Sample Triggers + +** 6 Month Auto Mark Long-Overdue—will mark an item long overdue after the configured period of time + +** 6 Month Long Overdue Notice—will send patron notification that an item has been marked long overdue on their account + +[[longoverdue_library_settings]] +*Library Settings* + +The following Library Settings enable you to set preferences related to long overdue items: + +* *Circulation: Long-Overdue Check-In Interval Uses Last Activity Date* —Use the + long-overdue last-activity date instead of the due_date to determine whether + the item has been checked out too long to perform long-overdue check-in + processing. If set, the system will first check the last payment time, + followed by the last billing time, followed by the due date. See also the + "Long-Overdue Max Return Interval" setting. 
+ +* *Circulation: Long-Overdue Items Usable on Checkin* —Long-overdue items are usable on checkin instead of going "home" first + +* *Circulation: Long-Overdue Max Return Interval* —Long-overdue check-in processing (voiding fees, re-instating overdues, etc.) will not take place for items that have been overdue for (or have last activity older than) this amount of time + +* *Circulation: Restore Overdues on Long-Overdue Item Return* + +* *Circulation: Void Long-Overdue item Billing When Returned* + +* *Circulation: Void Processing Fee on Long-Overdue Item Return* + +* *Finances: Leave transaction open when long overdue balance equals zero* —Leave transaction open when long-overdue balance equals zero. This leaves the lost copy on the patron record when it is paid + +* *Finances: Long-Overdue Materials Processing Fee* + +* *Finances: Void Overdue Fines When Items are Marked Long-Overdue* + +* *GUI: Items Out Long-Overdue display setting* + +[TIP] +Learn more about these settings in the chapter about the +Library Settings Editor. + +*Permissions to use this Feature* + +The following permissions are related to this feature: + +* COPY_STATUS_LONG_OVERDUE.override + +** Allows the user to check-in long-overdue items thus removing the long-overdue status on the item + + + diff --git a/docs/modules/circulation/pages/circulation_patron_records_web_client.adoc b/docs/modules/circulation/pages/circulation_patron_records_web_client.adoc new file mode 100644 index 0000000000..693d180199 --- /dev/null +++ b/docs/modules/circulation/pages/circulation_patron_records_web_client.adoc @@ -0,0 +1,643 @@ += Circulation - Patron Record = +:toc: + +[[searching_patrons]] +== Searching Patrons == + +indexterm:[patrons, searching for] + +To search for a patron, select _Search -> Search for Patrons_ from the menu bar. + +The Patron Search screen will display. 
It will contain options to search on the +following fields: + +* Last Name +* First Name +* Middle Name + +image::media/circulation_patron_records-1a_web_client.png[circulation_patron_records 1a] + + +Next to the _Clear Form_ button there is a button with an arrow pointing down that will display the following additional search fields: + +* Barcode +* Alias +* Username +* Email +* Identification +* Database ID +* Phone +* Street 1 +* Street 2 +* City +* State +* Postal Code +* Profile Group +* Home Library +* DOB (date of birth) year +* DOB month +* DOB day + +To include patrons marked ``inactive'', click on the _Include Inactive?_ checkbox. + + +image::media/circulation_patron_records-1b_web_client.png[circulation_patron_records 1b] + +.Tips for searching +[TIP] +=================== +* Search one field or combine fields for more precise results. +* Truncate search terms for more search results. +* Search ignores diacritics and punctuation such as apostrophes, hyphens and commas. +* Searching by Date of Birth: Year searches are "contains" searches. E.g. year + "15" matches 2015, 1915, 1599, etc. For exact matches use the full 4-digit + year. Day and month values are exact matches. E.g. month "1" (or "01") matches + January, "12" matches December. +=================== + +Once you have located the desired patron, click on the entry row for this patron in +the results screen. A summary for this patron will display on the left-hand side. + +image::media/circulation_patron_records-2_web_client.png[circulation_patron_records 2] + +The _Patron Search_ button on the upper right may be used to resume searching for patrons. + +== Retrieve Recent Patrons == + +indexterm:[patrons, retrieving recent] + +=== Setting up Retrieve Recent Patrons === + +* This feature must be configured in the _Library Settings Editor_ +(_Administration -> Local Administration -> Library Settings Editor_).
The +library setting is called "Number of Retrievable Recent Patrons" and is located +in the Circulation settings group. +** A value of zero (0) means no recent patrons can be retrieved. +** A value greater than 1 means staff will be able to retrieve multiple recent +patrons via a new _Circulation -> Retrieve Recent Patrons_ menu entry. +** The default value is 1 for backwards compatibility. (The _Circulation -> +Retrieve Last Patron_ menu entry will be available.) + +=== Retrieving Recent Patrons === +* Once the library setting has been configured to a number greater than 1, the +option Retrieve Recent Patrons will appear below the Retrieve Last patron +option in the Circulation drop-down from the Menu Bar (_Circulation -> +Retrieve Recent Patrons_). + +* When selected, a grid will appear listing patrons accessed by that workstation +in the current session. The length of the list will be limited by the value +configured in the _Library Settings Editor_. If no patrons have been accessed, +the grid will display "No Items To Display." + + +== Registering New Patrons == + +indexterm:[patrons, registering] + +To register a new patron, select _Circulation -> Register Patron_ from the menu bar. The Patron +Registration form will display. + +image::media/circulation_patron_records-4.JPG[Patron registration form] + +Mandatory fields display in yellow. + +image::media/circulation_patron_records-5.JPG[circulation_patron_records 5] + +The _Show: Required Fields_ and _Show: Suggested Fields_ links may be used to limit +the options on this page. + +image::media/circulation_patron_records-6.JPG[circulation_patron_records 6] + +When finished entering the necessary information, select _Save_ to save the new +patron record or _Save & Clone_ to register a patron with the same address. +When _Save & Clone_ is selected, the address information is copied into the +resulting patron registration screen. It is linked to the original patron. 
+Address information may only be edited through the original record. + +image::media/circulation_patron_records-8.JPG[circulation_patron_records 8] + +[TIP] +============================================================================ +* Requested fields may be configured in the _Library Settings Editor_ +(_Administration -> Local Administration -> Library Settings Editor_). +* Statistical categories may be created for information tracked by your library +that is not in the default patron record. +* These may be configured in the _Statistical Categories Editor_ +(_Administration -> Local Administration -> Statistical Categories Editor_). +* Staff accounts may also function as patron accounts. +* You must select a _Main (Profile) Permission Group_ before the _Update Expire +Date_ button will work, since the permission group determines the expiration date. +============================================================================ + +=== Email field === + +indexterm:[patrons,email addresses] +indexterm:[email] + +It's possible for administrators to set up the email field to allow or disallow +multiple email addresses for a single patron (usually separated by a comma). +If you'd like to make changes to whether multiple email addresses +are allowed here or not, ask your system administrator to change the +`ui.patron.edit.au.email.regex` library setting. + + +== Patron Self-Registration == +*Abstract* + +Patron Self-Registration allows patrons to initiate registration for a library account through the OPAC. Patrons can fill out a web-based form with basic information that will be stored as a “pending patron” in Evergreen. Library staff can review pending patrons in the staff-client and use the pre-loaded account information to create a full patron account. Pending patron accounts that are not approved within a configurable amount of time will be automatically deleted. + +*Patron Self-Registration* + +. In the OPAC, click on the link to *Request Library Card* + +. 
Fill out the self-registration form to request a library card, and click *Submit Registration*. + +. Patrons will see a confirmation message: “Registration successful! Please see library staff to complete your registration.” + +image::media/patron_self_registration2.jpg[Patron Self-Registration form] + +*Managing Pending Patrons* + +. In the staff client select *Circulation* -> *Pending Patrons*. + +. Select the patron you would like to review. In this screen you have the option to *Load* the pending patron information to create a permanent library account. + +. To create a permanent library account for the patron, click on the patron’s row, then click on the *Load Patron* button at the top of the screen. This will load the patron self-registration information into the main *Patron Registration* form. + +. Fill in the necessary patron information for your library, and click *Save* to create the permanent patron account. + + +[[updating_patron_information]] +== Updating Patron Information == + +indexterm:[patrons, updating] + +Retrieve the patron record as described in the section +<<searching_patrons,Searching Patrons>>. + +Click on _Edit_ from the options that display at the top of the patron record. + +image::media/circulation_patron_records-9_web_client.png[Patron edit with summary display] + +Edit information as required. When finished, select _Save_. + +After selecting _Save_, the page will refresh. The edited information will be +reflected in the patron summary pane. + +[TIP] +======= +* To quickly renew an expired patron, click the _Update Expire Date_ button. +You will need a _Main (Profile) Permission Group_ selected for this to work, +since the permission group determines the expiration date.
+======= + + +== Renewing Library Cards == + +indexterm:[library cards, renewing] + +Expired patron accounts when initially retrieved – an alert +stating that the ``Patron account is EXPIRED.'' + +image::media/circulation_patron_records-11_web_client.png[circulation_patron_records 11] + +Open the patron record in edit mode as described in the section +<>. + +Navigate to the information field labeled _Privilege Expiration Date_. Enter a +new date in this box. Or click the calendar icon, and a calendar widget +will display to help you easily navigate to the desired date. + +image::media/circulation_patron_records-12.JPG[circulation_patron_records 12] + +Select the date using the calendar widget or key the date in manually. Click +the _Save_ button. The screen will refresh and the ``expired'' alerts on the +account will be removed. + + +== Lost Library Cards == + +indexterm:[library cards, replacing] + +Retrieve the patron record as described in the section +<>. + +Open the patron record in edit mode as described in the section +<>. + +Next to the _Barcode_ field, select the _Replace Barcode_ button. + +image::media/circulation_patron_records_13.JPG[circulation_patron_records 13] + +This will clear the barcode field. Enter a new barcode and _Save_ the record. +The screen will refresh and the new barcode will display in the patron summary +pane. + +If a patron’s barcode is mistakenly replaced, the old barcode may be reinstated. +Retrieve the patron record as described in the section +<>. Open the patron record in +edit mode as described in the section <>. + +Select the _See All_ button next to the _Replace Barcode_ button. This will +display the current and past barcodes associated with this account. + +image::media/circulation_patron_records_14.JPG[circulation_patron_records 14] + +Check the box(es) for all barcodes that should be ``active'' for the patron. An +``active'' barcode may be used for circulation transactions. 
A patron may have +more than one ``active'' barcode. Only one barcode may be designated +``primary.'' The ``primary'' barcode displays in the patron’s summary +information in the _Library Card_ field. + +Once you have modified the patron barcode(s), _Save_ the patron record. If you +modified the ``primary'' barcode, the new primary barcode will display in the +patron summary screen. + +== Resetting Patron's Password == + +indexterm:[patrons, passwords] + +A patron’s password may be reset from the OPAC or through the staff client. To +reset the password from the staff client, retrieve the patron record as +described in the section <<searching_patrons,Searching Patrons>>. + +Open the patron record in edit mode as described in the section +<<updating_patron_information,Updating Patron Information>>. + +Select the _Generate Password_ button next to the _Password_ field. + +image::media/circulation_patron_records_15.JPG[circulation_patron_records 15] + +NOTE: The existing password is not displayed in patron records for security +reasons. + +A new number will populate the _Password_ text box. +Make note of the new password and _Save_ the patron record. The screen will +refresh and the new password will be suppressed from view. + + +== Barring a Patron == + +indexterm:[patrons, barring] + +A patron may be barred from circulation activities. To bar a patron, retrieve +the patron record as described in the section +<<searching_patrons,Searching Patrons>>. + +Open the patron record in edit mode as described in the section +<<updating_patron_information,Updating Patron Information>>. + +Check the box for _Barred_ in the patron account. + +image::media/circulation_patron_records-16.JPG[circulation_patron_records 16] + +_Save_ the user. The screen will refresh. + +NOTE: Barring a patron from one library bars that patron from all consortium +member libraries. + +To unbar a patron, uncheck the Barred checkbox. + + +== Barred vs. Blocked == + +indexterm:[patrons, barring] + +*Barred*: Stops patrons from using their library cards; alerts the staff that +the patron is banned/barred from the library.
The ``check-out'' functionality is +disabled for barred patrons (NO option to override – the checkout window is +unusable and the bar must be removed from the account before the patron is able +to check out items).  These patrons may still log in to the OPAC to view their +accounts. + +indexterm:[patrons, blocking] + +*Blocked*: Often, these are system-generated blocks on patron accounts. + +Some examples: + +* Patron exceeds fine threshold +* Patron exceeds maximum checked-out item threshold + +A notice appears when a staff person tries to check out an item to a blocked +patron, but staff may be given permissions to override blocks. + + +== Staff-Generated Messages == + +[[staff_generated_messages]] +indexterm:[patrons, messages] + +There are several types of messages available for staff to leave notes on patron records. + +*Patron Notes*: These notes are added via _Other_ -> _Notes_ in the patron record. These notes can be viewable by staff only or shared with the patron. Staff initials can be required. (See the section <> for more.) + +*Patron Alerts*: This type of alert is added via the _Edit_ button in the patron record. There is currently no way to require staff initials for this type of alert. (See the section <> for more.) + +*Staff-Generated Penalties/Messages*: These messages are added via the _Messages_ button in the patron record. They can be a note, alert, or block. Staff initials can be required. (See the section <> for more.) + +== Patron Alerts == + +[[circulation_patron_alerts]] +indexterm:[patrons, Alerts] + +When an account has an alert on it, a Stop sign is displayed when the record is +retrieved. + +image::media/circulation_patron_records-18_web_client.png[circulation_patron_records 18] + +Navigating to an area of the patron record using the navigation buttons at the +top of the record (for example, Edit or Bills) will clear the message from view. + +If you wish to view these alerts after they are cleared from view, they may be +retrieved.
Use the Other menu to select _Display Alerts and Messages_. + +image::media/circulation_patron_records-19_web_client.png[circulation_patron_records 19] + +There are two types of Patron Alerts: + +*System-generated alerts*: Once the cause is resolved (e.g. patron's account has +been renewed), the message will disappear automatically. + +*Staff-generated alerts*: Must be added and removed manually. + +To add an alert to a patron account, retrieve the patron record as described +in the section <>. + +Open the patron record in edit mode as described in the section +<>. + +Enter the alert text in the _Alert Message_ field. + +image::media/circulation_patron_records-20.png[circulation_patron_records 20] + +_Save_ the record. The screen will refresh and the alert will display. + +To remove the alert, retrieve the patron record as described in the section +<>. + +Open the patron record in edit mode as described in the section +<>. + +Delete the alert text in the _Alert Message_ field. + +_Save_ the record. + +The screen will refresh and the indicators for the alert will be removed from +the account. + +== Patron Notes == + +[[circulation_patron_notes]] +indexterm:[patrons, Notes] + +Notes are strictly communicative and may be made visible to the patron via their +account on the OPAC. These notes display on the account summary +screen in the OPAC. + +image::media/circulation_patron_records-23_web_client.png[circulation_patron_records 23] + +To insert or remove a note, retrieve the patron record as described in the +section <>. + +Open the patron record in edit mode as described in the section +<>. + +Use the Other menu to navigate to _Notes_. + +image::media/circulation_patron_records-24_web_client.png[circulation_patron_records 24] + +Select the _Add New Note_ button. A _Create a new note_ window displays.
+ +[TIP] +================================================ +Your system administrator can add a box in the _Add Note_ window for staff initials and +require those initials to be entered. They can do so using the "Require staff initials..." +settings in the Library Settings Editor. +================================================ + +Enter note information. + +Select the check box for _Patron Visible_ to display the note in the OPAC. + +image::media/circulation_patron_records-25_web_client.png[circulation_patron_records 25] + +Select _OK_ to save the note to the patron account. + +To delete a note, go to _Other -> Notes_ and use the _Delete_ button +on the right of each note. + +image::media/circulation_patron_records-26_web_client.png[circulation_patron_records 26] + +== Staff-Generated Penalties/Messages == + +[[staff_generated_penalties_web_client]] +To access this feature, use the _Messages_ button in the patron record. + +image::media/staff-penalties-1_web_client.png[Messages screen] + +=== Add a Message === + +Click *Apply Penalty/Message* to begin the process of adding a message to the patron. + +image::media/staff-penalties-2_web_client.png[Apply Penalty Dialog Box] + +There are three options: Notes, Alerts, Blocks + +* *Note*: This will create a non-blocking, non-alerting note visible to staff. Staff can view the message by clicking the _Messages_ button on the patron record. (Notes created in this fashion will not display via _Other_ -> _Notes_, and cannot be shared with the patron. See the <> section for notes which can be shared with the patron.) + +* *Alert*: This will create a non-blocking alert which appears when the patron record is first retrieved. The alert will cause the patron name to display in red, rather than black, text. Alerts may be viewed by clicking the _Messages_ button on the patron record or by selecting _Other_ -> _Display Alerts and Messages_. 
+ +* *Block*: This will create a blocking alert which appears when the patron record is first retrieved, and which behaves much as the non-blocking alert described previously. The patron will also be blocked from circulation, holds, and renewals until the block is cleared by staff. + +After selecting the type of message to create, enter the message body into the box. If Staff Initials are required, they must be entered into the _Initials_ box before the message can be added. Otherwise, fill in the optional _Initials_ box and click *OK*. + +The message should now be visible in the _Staff-Generated Penalties/Messages_ list. If the message is a blocking or non-blocking alert, it will also display immediately when the patron record is retrieved. + +image::media/staff-penalties-3_web_client.png[Messages on a record] + +=== Modify a Message === + +Messages can be edited by staff after they are created. + +image::media/staff-penalties-4_web_client.png[Actions menu] + +Click to select the message to be modified, then click _Actions_ -> _Modify Penalty/Message_. This menu can also be accessed by right-clicking in the message area. + +image::media/staff-penalties-5_web_client.png[Modify penalty dialog box] + +To change the type of message, click on *Note*, *Alert*, or *Block* to select the new type. Edit or add new text in the message body. Enter Staff Initials into the _Initials_ box (if required) and click *OK* to submit the alterations. + +image::media/staff-penalties-6_web_client.png[Modified message in the list] + +=== Archive a Message === + +Messages which are no longer current can be archived by staff. This action will remove any alerts or blocks associated with the message, but retains the information contained there for future reference. + +image::media/staff-penalties-4_web_client.png[Actions menu] + +Click to select the message to be archived, then click _Actions_ -> _Archive Penalty/Message_. This menu can also be accessed by right-clicking in the message area.
+ +image::media/staff-penalties-7_web_client.png[Archived messages] + +Archived messages will be shown in the section labelled _Archived Penalties/Messages_. To view messages, click *Retrieve Archived Penalties*. By default, messages archived within the past year will be retrieved. To retrieve messages from earlier dates, change the start date to the desired date before clicking *Retrieve Archived Penalties*. + +=== Remove a Message === + +Messages which are no longer current can be removed by staff. This action removes any alerts or blocks associated with the message and deletes the information from the system. + +image::media/staff-penalties-4_web_client.png[Actions menu] + +Click to select the message to be removed, then click _Actions_ -> _Remove Penalty/Message_. This menu can also be accessed by right-clicking in the message area. + + +== User Buckets == + +User Buckets allow staff to batch delete and make batch modifications to user accounts in Evergreen. Batch modifications can be made to selected fields in the patron account: + +* Home Library +* Profile Group +* Network Access Level +* Barred flag +* Active flag +* Juvenile flag +* Privilege Expiration Date +* Statistical Categories + +Batch modifications and deletions can be rolled back or reversed, with the exception of batch changes to statistical categories. Batch changes made in User Buckets will not activate any Action/Trigger event definitions that would normally be activated when editing an individual account. + +User accounts can be added to User Buckets by scanning individual user barcodes or by uploading a file of user barcodes directly in the User Bucket interface. They can also be added to a User Bucket from the Patron Search screen. Batch changes and batch edit sets are tied to the User Bucket itself, not to the login of the bucket owner. + +=== Create a User Bucket === + +*To add users to a bucket via the Patron Search screen:* + +. Go to *Search->Search for Patrons*. +. 
Enter your search and select the users you want to add to the user bucket by checking the box next to each user row. You can also hold down the CTRL or SHIFT key on your keyboard and select multiple users. +. Click *Add to Bucket* and select an existing bucket from the drop down menu or click *New Bucket* to create a new user bucket. +.. If creating a new user bucket, a dialog box called _Create Bucket_ will appear where you can enter a bucket _Name_ and _Description_ and indicate if the bucket is _Staff Shareable?_. Click *Create Bucket*. +. After adding users to a bucket, an update will appear in the bottom right-hand corner of the screen that says _"Successfully added # users to bucket [Name]"_. + +image::media/userbucket1.PNG[] + +image::media/userbucket2.PNG[] + +*To add users to a bucket by scanning user barcodes in the User Bucket interface:* + +. Go to *Circulation->User Buckets* and select the *Pending Users* tab at the top of the screen. +. Click on *Buckets* and select an existing bucket from the drop down menu or click *New Bucket* to create a new user bucket. +.. If creating a new user bucket, a dialog box called _Create Bucket_ will appear where you can enter a bucket _Name_ and _Description_ and indicate if the bucket is _Staff Shareable?_. Click *Create Bucket*. +.. After selecting or creating a bucket, the Name, Description, number of items, and creation date of the bucket will appear above the _Scan Card_ field. +. Scan in the barcodes of the users that you want to add to the selected bucket into the _Scan Card_ field. Each user account will be added to the Pending Users tab. Hit ENTER on your keyboard after manually typing in a barcode to add it to the list of Pending Users. +. Select the user accounts that you want to add to the bucket by checking the box next to each user row or by using the CTRL or SHIFT key on your keyboard to select multiple users. +. 
Go to *Actions->Add To Bucket* or right-click on a selected user account to view the _Actions_ menu and select *Add To Bucket*. The user accounts will move to the Bucket View tab and are now in the selected User Bucket. + +image::media/userbucket3.PNG[] + +*To add users to a bucket by uploading a file of user barcodes:* + +. Go to *Circulation->User Buckets* and select the *Pending Users* tab at the top of the screen. +. Click on *Buckets* and select an existing bucket from the drop down menu or click *New Bucket* to create a new user bucket. +.. If creating a new user bucket, a dialog box called _Create Bucket_ will appear where you can enter a bucket _Name_ and _Description_ and indicate if the bucket is _Staff Shareable?_. Click *Create Bucket*. +.. After selecting or creating a bucket, the Name, Description, number of items, and creation date of the bucket will appear above the Scan Card field. +. In the Pending Users tab, click *Choose File* and select the file of barcodes to be uploaded. +.. The file that is uploaded must be a .txt file that contains a single barcode per row. +. The user accounts will automatically appear in the list of Pending Users. +. Select the user accounts that you want to add to the bucket by checking the box next to each user row or by using the CTRL or SHIFT key on your keyboard to select multiple users. +. Go to *Actions->Add To Bucket* or right-click on a selected user account to view the _Actions_ menu and select *Add To Bucket*. The user accounts will move to the Bucket View tab and are now in the selected User Bucket. + +=== Batch Edit All Users === + +To batch edit all users in a user bucket: + +. Go to *Circulation->User Buckets* and select the *Bucket View* tab. +. Click *Buckets* and select the bucket you want to modify from the list of existing buckets. +.. After selecting a bucket, the Name, Description, number of items, and creation date of the bucket will appear at the top of the screen. +. 
Verify the list of users in the bucket and click *Batch edit all users*. A dialog box called _Update all users_ will appear where you can select the batch modifications to be made to the user accounts. +. Assign a _Name for edit set_. This name will allow staff to identify the batch edit for future verification or rollbacks. +. Set the values that you want to modify. The following fields can be modified in batch: + +* Home Library +* Profile Group +* Network Access Level +* Barred flag +* Active flag +* Juvenile flag +* Privilege Expiration Date + +. Click *Apply Changes*. The modification(s) will be applied in batch. + +image::media/userbucket4.PNG[] + +=== Batch Modify Statistical Categories === + +To batch modify statistical categories for all users in a bucket: + +. Go to *Circulation->User Buckets* and select the *Bucket View* tab. +. Click *Buckets* and select the bucket you want to modify from the list of existing buckets. +.. After selecting a bucket, the Name, Description, number of items, and creation date of the bucket will appear at the top of the screen. +. Verify the list of users in the bucket and click *Batch modify statistical categories*. A dialog box called _Update statistical categories_ will appear where you can select the batch modifications to be made to the user accounts. The existing patron statistical categories will be listed and staff can choose: +.. To leave the stat cat value unchanged in the patron accounts. +.. To select a new stat cat value for the patron accounts. +.. Check the box next to Remove to delete the current stat cat value from the patron accounts. +. Click *Apply Changes*. The stat cat modification(s) will be applied in batch. + +image::media/userbucket12.PNG[] + +=== Batch Delete Users === + +To batch delete users in a bucket: + +. Go to *Circulation->User Buckets* and select the *Bucket View* tab. +. Click on *Buckets* and select the bucket you want to modify from the list of existing buckets. +.. 
After selecting a bucket, the Name, Description, number of items, and creation date of the bucket will appear at the top of the screen. +. Verify the list of users in the bucket and click *Delete all users*. A dialog box called _Delete all users_ will appear. +. Assign a _Name for delete set_. This name will allow staff to identify the batch deletion for future verification or rollbacks. +. Click *Apply Changes*. All users in the bucket will be marked as deleted. + +NOTE: Batch deleting patrons from a user bucket does not use the Purge User functionality, but instead marks the users as deleted. + +image::media/userbucket7.PNG[] + +=== View Batch Changes === + +. The batch changes that have been made to User Buckets can be viewed by going to *Circulation->User Buckets* and selecting the *Bucket View* tab. +. Click *Buckets* to select an existing bucket. +. Click *View batch changes*. A dialog box will appear that lists the _Name_, date _Completed_, and date _Rolled back_ of any batch changes made to the bucket. There is also an option to _Delete_ a batch change. This will remove this batch change from the list of actions that can be rolled back. It will not delete or reverse the batch change. +. Click *OK* to close the dialog box. + +image::media/userbucket8.PNG[] + +=== Roll Back Batch Changes === + +. Batch Changes and Batch Deletions can be rolled back or reversed by going to *Circulation->User Buckets* and selecting the *Bucket View* tab. +. Click *Buckets* to select an existing bucket. +. Click *Roll back batch edit*. A dialog box will appear that contains a drop down menu that lists all batch edits that can be rolled back. Select the batch edit to roll back and click *Roll Back Changes*. The batch change will be reversed and the roll back is recorded under _View batch changes_. + +NOTE: Batch statistical category changes cannot be rolled back. 
+ +image::media/userbucket10.png[] + +image::media/userbucket9.PNG[] + +=== Sharing Buckets === + +If a User Bucket has been made Staff Shareable, it can be retrieved via bucket ID by another staff account. The ID for each bucket can be found at the end of the URL for the bucket. For example, in the screenshot below, the bucket ID is 32. + +image::media/userbucket11.PNG[] + +A shared bucket can be retrieved by going to *Circulation->User Buckets* and selecting the *Bucket View* tab. Next, click *Buckets* and select *Shared Bucket*. A dialog box called _Load Shared Bucket by Bucket ID_ will appear. Enter the ID of the bucket you wish to retrieve and click *Load Bucket*. The shared bucket will load in the Bucket View tab. + +=== Permissions === + +All permissions must be granted at the organizational unit that the workstation is registered to or higher and are checked against the users' Home Library when a batch modification or deletion is executed. + +Permissions for Batch Edits: + +* To batch edit a user bucket, staff accounts must have the VIEW_USER, UPDATE_USER, and CONTAINER_BATCH_UPDATE permissions for all users in the bucket. +* To make batch changes to Profile Group, staff accounts must have the appropriate group application permissions for the profile groups. +* To make batch changes to the Home Library, staff accounts must have the UPDATE_USER permission at both the old and new Home Library. +* To make batch changes to the Barred Flag, staff accounts must have the appropriate BAR_PATRON or UNBAR_PATRON permission. + +Permissions for Batch Deletion: + +* To batch delete users in a user bucket, staff accounts must have the UPDATE_USER and DELETE_USER permissions for all users in the bucket.
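The upload workflow described earlier in this section expects a plain-text file with exactly one barcode per line. As a quick illustration, such a file can be generated and sanity-checked from the shell (the barcodes and filename here are hypothetical):

```shell
# Build a plain-text barcode file, one barcode per line
# (these barcodes are invented for illustration).
printf '%s\n' 23000001234567 23000007654321 23000001112223 > barcodes.txt

# Sanity-check: every line should hold exactly one whitespace-free field.
awk 'NF != 1 { print "bad line " NR; exit 1 }' barcodes.txt && echo "barcodes.txt OK"
```

Any text editor works equally well; the only requirement the interface states is a .txt file with a single barcode per row.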
+ diff --git a/docs/modules/circulation/pages/introduction.adoc b/docs/modules/circulation/pages/introduction.adoc new file mode 100644 index 0000000000..1a39f02bed --- /dev/null +++ b/docs/modules/circulation/pages/introduction.adoc @@ -0,0 +1,5 @@ += Introduction = +:toc: +Use this section for understanding the circulation procedures in the Evergreen +system. + diff --git a/docs/modules/circulation/pages/offline_circ_webclient.adoc b/docs/modules/circulation/pages/offline_circ_webclient.adoc new file mode 100644 index 0000000000..8acc35b874 --- /dev/null +++ b/docs/modules/circulation/pages/offline_circ_webclient.adoc @@ -0,0 +1,210 @@ += Offline Circulation = +:toc: + +== Introduction == + +Evergreen's Offline Circulation interface is designed to log transactions during a network or server outage. Transactions can be uploaded and processed once connectivity is restored. + +Offline Circulation in the Web Staff Client relies on the use of web service workers to store information for offline use. Prior to using Offline Circulation you must have access to your production server and register your workstation on the computer and in the browser you intend to use. You must also log in from that browser at least once and visit *Search -> Search for Patrons*. Perform a search, select a user from the results, and open the *Patron Edit* interface. This will allow the Offline interface to collect the information it needs, such as workstation information and the patron registration form. + +The web service workers will refresh the cache every 24 hours under normal use. Offline Circulation information is stored via IndexedDB. + +== Using Offline Circulation == + +The Offline Circulation interface can be found by navigating to *Circulation -> Offline Circulation*. + +The permanent link for the Offline Circulation is *https:///eg/staff/offline-interface* and it is recommended that this link be bookmarked on staff workstations. 
This is the location for both entering transactions while offline as well as processing them later. You will see a slightly different version of this interface depending on whether or not you are logged in. + +If you are logged out, you will see the tab default to *Checkout* and the button on the top-right will read *Export Transactions*. + +image::media/offline_homepage_loggedout.png[Offline homepage logged out] + +If you are logged in, you will see an additional tab on the left for *Session Management* and this will be the default tab. The top-right button will read *Download Block List*. + +image::media/offline_homepage_loggedin.png[Offline homepage logged in] + +If you are logged in and attempt to click on any tab other than *Session Management*, you will see a warning alerting you that you are about to enter offline mode. + +image::media/offline_logout_warning.png[Logout warning] + +This warning is not network-aware and it will appear regardless of network connection state. You must be logged out to record offline transactions. If you see this warning and wish to record offline transactions, click *Proceed* in order to log out. + +== Checkout == + +To check out items in Offline Circulation: + +. Click the *Checkout* tab. +. If you wish to use Strict Barcode for patron and item barcodes, check the box labelled *Strict Barcode*. +. Enter a value in the *Due Date* field or select a date from the Calendar widget. You may also select an option from the *Offset Dropdown*. The date field entry will honor the format set in the Library Settings Editor. +. Scan the Patron Barcode in the box labelled *Patron Barcode*. +. Check out items: +.. For cataloged items, scan the item barcode in the box labelled *Item Barcode*. Each item barcode will appear on the right side of the screen, along with its due date and the patron barcode. 
If you are manually typing barcodes, you need to click the *Checkout* button or hit the *Enter* key on your keyboard after each Item Barcode entry in order to record the transaction. +.. For non-cataloged items, select a *Non-cataloged Type* from the dropdown and enter the number of items you wish to check out. Click *Checkout*. In the list to the right, the item barcode will appear blank since this item is unbarcoded. The due date and patron barcode will appear, however. +.. If you make an error in entry, click *Clear* to reset the Patron Barcode and Item Barcode fields. +. To print a receipt, check the box labelled *Print Receipt*. +. Click *Save Transactions* in the upper-right of the screen to complete the checkout. + +Note that *Save Transactions* will save any unsaved transactions across the Offline tabs (Checkout, Renew, In-House Use, and Checkin). + +In the screenshot below, the first two items in the right-hand list are regular checkout items. The third item is a non-cataloged item. + +image::media/offline_checkout.png[Offline checkout] + +A value entered in the Due Date field will take precedence over an existing value in the Offset Dropdown; however, if you change the Offset after setting the Due Date field, the Due Date field will update to reflect the Offset value. + +Due Date and Offset values are sticky between the Checkout and Renew tabs, and also sticky between transactions. Strict Barcode and Print Receipt are sticky among the Checkout, Renew, In-House Use, and Checkin tabs and are also sticky between transactions. + +Pre-cataloged item checkout is not available in Offline Circulation. Any pre-cataloged item checked out through Offline Circulation will result in an entry in the Exception List and will not successfully check out. Pre-cataloged items which are checked in through Offline Circulation will also result in an entry in the Exception List, but will successfully check in. + +== Renew == + +To renew an item, you must know the item's barcode number.
The patron's barcode is optional. + +To renew items in Offline Circulation: + +. Click the *Renew* tab. +. Ensure that the *Due Date* value is correct. +. _(Optional)_: Enter the patron's library card barcode in the *Patron Barcode* field by scanning or typing the barcode. +. For each item to be renewed, scan the item's barcode in the *Item Barcode* field. If you are typing the item barcode, click the *Renew* button or hit the *Enter* key on your keyboard after each item barcode. +. The item barcode, due date, and patron barcode (if entered) appear on the right side of the screen. +. To print a receipt, check the box labelled *Print Receipt*. +. Click *Save Transactions* in the upper-right of the screen to complete the renewal. + +image::media/offline_renew.png[Offline renewal] + +== In-House Use == + +To record in-house use transactions in *Offline Circulation*: + +. Click the *In-House Use* tab. +. Enter the number of uses to record for the item in the *Use Count* field. +. For each item to be recorded as in-house use, scan the item's barcode in the *Item Barcode* field. If you are typing the item barcode, click the *Record Use* button or hit the *Enter* key on your keyboard after each item barcode. +. The item barcode and use count will appear on the right side of the screen. +. To print a receipt, check the box labelled *Print Receipt*. +. Click *Save Transactions* in the upper-right of the screen to record the in-house use. The date of the in-house use is automatically recorded. + +image::media/offline_inhouse.png[Offline in house use] + +== Checkin == + +To check in items in Offline Circulation: + +. Click the *Checkin* tab. +. Ensure that the *Due Date* value is correct. It will default to today's date. +. For each item to be checked in, scan the item's barcode in the *Item Barcode* field. If you are typing the item barcode, click the *Checkin* button or hit the *Enter* key on your keyboard after each item barcode. +. 
To print a receipt, check the box labelled *Print Receipt*. +. Click *Save Transactions* in the upper-right of the screen when you are finished entering checkins. + +image::media/offline_checkin.png[Offline checkin] + +Note that existing pre-cataloged items can be checked in through the Offline interface, but they will generate an entry in the Exceptions list when offline transactions are uploaded and processed. + +Items targeted for holds will be captured for their holds when the offline transactions are uploaded and processed; however, there will be no indication in the Exceptions list about this unless the item is also transiting. + +== Patron Registration == + +Patron registration in Evergreen Offline Circulation records patron information for later upload. In the web staff client, the Patron Registration form in Offline is the same as the regular Patron Registration interface. + +image::media/offline_patron_registration.png[Patron registration] + +All fields in the normal Patron Registration interface are available for entry. Required fields are marked in yellow and adhere to Required Fields set in the *Library Settings Editor*. Patron Registration defaults also adhere to settings in the *Library Settings Editor*. Stat cats are not recognized by the Offline Interface, even if they are required. + +Enter patron information and click the *Save* button in the top-right of the Patron Registration interface. You may check out items to this patron right away, even if you are still in offline mode. + +== Managing Offline Transactions == + +[#offline_block_list] +=== Offline Block List === + +While logged in and still online, you may download an *Offline Block List*. This will locally store a list of all patrons with blocks at the time of the download. If this list is present, the Offline Circulation interface will check transactions against this list.
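Because the block list is stored locally, the check requires no network connection at transaction time. A simplified shell sketch of the idea (the file layout, barcodes, and penalty names are illustrative assumptions, not Evergreen's actual implementation, which stores its data in the browser via IndexedDB):

```shell
# Illustrative sketch only: the downloaded block list as a local lookup file.
# (The file layout, barcodes, and penalty names below are invented for
# illustration; Evergreen's real interface persists this data via IndexedDB.)
printf '%s\n' \
  '23000001234567 PATRON_EXCEEDS_FINES' \
  '23000007654321 PATRON_EXCEEDS_CHECKOUT_COUNT' > blocklist.txt

# Local check performed before recording an offline checkout or renewal.
check_patron() {
  if grep -q "^$1 " blocklist.txt; then
    echo "BLOCKED: staff must Allow (override) or Reject"
  else
    echo "OK: proceed with checkout"
  fi
}

check_patron 23000001234567   # on the list: the interface prompts staff
check_patron 23000009999999   # not on the list: transaction proceeds
```

The trade-off is staleness: the lookup only knows about blocks that existed when the list was downloaded, which is why the list should be refreshed before going offline.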
+ +To download the block list, navigate to *Circulation -> Offline Circulation* and click the *Download Block List* button in the top-right of the screen. + +If you attempt a checkout or a renewal for a patron on the block list, you will get a modal informing you that the patron has penalties. Click the *Allow* button to override this and proceed with the transaction. Click the *Reject* button to cancel the checkout or renewal. + +image::media/offline_patron_blocked.png[Patron blocked modal] + +=== Exporting Offline Transactions === + +If you anticipate a multi-day closing or if you plan to process your offline transactions at a different workstation, you will want to export your offline transactions. + +To export transactions while you are offline, navigate to *Circulation -> Offline Circulation* and click *Export Transactions* in the top-right of the screen. This will save a file entitled pending.xacts to your browser's default download location. If you will be processing these transactions on another workstation, move this file to an external device like a thumb drive. + +To export transactions while you are logged in, navigate to *Circulation -> Offline Circulation* and click on the *Session Management* tab. Click on the *Export Transactions* button to generate the pending.xacts file as above. If you wish, you can at this point click *Clear Transactions* to clear the list of pending transactions. + +[#processing_offline_transactions] +=== Processing Offline Transactions === + +Once connectivity is restored, navigate back to your *Evergreen Login Page*. You will see a message telling you that there are unprocessed Offline Transactions waiting for upload. + +image::media/offline_unprocessed.png[Login alert about unprocessed transactions] + +Sign in and navigate to *Circulation -> Offline Circulation*. Since you are logged in, you will now see a *Session Management* tab to the left of the Register Patron tab. 
The Session Management tab includes *Pending Transactions* and *Offline Sessions*. + +In the *Pending Transactions* tab you will see a list of all transactions recorded in that browser. + +image::media/offline_pending_xacts.png[Offline pending transactions] + +If you click *Clear Transactions*, you will be prompted with a warning. + +image::media/offline_clear_pending.png[Warning to clear offline transactions] + +If you are processing transactions right away and from the same browser you recorded them in, follow the steps below: + +. Click on the *Offline Sessions* tab and then on the *Create Session* button. +. Enter a descriptive name for your session in the modal and click *OK/Continue* to proceed. You will see your new session at the top of the *Session List*. The Session List may be sorted ascending or descending by clicking on one of the following column headers: *Organization*, *Created By*, *Description*, *Date Created*, or *Date Completed*. The default sort is descending by Date Created. ++ +image::media/offline_session_list.png[Offline session list] ++ +. Click *Upload* to upload everything listed in the *Pending Transactions* tab. +. Once all transactions are uploaded, the *Upload Count* column will update to show the number of uploaded transactions. +. Click *Process* to process the offline transactions. Click *Refresh* to see the processing progress. Once all transactions are processed the *Date Completed* column will be updated. ++ +image::media/offline_processing_complete.png[Offline processing complete] ++ +. Scroll to the bottom of the screen to see if there are any entries in the xref:#exceptions[*Exception List*]. Some of these may require staff follow-up. + +=== Uploading Previously Exported Transactions === + +If you previously exported your offline transactions, you can upload them for processing. + +To import transactions: + +. Log in to the staff client via your *Login Page*. +. Navigate to *Circulation -> Offline Circulation*. +. 
Click on the *Session Management* tab. +. Click on the *Import Transactions* button. +. Navigate to the location on your computer where the pending.xacts file is saved. +. Select the file for importing. +. The *Pending Transactions* list will populate with your imported transactions. +. You may now proceed according to the instructions under xref:#processing_offline_transactions[Processing Offline Transactions]. + +[#exceptions] +==== Exceptions ==== + +Exceptions are problems that were encountered during processing. For example, a mis-scanned patron barcode, an open circulation, or an item that was not checked in before it was checked out to another patron would all be listed as exceptions. Transactions that cause exceptions may not be loaded into the Evergreen database. Staff should examine the exceptions and take any necessary action. + +These are a few notes about possible exceptions. It is not an all-inclusive list. + +* Checking out an item with the wrong date (i.e. the Offline Checkout date is +2 weeks and the item's regular circulation period is +1 week) does not cause an exception. +* Overdue books are not flagged as exceptions. +* Checking out a reference book or another item set to not circulate does not cause an exception. +* Checking out an item belonging to another library does not cause an exception. +* An item that is targeted for a patron hold and captured via offline checkin will not cause an exception unless that item also goes to an In Transit status. +* An item that is on hold for Patron A but checked out to Patron B will not cause an exception. Patron A's hold will be reset and will retarget the next time the hold targeter is run. In order to avoid this, it is recommended to not check out holds to other patrons. +* If you check out a book to a patron using a previous barcode for that patron, it will cause an exception and you will have to retrieve that patron while online and re-enter the item barcode in order to check out the item. 
+* The Offline Interface can recognize blocked, barred, and expired patrons if you have downloaded the Offline Block List in the browser you are using. You will get an error message indicating the patron status from within the Standalone Interface at check-out time. See the section on the xref:#offline_block_list[Offline Block List] for more information. + +image::media/offline_exceptions.png[Offline exception list] + +At the right side of each exception are buttons for *Item*, *Patron*, and *Debug*. Clicking the *Item* button will retrieve the associated item in a new browser window. Clicking on the *Patron* button will retrieve the associated patron in a new browser window. Clicking the *Debug* button will result in a modal with detailed debugging information. + +Common event names in the Exceptions List include: + +* +ROUTE-ITEM+ - Indicates the book should be routed to another branch or library system. You'll need to find the book and re-check it in while online to get the Transit Slip to print. +* +COPY_STATUS_LOST+ - Indicates a book previously marked as lost was found and checked in. You will need to find the book and re-check it in while online to correctly clear it from the patron's account. +* +CIRC_CLAIMS_RETURNED+ - Indicates a book previously marked as claimed-returned was found and checked in. You will need to find the book and re-check it in while online to correctly clear it from the patron's account. +* +ASSET_COPY_NOT_FOUND+ - Indicates the item barcode was mis-scanned/mis-typed. +* +ACTOR_CARD_NOT_FOUND+ - Indicates the patron's library barcode was mis-scanned, mis-typed, or nonexistent. +* +OPEN_CIRCULATION_EXISTS+ - Indicates a book was checked out that had never been checked in. +* +MAX_RENEWALS_REACHED+ - Indicates the item has already been renewed the maximum times allowed. 
Note that if the staff member processing the offline transaction set has the +MAX_RENEWALS_REACHED.override+ permission at the appropriate level, the system will automatically override the error and will allow the renewal. diff --git a/docs/modules/circulation/pages/self_check.adoc b/docs/modules/circulation/pages/self_check.adoc new file mode 100644 index 0000000000..853f22251e --- /dev/null +++ b/docs/modules/circulation/pages/self_check.adoc @@ -0,0 +1,93 @@ += Self checkout = +:toc: + +== Introduction == + +Evergreen includes a self check interface designed for libraries that simply +want to record item circulation without worrying about security mechanisms like +magnetic strips or RFID tags. + +== Initializing the self check == +The self check interface runs in a web browser. Before patrons can use the self +check station, a staff member must initialize the interface by logging in. + +. Open your self check interface page in a web browser. By default, the URL is + `https://[hostname]/eg/circ/selfcheck/main`, where _[hostname]_ + represents the host name of your Evergreen web server. +. Log in with a staff account with circulation permissions. + +image::media/self-check-admin-login.png[Self Check Admin Login] + +== Basic Check Out == + +. Patron scans their barcode. ++ +image::media/self_check_check_out_1.png[self check] ++ +. _Optional_ Patron enters their account password. ++ +image::media/self_check_check_out_2.png[self check] ++ +. Patron scans the barcodes for their items +_OR_ +Patron places items, one at a time, on the RFID pad. ++ +image::media/self_check_check_out_3.png[self check] ++ +. Items will be listed below with a check out confirmation message. ++ +image::media/self_check_check_out_4.png[self check] ++ +. If a check out fails, a message will advise patrons. ++ +image::media/self_check_error_1.png[self check] ++ +. Patron clicks *Logout* to print a checkout receipt and log out. +_OR_ +Patron clicks *Logout (No Receipt)* to log out with no receipt. 
++ +image::media/self_check_check_out_5.png[self check] ++ +[NOTE] +========== +If the patron forgets to log out, the system will automatically log out after the time +period specified in the library setting *Patron Login Timeout (in seconds)*. An inactivity pop-up +will appear to warn patrons 20 seconds before logging out. + +image::media/self_check_check_out_6.png[self check] +========== + +== View Items Out == + +. Patrons are able to view the items they currently have checked out by clicking *View Items Out*. ++ +image::media/self_check_view_items_out_1.png[self check] ++ +. The items currently checked out will display with their due dates. +Using the *Print List* button, patrons can +print out a receipt listing all of the items they currently have checked out. + +image::media/self_check_view_items_out_2.png[self check] + + +== View Holds == + +. Patrons are able to view their current holds by clicking *View Holds*. ++ +image::media/self_check_view_holds_1.png[self check] ++ +. Items currently on hold display. Patrons can also see which, if any, items are ready for pickup. ++ +Using the *Print List* button, patrons can print out a receipt listing all of the items they currently have on hold. ++ +image::media/self_check_view_holds_2.png[self check] + +== View Fines == + +. Patrons are able to view the fines they currently owe by clicking *View Details*. ++ +image::media/self_check_view_fines_1.png[self check] ++ +. Current fines owed by the patron display. 
+ +image::media/self_check_view_fines_2.png[self check] diff --git a/docs/modules/circulation/pages/self_check_configuration.adoc b/docs/modules/circulation/pages/self_check_configuration.adoc new file mode 100644 index 0000000000..d7bf15c1c4 --- /dev/null +++ b/docs/modules/circulation/pages/self_check_configuration.adoc @@ -0,0 +1,51 @@ += Self checkout = +:toc: + +== Introduction == + +Evergreen includes a self check interface designed for libraries that simply +want to record item circulation without worrying about security mechanisms like +magnetic strips or RFID tags. + +== Initializing the self check == +The self check interface runs in a web browser. Before patrons can use the self +check station, a staff member must initialize the interface by logging in. + +. Open your self check interface page in a web browser. By default, the URL is + `https://[hostname]/eg/circ/selfcheck/main`, where _[hostname]_ + represents the host name of your Evergreen web server. +. Log in with a staff account with circulation permissions. + +image::media/self-check-admin-login.png[Self Check Admin Login] + +=== Setting library hours of operation === +When the self check prints a receipt, the default template includes the +library's hours of operation in the receipt. If the library has no configured +hours of operation, the attempt to print a receipt fails and the browser hangs. + +=== Configuring self check behavior === +Several library settings control the behavior of the self check: + +* *Block copy checkout status*: Prevent the staff user's permission override + from enabling patrons to check out items that they would not normally be able + to check out, such as the "On reservation shelf" status. The status IDs are + found in the `config.copy_status` database table. +* *Patron Login Timeout*: Automatically logs the patron out of the self check + after a certain period of inactivity. 
*NOT CURRENTLY SUPPORTED* +* *Pop-up alert for errors*: In addition to displaying an alert message on the + screen, this setting heightens patron awareness of possible problems by raising + an alert box that the patron must dismiss before they can check out another + item. +* *Require Patron Password*: By default, users can enter either their user name + or barcode, without having to enter their password, to access their account. + This setting requires patrons to enter their password for additional + security. +* *Workstation Required*: If set, the URL must either include a + `?ws=[workstation]` parameter, where _[workstation]_ is the name of a + registered Evergreen workstation, or the staff member must register a new + workstation when they log in. The workstation parameter ensures that checkouts + are recorded as occurring at the correct library. + +== Using the self check == + +See the circulation manual for documentation about using the self check interface. diff --git a/docs/modules/circulation/pages/triggered_events.adoc b/docs/modules/circulation/pages/triggered_events.adoc new file mode 100644 index 0000000000..dfdea9e2a3 --- /dev/null +++ b/docs/modules/circulation/pages/triggered_events.adoc @@ -0,0 +1,68 @@ += Triggered Events and Notices = +:toc: + +== Introduction == + +Improvements to the Triggered Events interface enable you to easily filter, +sort, and print triggered events from the patron's account or an item's details. +This feature is especially useful when tracking notice completion from a +patron's account. + +== Access and View == + +You can access *Triggered Events* from two Evergreen interfaces: a patron's +account or an item's details. + +To access this interface in the patron's account, open the patron's record and +click *Other* -> *Triggered Events / Notifications*. + +To access this interface from the item's details, enter the item barcode into +the *Item Status* screen, and click *Actions* -> *Show* -> *Triggered Events*. 
+ +Information about the patron, the item, and the triggered event appear in the +center of the screen. Add or delete columns to the display by right clicking on +any column. The *Column Picker* appears in a pop up box and enables you to +select the columns that you want to display. + +image::media/Triggered_Events_and_Notices1.jpg[Triggered_Events_and_Notices1] + +== Filter == + +The triggered events that display are controlled by the filters on the right +side of the screen. By default, Evergreen displays completed circulation +events. Notice that the default filters display *Event State is Complete* and +*Core Type is Circ*. + +To view completed hold-related events, such as hold capture or hold notice +completion, choose *Event State is Complete* and *Core Type is Hold* from the +drop down menu. + +You can also use the *Event State* filter to view circs and holds that are +*pending* or have an *error*. + +Add and delete filters to customize the list of triggered events that displays. +To add another filter, click *Add Row*. To delete a filter, click the red _X_ +adjacent to a row. + +image::media/Triggered_Events_and_Notices2.jpg[Triggered_Events_and_Notices2] + +== Sort == + +You can sort your results by clicking the column name. + +image::media/Triggered_Events_and_Notices3.jpg[Triggered_Events_and_Notices3] + + +== Print == + +You can select the events that you want to print, or you can print all events. +To print selected events, check the boxes adjacent to the events that you want +to print, and click *Print Selected Events*. To print all events, simply click +*Print All Events*. + +== Reset == + +If the triggered event does not complete or the notice is not sent and the +trigger needs to be run again, then select the event, and click *Reset Selected +Events*. 
+ + diff --git a/docs/modules/circulation/pages/user_buckets.adoc b/docs/modules/circulation/pages/user_buckets.adoc new file mode 100644 index 0000000000..2bf401389d --- /dev/null +++ b/docs/modules/circulation/pages/user_buckets.adoc @@ -0,0 +1,86 @@ += User buckets = +:toc: + +== Introduction == +indexterm:[patron buckets] +indexterm:[patrons, batch operations] + +You can select and group a set of users into a User Bucket. +You can add users to a User Bucket from the Patron Search +interface or directly from the User Bucket interface by user barcode. +It is also possible to add users to a User +Bucket by uploading a text file that contains a list of user barcodes. + +From this interface it is possible to perform a set of specific batch update +operations on the group of users you have identified. + +== Editing users == +indexterm:[batch edit, patrons] + +You can change the following fields in batch: + + * Active flag + * Primary Permission Group (group application permissions consulted) + * Juvenile flag + * Home Library (if you have the UPDATE_USER permission for both the original and destination libraries) + * Privilege Expiration Date + * Barred flag (if you have the BAR_PATRON permission) + * Internet Access Level + +NOTE: You will need the UPDATE_USER permission. + +Each change set requires a name. Buckets may have multiple change sets. All +users in the Bucket at the time of processing are updated when the change +set is processed, and change sets are processed immediately upon successful +creation. The interface delivers progress information regarding the +processing stage and percent of completion. + +While processing the users, the original value for each field edited is +recorded for potential future rollback. Users can examine the success and +failure of applied change sets. + +The user will be able to roll back the entire change set, but not parts thereof. 
+The rollback will affect only those users that were successfully updated by the +original change set and may be different from the current set of users in the +Bucket. Users can manually discard change sets, removing them from the +interface but preventing future rollback. + +As a batch process, rather than a direct edit, this mechanism explicitly skips +processing of Action/Trigger event definitions for user update, so users will +not receive any notifications that they might otherwise receive when their accounts +are edited. + +== Deleting users == +indexterm:[batch delete, patrons] + +You may also delete users as a batch. + +NOTE: You will need the UPDATE_USER and DELETE_USER permissions. + +Each delete set requires a name. Buckets may have multiple delete sets. All +users in the Bucket at the time of processing are marked as deleted when +the delete set is processed. The interface delivers progress information +regarding the processing stage and percent of completion. + +While processing the users, the original value for the "deleted" field will be +recorded for potential future rollback. Users are able to examine the +success and failure of applied delete sets in the same interface used for the +above described change sets. + +As a batch process, rather than a direct edit, this mechanism explicitly skips +processing of Action/Trigger event definitions for user deletion. + +This mechanism does not completely purge the user from the database. User data +will still be available to system administrators with database access. + +== Editing Statistical Category Entries == + +All users in the bucket can have their Statistical Category Entries +modified. Unlike user data field updates, modification of Statistical +Category Entries is permanent and cannot be rolled back. No named change +sets are required. The interface will deliver progress information regarding +the processing stage and percent of completion. 
+ +As a batch process, rather than a direct edit, this mechanism explicitly skips +processing of Action/Trigger event definitions for user update. + diff --git a/docs/modules/development/_attributes.adoc b/docs/modules/development/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/development/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/development/assets/images/media/CONNECT.png b/docs/modules/development/assets/images/media/CONNECT.png new file mode 100644 index 0000000000..b30d4028ac Binary files /dev/null and b/docs/modules/development/assets/images/media/CONNECT.png differ diff --git a/docs/modules/development/assets/images/media/REQUEST.png b/docs/modules/development/assets/images/media/REQUEST.png new file mode 100644 index 0000000000..0f6ac2d40e Binary files /dev/null and b/docs/modules/development/assets/images/media/REQUEST.png differ diff --git a/docs/modules/development/examples/python_client.py b/docs/modules/development/examples/python_client.py new file mode 100644 index 0000000000..d0c6dfcdbd --- /dev/null +++ b/docs/modules/development/examples/python_client.py @@ -0,0 +1,60 @@ +#!/usr/bin/env python +"""OpenSRF client example in Python""" +import osrf.system +import osrf.ses + +def osrf_substring(session, text, sub): + """substring: Accepts a string and a number as input, returns a string""" + request = session.request('opensrf.simple-text.substring', text, sub) + + # Retrieve the response from the method + # The timeout parameter is optional + response = request.recv(timeout=2) + + request.cleanup() + # The results are accessible via content() + return response.content() + +def osrf_split(session, text, delim): + """split: Accepts two strings as input, returns an array of strings""" + request = 
session.request('opensrf.simple-text.split', text, delim) + response = request.recv() + request.cleanup() + return response.content() + +def osrf_statistics(session, strings): + """statistics: Accepts an array of strings as input, returns a hash""" + request = session.request('opensrf.simple-text.statistics', strings) + response = request.recv() + request.cleanup() + return response.content() + + +if __name__ == "__main__": + file = '/openils/conf/opensrf_core.xml' + + # Pull connection settings from the <config> section of opensrf_core.xml + osrf.system.System.connect(config_file=file, config_context='config.opensrf') + + # Set up a connection to the opensrf.simple-text service + session = osrf.ses.ClientSession('opensrf.simple-text') + + result = osrf_substring(session, "foobar", 3) + print(result) + print() + + result = osrf_split(session, "This is a test", " ") + print("Received %d elements: [%s]" % (len(result), ', '.join(result))) + + many_strings = ( + "First I think I'll have breakfast", + "Then I think that lunch would be nice", + "And then seventy desserts to finish off the day" + ) + result = osrf_statistics(session, many_strings) + print("Length: %d" % result["length"]) + print("Word count: %d" % result["word_count"]) + + # Clean up connection resources + session.cleanup() diff --git a/docs/modules/development/nav.adoc b/docs/modules/development/nav.adoc new file mode 100644 index 0000000000..c8b4558c86 --- /dev/null +++ b/docs/modules/development/nav.adoc @@ -0,0 +1,6 @@ +* xref:development:introduction.adoc[Developer Resources] +** xref:development:support_scripts.adoc[Support Scripts] +** xref:development:pgtap.adoc[Developing with pgTAP tests] +** xref:development:intro_opensrf.adoc[Easing gently into OpenSRF] +** xref:development:updating_translations_launchpad.adoc[Updating translations using Launchpad] + diff --git a/docs/modules/development/pages/README b/docs/modules/development/pages/README new file mode 100644 index 0000000000..e69de29bb2 diff --git 
a/docs/modules/development/pages/_attributes.adoc b/docs/modules/development/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/development/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/development/pages/data_opensearch.adoc b/docs/modules/development/pages/data_opensearch.adoc new file mode 100644 index 0000000000..9e2a1514d7 --- /dev/null +++ b/docs/modules/development/pages/data_opensearch.adoc @@ -0,0 +1,25 @@ += Using OpenSearch as a developer = +:toc: + +== Introduction == + +Evergreen responds to OpenSearch requests. This can be a good way to get +search results delivered in a format that you prefer. + +Throughout this section, replace `<hostname>` with the domain or subdomain +of your Evergreen installation to try these examples on your own system. + +OpenSearch queries will be in the format +`http://<hostname>/opac/extras/opensearch/1.1/-/html-full?searchTerms=item_type(r)&searchClass=keyword&count=25` + +In this example, + +* html-full is the format you would like. html-full is a good view for troubleshooting your query. +* searchTerms is a URL-encoded search query. You can use limiters in the `limiter(value)` format. +For example, you can use a query like `item_lang(spa)`. +* count is the number of results per page. The default is 10, and the maximum is 25. + +Other options include: + +* searchSort and searchSortDir, which can be used to display the results in a different order (e.g. for an RSS feed). + diff --git a/docs/modules/development/pages/data_supercat.adoc b/docs/modules/development/pages/data_supercat.adoc new file mode 100644 index 0000000000..ff4489c9b4 --- /dev/null +++ b/docs/modules/development/pages/data_supercat.adoc @@ -0,0 +1,252 @@ += Using Supercat = +:toc: + +== Introduction == + +You can use SuperCat to get data about ISBNs, metarecords, bibliographic +records, and authority records. 
+ +Throughout this section, replace `<hostname>` with the domain or subdomain +of your Evergreen installation to try these examples on your own system. + +== ISBNs == + +Given one ISBN, Evergreen can return a list of related records and ISBNs, +including alternate editions and translations. To use the SuperCat +oISBN tool, use http or https to access the following URL. + +---- +http://<hostname>/opac/extras/oisbn/<ISBN> +---- + +For example, the URL http://gapines.org/opac/extras/oisbn/0439136350 returns +the following list of catalog record IDs and ISBNs: + +[source,xml] +---------------------------------------------------------------------------- + + + 9780606323475 + 9780780673809 + 9780807286029 + 9780780669642 + 043965548X + 8498386969 + 9780786222742 + 9788478885190 + 0736650962 + 8478885196 + 9780439554923 + 8478885196 + 0807282324 + 8478885196 + 1480614998 + 8478886559 + 9780613371063 + 9782070528189 + 0786222743 + 9780329232696 + 9780807282311 + 0807286028 + 9789500421157 + 9780613359580 + 9781594130021 + 0807283150 + 0747542155 + 8478886559 + +---------------------------------------------------------------------------- + +== Records == + +=== Record formats === + +First, determine which format you'd like to receive data in. To see the +available formats for bibliographic records, visit +---- +http://<hostname>/opac/extras/supercat/formats/record +---- + +Similarly, authority record formats can be found at +http://libcat.linnbenton.edu/opac/extras/supercat/formats/authority +and metarecord formats can be found at +http://libcat.linnbenton.edu/opac/extras/supercat/formats/metarecord + +For example, http://gapines.org/opac/extras/supercat/formats/authority +shows that the Georgia Pines catalog can return authority records in the +formats _opac_, _marc21_, _marc21-full_, and _marc21-uris_. SuperCat +also includes the MIME type of each format, and sometimes also refers +to the documentation for a particular format. 
+ +[source,xml] +---------------------------------------------------------------------------- +<formats> + <format> + <name>opac</name> + <type>text/html</type> + </format> + <format> + <name>marc21</name> + <type>application/xml</type> + <docs>http://www.loc.gov/marc/</docs> + </format> + <format> + <name>marc21-full</name> + <type>application/xml</type> + <docs>http://www.loc.gov/marc/</docs> + </format> + <format> + <name>marc21-uris</name> + <type>application/xml</type> + <docs>http://www.loc.gov/marc/</docs> + </format> +</formats> +---------------------------------------------------------------------------- + +[NOTE] +============================================================================ +atom-full is currently the only format that includes holdings and availability +data for a given bibliographic record. +============================================================================ + + +=== Retrieve records === + +You can retrieve records using URLs in the following format: +---- +http://<hostname>/opac/extras/supercat/retrieve/<format>/<record-type>/<record-ID> +---- + +For example, http://gapines.org/opac/extras/supercat/retrieve/mods/record/33333 +returns the following record. + +[source,xml] +---------------------------------------------------------------------------- +<mods xmlns="http://www.loc.gov/mods/"> + <titleInfo> + <title>Words and pictures /</title> + </titleInfo> + <name type="personal"> + <namePart>Dodd, Siobhan</namePart> + <role> + <roleTerm type="text">creator</roleTerm> + </role> + </name> + <typeOfResource>text</typeOfResource> + <originInfo> + <place> + <placeTerm type="code" authority="marccountry">mau</placeTerm> + </place> + <place> + <placeTerm type="text">Cambridge, Mass</placeTerm> + </place> + <publisher>Candlewick Press</publisher> + <dateIssued>1992</dateIssued> + <edition>1st U.S. ed.</edition> + <issuance>monographic</issuance> + </originInfo> + <language> + <languageTerm type="code" authority="iso639-2b">eng</languageTerm> + </language> + <physicalDescription> + <form authority="marcform">print</form> + <extent>1 v. (unpaged) : col. ill. ; 26 cm.</extent> + </physicalDescription> + <abstract>Simple text with picture cues accompany illustrations depicting scenes of everyday life familiar to children, such as getting dressed, attending a party, playing in the park, and taking a bath.</abstract> + <targetAudience>juvenile</targetAudience> + <note type="statement of responsibility">Siobhan Dodds.</note> + <subject> + <topic>Family life</topic> + <topic>Fiction</topic> + </subject> + <subject> + <topic>Vocabulary</topic> + <topic>Juvenile fiction</topic> + </subject> + <subject> + <topic>Rebuses</topic> + </subject> + <subject> + <topic>Picture puzzles</topic> + <topic>Juvenile literature</topic> + </subject> + <subject> + <topic>Picture books for children</topic> + </subject> + <subject> + <topic>Picture dictionaries, English</topic> + <topic>Juvenile literature</topic> + </subject> + <subject> + <topic>Vocabulary</topic> + <topic>Juvenile literature</topic> + </subject> + <classification authority="lcc">PZ7.D66275 Wo 1992</classification> + <classification authority="lcc">PN6371.5 .D63 1992x</classification> + <classification authority="ddc">793.73</classification> + <identifier type="isbn">1564020428 :</identifier> + <identifier type="isbn">9781564020420</identifier> + <identifier type="lccn">91071817</identifier> + <recordInfo> + <recordContentSource>DLC</recordContentSource> + <recordCreationDate encoding="marc">920206</recordCreationDate> + <recordChangeDate encoding="iso8601">20110608231047.0</recordChangeDate> + <recordIdentifier>33333</recordIdentifier> + </recordInfo> +</mods> 
+---------------------------------------------------------------------------- + +=== Recent records === + +SuperCat can return feeds of recently edited or created authority and bibliographic records: + +---- +http://<hostname>/opac/extras/feed/freshmeat/<feed-type>/<record-type>/<import-or-edit>/<limit>/<date> +---- + +Note the following features: + +* The _limit_ most recent records imported or edited following the supplied date will be returned. If you do not supply a date, then the most recent _limit_ records will be returned. +* If you do not supply a limit, then up to 10 records will be returned. +* feed-type can be one of atom, html, htmlholdings, marcxml, mods, mods3, or rss2. + +Example: http://gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01 + +==== Filtering by Org Unit ==== + +You can generate a similar list, with the added ability to limit by Org Unit, using the item-age browse axis. + +To produce an RSS feed by item date rather than bib date, and to restrict it to a particular system within a consortium: + +Example: http://gapines.org/opac/extras/browse/atom/item-age/ARL-BOG/1/10 + +Note the following: + +* ARL-BOG should be the short name of the org unit you're interested in +* 1 is the page (since you are browsing through pages of results) +* 10 is the number of results to return per page + +Modifying the 'atom' portion of the URL to 'atom-full' will include catalog links in the results: + +Example: http://gapines.org/opac/extras/browse/atom-full/item-age/ARL-BOG/1/10 + +Modifying the 'atom' portion of the URL to 'html-full' will produce an HTML page that is minimally formatted: + +Example: http://gapines.org/opac/extras/browse/html-full/item-age/ARL-BOG/1/10 + +==== Additional Filters ==== + +If you'd like to limit to a particular status, you can append `?status=0` +where `0` is the ID number of the status you'd like to limit to. If you want +to filter by a number of statuses, you can append multiple status parameters (for example, +`?status=0&status=1` will limit to items with a status of either 0 or 1). 
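These feed URL patterns are easy to compose in a script. Here is a minimal sketch in Python; the helper function names are illustrative only (not part of Evergreen), and the host name, org unit, and filter values are taken from the examples in this section:

```python
# Sketch: build SuperCat feed URLs following the patterns described above.
from urllib.parse import urlencode

def freshmeat_url(host, feed_type, record_type, action, limit, date=None):
    """Feed of recently imported or edited records; the date is optional."""
    parts = [feed_type, record_type, action, str(limit)]
    if date:
        parts.append(date)  # only records following this date are returned
    return "http://%s/opac/extras/feed/freshmeat/%s" % (host, "/".join(parts))

def item_age_url(host, feed_type, org_unit, page, per_page,
                 statuses=(), copy_location=None):
    """item-age browse feed, optionally filtered by status and item location."""
    url = "http://%s/opac/extras/browse/%s/item-age/%s/%d/%d" % (
        host, feed_type, org_unit, page, per_page)
    params = [("status", s) for s in statuses]
    if copy_location is not None:
        params.append(("copyLocation", copy_location))
    if params:
        url += "?" + urlencode(params)
    return url

print(freshmeat_url("gapines.org", "atom", "biblio", "import", 10, "2008-01-01"))
# -> http://gapines.org/opac/extras/feed/freshmeat/atom/biblio/import/10/2008-01-01
print(item_age_url("gapines.org", "atom-full", "ARL-BOG", 1, 10, statuses=(0, 1)))
# -> http://gapines.org/opac/extras/browse/atom-full/item-age/ARL-BOG/1/10?status=0&status=1
```

The first call reproduces the freshmeat example above; the second adds the repeated `status` parameters to the atom-full browse feed so that only items in status 0 or 1 are included.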
+ +[TIP] +Limiting to status is a good way to weed out on-order items from your +feeds. + +You can also limit by item location (`?copyLocation=227` where 227 is the +ID of your item location). + diff --git a/docs/modules/development/pages/data_unapi.adoc b/docs/modules/development/pages/data_unapi.adoc new file mode 100644 index 0000000000..5d6fcb18b4 --- /dev/null +++ b/docs/modules/development/pages/data_unapi.adoc @@ -0,0 +1,68 @@ += Using UnAPI = +:toc: + +== URL format == + +Evergreen's unAPI support includes access to many +record types. For example, the following URL would fetch +bib 267 in MODS32 along with holdings and record attribute information: + +https://example.org/opac/extras/unapi?id=tag::U2@bre/267{holdings_xml,acn,acp,mra}&format=mods32 + +To access the new unAPI features, the unAPI ID should have the +following form: + + * +tag::U2@+ + * followed by class name, which may be + ** +bre+ (bibs) + ** +biblio_record_entry_feed+ (multiple bibs) + ** +acl+ (shelving locations) + ** +acn+ (call numbers) + ** +acnp+ (call number prefixes) + ** +acns+ (call number suffixes) + ** +acp+ (items) + ** +acpn+ (item notes) + ** +aou+ (org units) + ** +ascecm+ (item stat cat entries) + ** +auri+ (located URIs) + ** +bmp+ (monographic parts) + ** +cbs+ (bib sources) + ** +ccs+ (item statuses) + ** +circ+ (loan checkout and due dates) + ** +holdings_xml+ (holdings) + ** +mmr+ (metarecords) + ** +mmr_holdings_xml+ (metarecords with holdings) + ** +mmr_mra+ (metarecords with record attributes) + ** +mra+ (record attributes) + ** +sbsum+ (serial basic summaries) + ** +sdist+ (serial distributions) + ** +siss+ (serial issues) + ** +sisum+ (serial index summaries) + ** +sitem+ (serial items) + ** +sssum+ (serial supplement summaries) + ** +sstr+ (serial streams) + ** +ssub+ (serial subscriptions) + ** +sunit+ (serial units) + * followed by +/+ + * followed by a record identifier (or in the case of + the +biblio_record_entry_feed+ class, multiple IDs separated + by 
commas) + * followed, optionally, by limit and offset in square brackets + * followed, optionally, by a comma-separated list of "includes" + enclosed in curly brackets. The list of includes is + the same as the list of classes with the following addition: + ** +bre.extern+ (information from the non-MARC parts of a bib + record) + * followed, optionally, by +/+ and org unit; "-" signifies + the top of the org unit tree + * followed, optionally, by +/+ and org unit depth + * followed, optionally, by +/+ and a path. If the path + is +barcode+ and the class is +acp+, the record ID is taken + to be an item barcode rather than an item ID; for example, in + +tag::U2@acp/ACQ140{acn,bre,mra}/-/0/barcode+, +ACQ140+ is + meant to be an item barcode. + * followed, optionally, by +&format=+ and the format in which the record + should be retrieved. If this part is omitted, the list of available + formats will be retrieved. + + diff --git a/docs/modules/development/pages/intro_opensrf.adoc b/docs/modules/development/pages/intro_opensrf.adoc new file mode 100644 index 0000000000..d512978569 --- /dev/null +++ b/docs/modules/development/pages/intro_opensrf.adoc @@ -0,0 +1,1360 @@ += Easing gently into OpenSRF = +:toc: + +== Abstract == +The Evergreen open-source library system serves library consortia composed of +hundreds of branches with millions of patrons - for example, +http://www.georgialibraries.org/statelibrarian/bythenumbers.pdf[the Georgia +Public Library Service PINES system]. One of the claimed advantages of +Evergreen over alternative integrated library systems is the underlying Open +Service Request Framework (OpenSRF, pronounced "open surf") architecture. This +article introduces OpenSRF, demonstrates how to build OpenSRF services through +simple code examples, and explains the technical foundations on which OpenSRF +is built. 
+
+== Introducing OpenSRF ==
+OpenSRF is a message routing network that offers scalability and failover
+support for individual services and entire servers with minimal development and
+deployment overhead. You can use OpenSRF to build loosely-coupled applications
+that can be deployed on a single server or on clusters of geographically
+distributed servers using the same code and minimal configuration changes.
+Although copyright statements on some of the OpenSRF code date back to Mike
+Rylander's original explorations in 2000, Evergreen was the first major
+application to be developed with, and to take full advantage of, the OpenSRF
+architecture starting in 2004. The first official release of OpenSRF was 0.1 in
+February 2005 (http://evergreen-ils.org/blog/?p=21), but OpenSRF's development
+continues at a steady pace of enhancement and refinement, with the release of
+1.0.0 in October 2008 and the most recent release of 1.2.2 in February 2010.
+
+OpenSRF is a distinct break from the architectural approach used by previous
+library systems and has more in common with modern Web applications. The
+traditional "scale-up" approach to serving more transactions is to purchase a
+server with more CPUs and more RAM, possibly splitting the load between a Web
+server, a database server, and a business logic server. Evergreen, however, is
+built on the Open Service Request Framework (OpenSRF) architecture, which
+firmly embraces the "scale-out" approach of spreading transaction load over
+cheap commodity servers. The http://evergreen-ils.org/blog/?p=56[initial GPLS
+PINES hardware cluster], while certainly impressive, may have offered the
+misleading impression that Evergreen is complex and requires a lot of hardware
+to run.
+ +This article hopes to correct any such lingering impression by demonstrating +that OpenSRF itself is an extremely simple architecture on which one can easily +build applications of many kinds – not just library applications – and that you +can use a number of different languages to call and implement OpenSRF methods +with a minimal learning curve. With an application built on OpenSRF, when you +identify a bottleneck in your application's business logic layer, you can +adjust the number of the processes serving that particular bottleneck on each +of your servers; or if the problem is that your service is resource-hungry, you +could add an inexpensive server to your cluster and dedicate it to running that +resource-hungry service. + +=== Programming language support === + +If you need to develop an entirely new OpenSRF service, you can choose from a +number of different languages in which to implement that service. OpenSRF +client language bindings have been written for C, Java, JavaScript, Perl, and +Python, and server language bindings have been written for C, Perl, and Python. +This article uses Perl examples as a lowest common denominator programming +language. 
Writing an OpenSRF binding for another language is a relatively small +task if that language offers libraries that support the core technologies on +which OpenSRF depends: + + * http://tools.ietf.org/html/rfc3920[Extensible Messaging and Presence +Protocol] (XMPP, sometimes referred to as Jabber) - provides the base messaging +infrastructure between OpenSRF clients and servers + * http://json.org[JavaScript Object Notation] (JSON) - serializes the content +of each XMPP message in a standardized and concise format + * http://memcached.org[memcached] - provides the caching service + * http://tools.ietf.org/html/rfc5424[syslog] - the standard UNIX logging +service + +Unfortunately, the +http://evergreen-ils.org/dokuwiki/doku.php?id=osrf-devel:primer[OpenSRF +reference documentation], although augmented by the +http://evergreen-ils.org/dokuwiki/doku.php?id=osrf-devel:primer[OpenSRF +glossary], blog posts like http://evergreen-ils.org/blog/?p=36[the description +of OpenSRF and Jabber], and even this article, is not a sufficient substitute +for a complete specification on which one could implement a language binding. +The recommended option for would-be developers of another language binding is +to use the Python implementation as the cleanest basis for a port to another +language. + +=== OpenSRF communication flows over XMPP === + +The XMPP messaging service underpins OpenSRF, requiring an XMPP server such +as http://www.ejabberd.im/[ejabberd]. When you start OpenSRF, the first XMPP +clients that connect to the XMPP server are the OpenSRF public and private +_routers_. OpenSRF routers maintain a list of available services and connect +clients to available services. When an OpenSRF service starts, it establishes a +connection to the XMPP server and registers itself with the private router. The +OpenSRF configuration contains a list of public OpenSRF services, each of which +must also register with the public router. 
Services and clients connect to the +XMPP server using a single set of XMPP client credentials (for example, +`opensrf@private.localhost`), but use XMPP resource identifiers to +differentiate themselves in the Jabber ID (JID) for each connection. For +example, the JID for a copy of the `opensrf.simple-text` service with process +ID `6285` that has connected to the `private.localhost` domain using the +`opensrf` XMPP client credentials could be +`opensrf@private.localhost/opensrf.simple-text_drone_at_localhost_6285`. + +[#OpenSRFOverHTTP] +=== OpenSRF communication flows over HTTP === +Any OpenSRF service registered with the public router is accessible via the +OpenSRF HTTP Translator. The OpenSRF HTTP Translator implements the +http://www.open-ils.org/dokuwiki/doku.php?id=opensrf_over_http[OpenSRF-over-HTTP +proposed specification] as an Apache module that translates HTTP requests into +OpenSRF requests and returns OpenSRF results as HTTP results to the initiating +HTTP client. + +.Issuing an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator +[source,bash] +-------------------------------------------------------------------------------- +# curl request broken up over multiple lines for legibility +curl -H "X-OpenSRF-service: opensrf.simple-text" \ # <1> + --data 'osrf-msg=[ \ # <2> + {"__c":"osrfMessage","__p":{"threadTrace":0,"locale":"en-CA", \ # <3> + "type":"REQUEST","payload": {"__c":"osrfMethod","__p": \ + {"method":"opensrf.simple-text.reverse","params":["foobar"]} \ + }} \ + }]' \ +http://localhost/osrf-http-translator \ # <4> +-------------------------------------------------------------------------------- + +<1> The `X-OpenSRF-service` header identifies the OpenSRF service of interest. + +<2> The POST request consists of a single parameter, the `osrf-msg` value, +which contains a JSON array. 
+
+<3> The first object is an OpenSRF message (`"__c":"osrfMessage"`) with a set of
+parameters (`"__p":{}`) containing:
+
+ * the identifier for the request (`"threadTrace":0`); this value is echoed
+back in the result
+
+ * the message type (`"type":"REQUEST"`)
+
+ * the locale for the message; if the OpenSRF method is locale-sensitive, it
+can check the locale for each OpenSRF request and return different information
+depending on the locale
+
+ * the payload of the message (`"payload":{}`) containing the OpenSRF method
+request (`"__c":"osrfMethod"`) and its parameters (`"__p":{}`), which in turn
+contains:
+
+ ** the method name for the request (`"method":"opensrf.simple-text.reverse"`)
+
+ ** a set of JSON parameters to pass to the method (`"params":["foobar"]`); in
+this case, a single string `"foobar"`
+
+<4> The URL on which the OpenSRF HTTP translator is listening,
+`/osrf-http-translator` is the default location in the Apache example
+configuration files shipped with the OpenSRF source, but this is configurable.
+
+[#httpResults]
+.Results from an HTTP POST request to an OpenSRF method via the OpenSRF HTTP Translator
+[source,bash]
+--------------------------------------------------------------------------------
+# HTTP response broken up over multiple lines for legibility
+[{"__c":"osrfMessage","__p": \ # <1>
+  {"threadTrace":0, "payload": \ # <2>
+    {"__c":"osrfResult","__p": \ # <3>
+      {"status":"OK","content":"raboof","statusCode":200} \ # <4>
+    },"type":"RESULT","locale":"en-CA" \ # <5>
+  }
+},
+{"__c":"osrfMessage","__p": \ # <6>
+  {"threadTrace":0,"payload": \ # <7>
+    {"__c":"osrfConnectStatus","__p": \ # <8>
+      {"status":"Request Complete","statusCode":205} \ # <9>
+    },"type":"STATUS","locale":"en-CA" \ # <10>
+  }
+}]
+--------------------------------------------------------------------------------
+
+<1> The OpenSRF HTTP Translator returns an array of JSON objects in its
+response.
Each object in the response is an OpenSRF message
+(`"__c":"osrfMessage"`) with a collection of response parameters (`"__p":`).
+
+<2> The OpenSRF message identifier (`"threadTrace":0`) confirms that this
+message is in response to the request matching the same identifier.
+
+<3> The message includes a payload JSON object (`"payload":`) with an OpenSRF
+result for the request (`"__c":"osrfResult"`).
+
+<4> The result includes a status indicator string (`"status":"OK"`), the content
+of the result response - in this case, a single string "raboof"
+(`"content":"raboof"`) - and an integer status code for the request
+(`"statusCode":200`).
+
+<5> The message also includes the message type (`"type":"RESULT"`) and the
+message locale (`"locale":"en-CA"`).
+
+<6> The second message in the set of results from the response.
+
+<7> Again, the message identifier confirms that this message is in response to
+a particular request.
+
+<8> The payload of the message denotes that this message is an
+OpenSRF connection status message (`"__c":"osrfConnectStatus"`), with some
+information about the particular OpenSRF connection that was used for this
+request.
+
+<9> The response parameters for an OpenSRF connection status message include a
+verbose status (`"status":"Request Complete"`) and an integer status code for
+the connection status (`"statusCode":205`).
+
+<10> The message also includes the message type (`"type":"STATUS"`) and the
+message locale (`"locale":"en-CA"`).
+
+
+[TIP]
+Before adding a new public OpenSRF service, ensure that it does
+not introduce privilege escalation or unchecked access to data. For example,
+the Evergreen `open-ils.cstore` private service is an object-relational mapper
+that provides read and write access to the entire Evergreen database, so it
+would be catastrophic to expose that service publicly.
In comparison, the
+Evergreen `open-ils.pcrud` public service offers the same functionality as
+`open-ils.cstore` to any connected HTTP client or OpenSRF client, but the
+additional authentication and authorization layer in `open-ils.pcrud` prevents
+unchecked access to Evergreen's data.
+
+=== Stateless and stateful connections ===
+
+OpenSRF supports both _stateless_ and _stateful_ connections. When an OpenSRF
+client issues a `REQUEST` message in a _stateless_ connection, the router
+forwards the request to the next available service and the service returns the
+result directly to the client.
+
+.REQUEST flow in a stateless connection
+image:media/REQUEST.png[REQUEST flow in a stateless connection]
+
+When an OpenSRF client issues a `CONNECT` message to create a _stateful_ connection, the
+router returns the Jabber ID of the next available service to the client so
+that the client can issue one or more `REQUEST` messages directly to that
+particular service and the service will return corresponding `RESULT` messages
+directly to the client. Until the client issues a `DISCONNECT` message, that
+particular service is only available to the requesting client. Stateful connections
+are useful for clients that need to make many requests from a particular service,
+since this avoids the intermediary step of contacting the router for each request, as
+well as for operations that require a controlled sequence of commands, such as a
+set of database INSERT, UPDATE, and DELETE statements within a transaction.
+
+.CONNECT, REQUEST, and DISCONNECT flow in a stateful connection
+image:media/CONNECT.png[CONNECT, REQUEST, and DISCONNECT flow in a stateful connection]
+
+== Enough jibber-jabber: writing an OpenSRF service ==
+Imagine an application architecture in which 10 lines of Perl or Python, using
+the data types native to each language, are enough to implement a method that
+can then be deployed and invoked seamlessly across hundreds of servers.
You +have just imagined developing with OpenSRF – it is truly that simple. Under the +covers, of course, the OpenSRF language bindings do an incredible amount of +work on behalf of the developer. An OpenSRF application consists of one or more +OpenSRF services that expose methods: for example, the `opensrf.simple-text` +http://git.evergreen-ils.org/?p=OpenSRF.git;a=blob_plain;f=src/perl/lib/OpenSRF/Application/Demo/SimpleText.pm[demonstration +service] exposes the `opensrf.simple-text.split()` and +`opensrf.simple-text.reverse()` methods. Each method accepts zero or more +arguments and returns zero or one results. The data types supported by OpenSRF +arguments and results are typical core language data types: strings, numbers, +booleans, arrays, and hashes. + +To implement a new OpenSRF service, perform the following steps: + + 1. Include the base OpenSRF support libraries + 2. Write the code for each of your OpenSRF methods as separate procedures + 3. Register each method + 4. Add the service definition to the OpenSRF configuration files + +For example, the following code implements an OpenSRF service. 
The service +includes one method named `opensrf.simple-text.reverse()` that accepts one +string as input and returns the reversed version of that string: + +[source,perl] +-------------------------------------------------------------------------------- +#!/usr/bin/perl + +package OpenSRF::Application::Demo::SimpleText; + +use strict; + +use OpenSRF::Application; +use parent qw/OpenSRF::Application/; + +sub text_reverse { + my ($self , $conn, $text) = @_; + my $reversed_text = scalar reverse($text); + return $reversed_text; +} + +__PACKAGE__->register_method( + method => 'text_reverse', + api_name => 'opensrf.simple-text.reverse' +); +-------------------------------------------------------------------------------- + +Ten lines of code, and we have a complete OpenSRF service that exposes a single +method and could be deployed quickly on a cluster of servers to meet your +application's ravenous demand for reversed strings! If you're unfamiliar with +Perl, the `use OpenSRF::Application; use parent qw/OpenSRF::Application/;` +lines tell this package to inherit methods and properties from the +`OpenSRF::Application` module. For example, the call to +`__PACKAGE__->register_method()` is defined in `OpenSRF::Application` but due to +inheritance is available in this package (named by the special Perl symbol +`__PACKAGE__` that contains the current package name). The `register_method()` +procedure is how we introduce a method to the rest of the OpenSRF world. 
+
+[#serviceRegistration]
+=== Registering a service with the OpenSRF configuration files ===
+
+Two files control most of the configuration for OpenSRF:
+
+ * `opensrf.xml` contains the configuration for the service itself as well as
+a list of which application servers in your OpenSRF cluster should start
+the service
+ * `opensrf_core.xml` (often referred to as the "bootstrap configuration"
+file) contains the OpenSRF networking information, including the XMPP server
+connection credentials for the public and private routers; you only need to touch
+this for a new service if the new service needs to be accessible via the
+public router
+
+Begin by defining the service itself in `opensrf.xml`. To register the
+`opensrf.simple-text` service, add the following section to the `<apps>`
+element (corresponding to the XPath `/opensrf/default/apps/`):
+
+[source,xml]
+--------------------------------------------------------------------------------
+<simple-text> <!--1-->
+  <keepalive>3</keepalive> <!--2-->
+  <stateless>1</stateless> <!--3-->
+  <language>perl</language> <!--4-->
+  <implementation>OpenSRF::Application::Demo::SimpleText</implementation> <!--5-->
+  <max_requests>100</max_requests> <!--6-->
+  <unix_config>
+    <max_requests>1000</max_requests> <!--7-->
+    <unix_log>opensrf.simple-text_unix.log</unix_log> <!--8-->
+    <unix_sock>opensrf.simple-text_unix.sock</unix_sock> <!--9-->
+    <unix_pid>opensrf.simple-text_unix.pid</unix_pid> <!--10-->
+    <min_children>5</min_children> <!--11-->
+    <max_children>15</max_children> <!--12-->
+    <min_spare_children>2</min_spare_children> <!--13-->
+    <max_spare_children>5</max_spare_children> <!--14-->
+  </unix_config>
+</simple-text>
+--------------------------------------------------------------------------------
+
+<1> The element name is the name that the OpenSRF control scripts use to refer
+to the service.
+
+<2> Specifies the interval (in seconds) between checks to determine if the
+service is still running.
+
+<3> Specifies whether OpenSRF clients can call methods from this service
+without first having to create a connection to a specific service backend
+process for that service. If the value is `1`, then the client can simply
+issue a request and the router will forward the request to an available
+service and the result will be returned directly to the client.
+
+<4> Specifies the programming language in which the service is implemented.
+
+<5> Specifies the name of the library or module in which the service is
+implemented.
+
+<6> (C implementations): Specifies the maximum number of requests a process
+serves before it is killed and replaced by a new process.
+
+<7> (Perl implementations): Specifies the maximum number of requests a process
+serves before it is killed and replaced by a new process.
+
+<8> The name of the log file for language-specific log messages such as syntax
+warnings.
+
+<9> The name of the UNIX socket used for inter-process communications.
+
+<10> The name of the PID file for the master process for the service.
+
+<11> The minimum number of child processes that should be running at any given
+time.
+
+<12> The maximum number of child processes that should be running at any given
+time.
+
+<13> The minimum number of child processes that should be available to handle
+incoming requests. If there are fewer than this number of spare child
+processes, new processes will be spawned.
+
+<14> The maximum number of child processes that should be available to handle
+incoming requests. If there are more than this number of spare child processes,
+the extra processes will be killed.
+
+To make the service accessible via the public router, you must also
+edit the `opensrf_core.xml` configuration file to add the service to the list
+of publicly accessible services:
+
+.Making a service publicly accessible in `opensrf_core.xml`
+[source,xml]
+--------------------------------------------------------------------------------
+<router> <!--1-->
+  <name>router</name>
+  <domain>public.localhost</domain> <!--2-->
+  <services>
+    <service>opensrf.math</service> <!--3-->
+    <service>opensrf.simple-text</service>
+  </services>
+</router>
+--------------------------------------------------------------------------------
+
+<1> This section of the `opensrf_core.xml` file is located at XPath
+`/config/opensrf/routers/`.
+
+<2> `public.localhost` is the canonical public router domain in the OpenSRF
+installation instructions.
+
+<3> Each `<service>` element contained in the `<services>` element is made
+available via the public router as well as the private router.
+
+Once you have defined the new service, you must restart the OpenSRF Router
+to retrieve the new configuration and start or restart the service itself.
+
+=== Calling an OpenSRF method ===
+OpenSRF clients in any supported language can invoke OpenSRF services in any
+supported language. So let's see a few examples of how we can call our fancy
+new `opensrf.simple-text.reverse()` method:
+
+==== Calling OpenSRF methods from the srfsh client ====
+`srfsh` is a command-line tool installed with OpenSRF that you can use to call
+OpenSRF methods. To call an OpenSRF method, issue the `request` command and pass
+the OpenSRF service and method name as the first two arguments; then pass a list
+of JSON objects as the arguments to the method being invoked.
+
+The following example calls the `opensrf.simple-text.reverse` method of the
+`opensrf.simple-text` OpenSRF service, passing the string `"foobar"` as the
+only method argument:
+
+[source,sh]
+--------------------------------------------------------------------------------
+$ srfsh
+srfsh # request opensrf.simple-text opensrf.simple-text.reverse "foobar"
+
+Received Data: "raboof"
+
+=------------------------------------
+Request Completed Successfully
+Request Time in seconds: 0.016718
+=------------------------------------
+--------------------------------------------------------------------------------
+
+[#opensrfIntrospection]
+==== Getting documentation for OpenSRF methods from the srfsh client ====
+
+The `srfsh` client also gives you command-line access to retrieving metadata
+about OpenSRF services and methods. For a given OpenSRF method, for example,
+you can retrieve information such as the minimum number of required arguments,
+the data type and a description of each argument, the package or library in
+which the method is implemented, and a description of the method.
To retrieve
+the documentation for an OpenSRF method from `srfsh`, issue the `introspect`
+command, followed by the name of the OpenSRF service and (optionally) the
+name of the OpenSRF method. If you do not pass a method name to the `introspect`
+command, `srfsh` lists all of the methods offered by the service. If you pass
+a partial method name, `srfsh` lists all of the methods that match that portion
+of the method name.
+
+[NOTE]
+The quality and availability of the descriptive information for each
+method depends on the developer having registered the method with complete and
+accurate information. The quality varies across the set of OpenSRF and
+Evergreen APIs, although some effort is being put towards improving the
+state of the internal documentation.
+
+[source,sh]
+--------------------------------------------------------------------------------
+srfsh# introspect opensrf.simple-text "opensrf.simple-text.reverse"
+--> opensrf.simple-text
+
+Received Data: {
+  "__c":"opensrf.simple-text",
+  "__p":{
+    "api_level":1,
+    "stream":0, \ # <1>
+    "object_hint":"OpenSRF_Application_Demo_SimpleText",
+    "remote":0,
+    "package":"OpenSRF::Application::Demo::SimpleText", \ # <2>
+    "api_name":"opensrf.simple-text.reverse", \ # <3>
+    "server_class":"opensrf.simple-text",
+    "signature":{ \ # <4>
+      "params":[ \ # <5>
+        {
+          "desc":"The string to reverse",
+          "name":"text",
+          "type":"string"
+        }
+      ],
+      "desc":"Returns the input string in reverse order\n", \ # <6>
+      "return":{ \ # <7>
+        "desc":"Returns the input string in reverse order",
+        "type":"string"
+      }
+    },
+    "method":"text_reverse", \ # <8>
+    "argc":1 \ # <9>
+  }
+}
+--------------------------------------------------------------------------------
+
+<1> `stream` denotes whether the method supports streaming responses or not.
+
+<2> `package` identifies which package or library implements the method.
+
+<3> `api_name` identifies the name of the OpenSRF method.
+
+<4> `signature` is a hash that describes the parameters for the method.
+
+<5> `params` is an array of hashes describing each parameter in the method;
+each parameter has a description (`desc`), name (`name`), and type (`type`).
+
+<6> `desc` is a string that describes the method itself.
+
+<7> `return` is a hash that describes the return value for the method; it
+contains a description of the return value (`desc`) and the type of the
+returned value (`type`).
+
+<8> `method` identifies the name of the function or method in the source
+implementation.
+
+<9> `argc` is an integer describing the minimum number of arguments that
+must be passed to this method.
+
+==== Calling OpenSRF methods from Perl applications ====
+
+To call an OpenSRF method from Perl, you must connect to the OpenSRF service,
+issue the request to the method, and then retrieve the results.
+
+[source,perl]
+--------------------------------------------------------------------------------
+#!/usr/bin/perl
+use strict;
+use OpenSRF::AppSession;
+use OpenSRF::System;
+
+OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml'); # <1>
+
+my $session = OpenSRF::AppSession->create("opensrf.simple-text"); # <2>
+
+print "substring: Accepts a string and a number as input, returns a string\n";
+my $result = $session->request("opensrf.simple-text.substring", "foobar", 3); # <3>
+my $request = $result->gather(); # <4>
+print "Substring: $request\n\n";
+
+print "split: Accepts two strings as input, returns an array of strings\n";
+$request = $session->request("opensrf.simple-text.split", "This is a test", " "); # <5>
+my $output = "Split: [";
+my $element;
+while ($element = $request->recv()) { # <6>
+    $output .= $element->content . ", "; # <7>
+}
+$output =~ s/, $/]/;
+print $output .
"\n\n"; + +print "statistics: Accepts an array of strings as input, returns a hash\n"; +my @many_strings = [ + "First I think I'll have breakfast", + "Then I think that lunch would be nice", + "And then seventy desserts to finish off the day" +]; + +$result = $session->request("opensrf.simple-text.statistics", @many_strings); # <8> +$request = $result->gather(); # <9> +print "Length: " . $result->{'length'} . "\n"; +print "Word count: " . $result->{'word_count'} . "\n"; + +$session->disconnect(); # <10> +-------------------------------------------------------------------------------- + +<1> The `OpenSRF::System->bootstrap_client()` method reads the OpenSRF +configuration information from the indicated file and creates an XMPP client +connection based on that information. + +<2> The `OpenSRF::AppSession->create()` method accepts one argument - the name +of the OpenSRF service to which you want to want to make one or more requests - +and returns an object prepared to use the client connection to make those +requests. + +<3> The `OpenSRF::AppSession->request()` method accepts a minimum of one +argument - the name of the OpenSRF method to which you want to make a request - +followed by zero or more arguments to pass to the OpenSRF method as input +values. This example passes a string and an integer to the +`opensrf.simple-text.substring` method defined by the `opensrf.simple-text` +OpenSRF service. + +<4> The `gather()` method, called on the result object returned by the +`request()` method, iterates over all of the possible results from the result +object and returns a single variable. + +<5> This `request()` call passes two strings to the `opensrf.simple-text.split` +method defined by the `opensrf.simple-text` OpenSRF service and returns (via +`gather()`) a reference to an array of results. + +<6> The `opensrf.simple-text.split()` method is a streaming method that +returns an array of results with one element per `recv()` call on the +result object. 
We could use the `gather()` method to retrieve all of the
+results in a single array reference, but instead we simply iterate over
+the result variable until there are no more results to retrieve.
+
+<7> While the `gather()` convenience method returns only the content of the
+complete set of results for a given request, the `recv()` method returns an
+OpenSRF result object with `status`, `statusCode`, and `content` fields as
+we saw in <<httpResults>>.
+
+<8> This `request()` call passes an array of strings to the
+`opensrf.simple-text.statistics` method defined by the `opensrf.simple-text`
+OpenSRF service.
+
+<9> The result object returns a hash reference via `gather()`. The hash
+contains the `length` and `word_count` keys we defined in the method.
+
+<10> The `OpenSRF::AppSession->disconnect()` method closes the XMPP client
+connection and cleans up resources associated with the session.
+
+=== Accepting and returning more interesting data types ===
+
+Of course, the example of accepting a single string and returning a single
+string is not very interesting. In real life, our applications tend to pass
+around multiple arguments, including arrays and hashes. Fortunately, OpenSRF
+makes that easy to deal with; in Perl, for example, returning a reference to
+the data type does the right thing. In the following example of a method that
+returns a list, we accept two arguments of type string: the string to be split,
+and the delimiter that should be used to split the string.
+
+.Text splitting method - streaming mode
+[source,perl]
+--------------------------------------------------------------------------------
+sub text_split {
+    my $self = shift;
+    my $conn = shift;
+    my $text = shift;
+    my $delimiter = shift || ' ';
+
+    my @split_text = split $delimiter, $text;
+    return \@split_text;
+}
+
+__PACKAGE__->register_method(
+    method => 'text_split',
+    api_name => 'opensrf.simple-text.split'
+);
+--------------------------------------------------------------------------------
+
+We simply return a reference to the list, and OpenSRF does the rest of the work
+for us to convert the data into the language-independent format that is then
+returned to the caller. As a caller of a given method, you must rely on the
+documentation supplied when the method was registered to determine the expected
+data structures - if the developer has added the appropriate documentation.
+
+=== Accepting and returning Evergreen objects ===
+
+OpenSRF is agnostic about objects; its role is to pass JSON back and forth
+between OpenSRF clients and services, and it allows the specific clients and
+services to define their own semantics for the JSON structures. On top of that
+infrastructure, Evergreen offers the fieldmapper: an object-relational mapper
+that provides a complete definition of all objects, their properties, their
+relationships to other objects, the permissions required to create, read,
+update, or delete objects of that type, and the database table or view on which
+they are based.
+
+The Evergreen fieldmapper offers a great deal of convenience for working with
+complex system objects beyond the basic mapping of classes to database
+schemas. Although the result is passed over the wire as a JSON object
+containing the indicated fields, fieldmapper-aware clients then turn those
+JSON objects into native objects with setter / getter methods for each field.
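The accessor-generation trick that fieldmapper-aware clients perform can be sketched in a few lines. This is purely illustrative - it is not the real fieldmapper implementation or API, and the field names used here are invented for the example - but it shows the idea of mapping a `{"__c": hint, "__p": [values]}` payload onto an object with combined getter/setter methods:

```python
class FieldmapperObject:
    """Illustrative sketch only (not the real fieldmapper API): wrap a
    class hint and a list of field values in an object whose attributes
    are accessor methods - call with no arguments to get, one to set."""

    def __init__(self, hint, field_names, values):
        # bypass __getattr__ while setting up internal state
        self.__dict__["_hint"] = hint
        self.__dict__["_data"] = dict(zip(field_names, values))

    def __getattr__(self, name):
        if name not in self._data:
            raise AttributeError(name)
        def accessor(*args):
            if args:                      # setter form: obj.field(value)
                self._data[name] = args[0]
            return self._data[name]       # getter form: obj.field()
        return accessor

# "mous" is a real class hint; the field list here is illustrative.
mous = FieldmapperObject("mous", ["usr", "balance_owed"], [5, "2.50"])
```

A client holding `mous` can then call `mous.usr()` to read the field or `mous.balance_owed("3.00")` to update it, mirroring the setter/getter style described above.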
+
+All of this metadata about Evergreen objects is defined in the
+fieldmapper configuration file (`/openils/conf/fm_IDL.xml`), and access to
+these classes is provided by the `open-ils.cstore`, `open-ils.pcrud`, and
+`open-ils.reporter-store` OpenSRF services which parse the fieldmapper
+configuration file and dynamically register OpenSRF methods for creating,
+reading, updating, and deleting all of the defined classes.
+
+.Example fieldmapper class definition for "Open User Summary"
+[source,xml]
+--------------------------------------------------------------------------------
+<class id="mous" controller="open-ils.cstore open-ils.pcrud"
+ oils_obj:fieldmapper="money::open_user_summary"
+ oils_persist:tablename="money.open_usr_summary"
+ reporter:label="Open User Summary"> <!--1-->
+    <fields oils_persist:primary="usr" oils_persist:sequence=""> <!--2-->
+        <field name="balance_owed" reporter:datatype="money" /> <!--3-->
+        <field name="total_owed" reporter:datatype="money" />
+        <field name="total_paid" reporter:datatype="money" />
+        <field name="usr" reporter:datatype="link"/>
+    </fields>
+    <links>
+        <link field="usr" reltype="has_a" key="id" map="" class="au"/> <!--4-->
+    </links>
+    <permacrud xmlns="http://open-ils.org/spec/opensrf/IDL/permacrud/v1"> <!--5-->
+        <actions>
+            <retrieve permission="VIEW_USER"> <!--6-->
+                <context link="usr" field="home_ou"/> <!--7-->
+            </retrieve>
+        </actions>
+    </permacrud>
+</class>
+--------------------------------------------------------------------------------
+
+<1> The `<class>` element defines the class:
+
+ * The `id` attribute defines the _class hint_ that identifies the class both
+elsewhere in the fieldmapper configuration file, such as in the value of the
+`field` attribute of the `<link>` element, and in the JSON object itself when
+it is instantiated. For example, an "Open User Summary" JSON object would have
+the top level property of `"__c":"mous"`.
+
+ * The `controller` attribute identifies the services that have direct access
+to this class. If `open-ils.pcrud` is not listed, for example, then there is
+no means to directly access members of this class through a public service.
+
+ * The `oils_obj:fieldmapper` attribute defines the name of the Perl
+fieldmapper class that will be dynamically generated to provide setter and
+getter methods for instances of the class.
+
+ * The `oils_persist:tablename` attribute identifies the schema name and table
+name of the database table that stores the data that represents the instances
+of this class. In this case, the schema is `money` and the table is
+`open_usr_summary`.
+
+ * The `reporter:label` attribute defines a human-readable name for the class
+used in the reporting interface to identify the class.
These names are defined
+in English in the fieldmapper configuration file; however, they are extracted
+so that they can be translated and served in the user's language of choice.
+
+<2> The `<fields>` element lists all of the fields that belong to the object.
+
+ * The `oils_persist:primary` attribute identifies the field that acts as the
+primary key for the object; in this case, the field with the name `usr`.
+
+ * The `oils_persist:sequence` attribute identifies the sequence object
+(if any) in this database that provides values for new instances of this class.
+In this case, the primary key is defined by a field that is linked to a different
+table, so no sequence is used to populate these instances.
+
+<3> Each `<field>` element defines a single field with the following attributes:
+
+ * The `name` attribute identifies the column name of the field in the
+underlying database table as well as providing a name for the setter / getter
+method that can be invoked in the JSON or native version of the object.
+
+ * The `reporter:datatype` attribute defines how the reporter should treat
+the contents of the field for the purposes of querying and display.
+
+ * The `reporter:label` attribute can be used to provide a human-readable name
+for each field; without it, the reporter falls back to the value of the `name`
+attribute.
+
+<4> The `<links>` element contains a set of zero or more `<link>` elements,
+each of which defines a relationship between the class being described and
+another class.
+
+ * The `field` attribute identifies the field named in this class that links
+to the external class.
+
+ * The `reltype` attribute identifies the kind of relationship between the
+classes; in the case of `has_a`, each value in the `usr` field is guaranteed
+to have a corresponding value in the external class.
+
+ * The `key` attribute identifies the name of the field in the external
+class to which this field links.
+
+ * The rarely-used `map` attribute identifies a second class to which
+the external class links; it enables this field to define a direct
+relationship to an external class with one degree of separation, to
+avoid having to retrieve all of the linked members of an intermediate
+class just to retrieve the instances from the actual desired target class.
+
+ * The `class` attribute identifies the external class to which this field
+links.
+
+<5> The `<permacrud>` element defines the permissions that must have been
+granted to a user to operate on instances of this class.
+
+<6> The `<retrieve>` element is one of four possible children of the
+`<actions>` element that define the permissions required for each action:
+create, retrieve, update, and delete.
+
+ * The `permission` attribute identifies the name of the permission that must
+have been granted to the user to perform the action.
+
+ * The `contextfield` attribute, if it exists, defines the field in this class
+that identifies the library within the system for which the user must have
+privileges to work. If a user has been granted a given permission, but has not
+been granted privileges to work at a given library, they cannot perform the
+action at that library.
+
+<7> The rarely-used `<context>` element identifies a linked field (`link`
+attribute) in this class which links to an external class that holds the field
+(`field` attribute) that identifies the library within the system for which the
+user must have privileges to work.
+
+When you retrieve an instance of a class, you can ask for the result to
+_flesh_ some or all of the linked fields of that class, so that the linked
+instances are returned embedded directly in your requested instance. In that
+same request you can ask for the fleshed instances to in turn have their linked
+fields fleshed. By bundling all of this into a single request and result
+sequence, you can avoid the network overhead of requiring the client to request
+the base object, then request each linked object in turn.
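For illustration, a fleshed retrieve is requested by passing an options hash alongside the call. The sketch below uses plain Python dictionaries to stand in for the JSON payload; the class hints (`au`, `aou`) and field names are assumptions chosen for the example, not values taken from the class definition above.

```python
# Hypothetical options for a fleshed retrieve: embed the user's linked
# "home_ou" instance, and within it the library's linked "ou_type" --
# two levels of fleshing requested in a single call.
flesh_options = {
    "flesh": 2,                 # how many levels of linked fields to follow
    "flesh_fields": {
        "au": ["home_ou"],      # on the user class, flesh the home_ou link
        "aou": ["ou_type"],     # on the org unit class, flesh ou_type
    },
}

# A client would send these options with the retrieve request; the result
# comes back with the linked instances embedded rather than as bare IDs.
print(flesh_options["flesh"])   # prints 2
```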
+ +You can also iterate over a collection of instances and set the automatically +generated `isdeleted`, `isupdated`, or `isnew` properties to indicate that +the given instance has been deleted, updated, or created respectively. +Evergreen can then act in batch mode over the collection to perform the +requested actions on any of the instances that have been flagged for action. + +=== Returning streaming results === + +In the previous implementation of the `opensrf.simple-text.split` method, we +returned a reference to the complete array of results. For small values being +delivered over the network, this is perfectly acceptable, but for large sets of +values this can pose a number of problems for the requesting client. Consider a +service that returns a set of bibliographic records in response to a query like +"all records edited in the past month"; if the underlying database is +relatively active, that could result in thousands of records being returned as +a single network request. The client would be forced to block until all of the +results are returned, likely resulting in a significant delay, and depending on +the implementation, correspondingly large amounts of memory might be consumed +as all of the results are read from the network in a single block. + +OpenSRF offers a solution to this problem. If the method returns results that +can be divided into separate meaningful units, you can register the OpenSRF +method as a streaming method and enable the client to loop over the results one +unit at a time until the method returns no further results. In addition to +registering the method with the provided name, OpenSRF also registers an additional +method with `.atomic` appended to the method name. The `.atomic` variant gathers +all of the results into a single block to return to the client, giving the caller +the ability to choose either streaming or atomic results from a single method +definition. 
+
+In the following example, the text splitting method has been reimplemented to
+support streaming; very few changes are required:
+
+.Text splitting method - streaming mode
+[source,perl]
+--------------------------------------------------------------------------------
+sub text_split {
+    my $self = shift;
+    my $conn = shift;
+    my $text = shift;
+    my $delimiter = shift || ' ';
+
+    my @split_text = split $delimiter, $text;
+    foreach my $string (@split_text) { # <1>
+        $conn->respond($string);
+    }
+    return undef;
+}
+
+__PACKAGE__->register_method(
+    method   => 'text_split',
+    api_name => 'opensrf.simple-text.split',
+    stream   => 1 # <2>
+);
+--------------------------------------------------------------------------------
+
+<1> Rather than returning a reference to the array, a streaming method loops
+over the contents of the array and invokes the `respond()` method of the
+connection object on each element of the array.
+
+<2> Registering the method as a streaming method instructs OpenSRF to also
+register an atomic variant (`opensrf.simple-text.split.atomic`).
+
+=== Error! Warning! Info! Debug! ===
+As hard as it may be to believe, it is true: applications sometimes do not
+behave in the expected manner, particularly when they are still under
+development. The server language bindings for OpenSRF include integrated
+support for logging messages at the levels of ERROR, WARNING, INFO, DEBUG, and
+the extremely verbose INTERNAL to either a local file or a syslogger
+service. The destination of the log files, and the level of verbosity to be
+logged, are set in the `opensrf_core.xml` configuration file. To add logging to
+our Perl example, we just have to add the `OpenSRF::Utils::Logger` package to
+our list of used Perl modules, then invoke the logger at the desired logging
+level.
+
+You can include many calls to the OpenSRF logger; only those at or above the
+priority of your configured logging level will actually hit the log. 
The following
+example exercises all of the available logging levels in OpenSRF:
+
+[source,perl]
+--------------------------------------------------------------------------------
+use OpenSRF::Utils::Logger;
+my $logger = OpenSRF::Utils::Logger;
+# some code in some function
+{
+    $logger->error("Hmm, something bad DEFINITELY happened!");
+    $logger->warn("Hmm, something bad might have happened.");
+    $logger->info("Something happened.");
+    $logger->debug("Something happened; here are some more details.");
+    $logger->internal("Something happened; here are all the gory details.");
+}
+--------------------------------------------------------------------------------
+
+If you call the mythical OpenSRF method containing the preceding OpenSRF logger
+statements on a system running at the default logging level of INFO, you will
+only see the INFO, WARN, and ERR messages, as follows:
+
+.Results of logging calls at the default level of INFO
+--------------------------------------------------------------------------------
+[2010-03-17 22:27:30] opensrf.simple-text [ERR :5681:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened!
+[2010-03-17 22:27:30] opensrf.simple-text [WARN:5681:SimpleText.pm:278:] Hmm, something bad might have happened.
+[2010-03-17 22:27:30] opensrf.simple-text [INFO:5681:SimpleText.pm:279:] Something happened.
+--------------------------------------------------------------------------------
+
+If you then increase the logging level to INTERNAL (5), the logs will
+contain much more information, as follows:
+
+.Results of logging calls at the level of INTERNAL
+--------------------------------------------------------------------------------
+[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:277:] Hmm, something bad DEFINITELY happened!
+[2010-03-17 22:48:11] opensrf.simple-text [WARN:5934:SimpleText.pm:278:] Hmm, something bad might have happened.
+[2010-03-17 22:48:11] opensrf.simple-text [INFO:5934:SimpleText.pm:279:] Something happened. +[2010-03-17 22:48:11] opensrf.simple-text [DEBG:5934:SimpleText.pm:280:] Something happened; here are some more details. +[2010-03-17 22:48:11] opensrf.simple-text [INTL:5934:SimpleText.pm:281:] Something happened; here are all the gory details. +[2010-03-17 22:48:11] opensrf.simple-text [ERR :5934:SimpleText.pm:283:] Resolver did not find a cache hit +[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:Cache.pm:125:] Stored opensrf.simple-text.test_cache.masaa => "here" in memcached server +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:579:] Coderef for [OpenSRF::Application::Demo::SimpleText::test_cache] has been run +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:586:] A top level Request object is responding de nada +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:Application.pm:190:] Method duration for [opensrf.simple-text.test_cache]: 10.005 +[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:780:] Calling queue_wait(0) +[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:769:] Resending...0 +[2010-03-17 22:48:21] opensrf.simple-text [INTL:5934:AppSession.pm:450:] In send +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:] AppSession sending RESULT to opensrf@private.localhost/_dan-karmic-liblap_1268880489.752154_5943 with threadTrace [1] +[2010-03-17 22:48:21] opensrf.simple-text [DEBG:5934:AppSession.pm:506:] AppSession sending STATUS to opensrf@private.localhost/_dan-karmic-liblap_1268880489.752154_5943 with threadTrace [1] +... +-------------------------------------------------------------------------------- + +To see everything that is happening in OpenSRF, try leaving your logging level +set to INTERNAL for a few minutes - just ensure that you have a lot of free disk +space available if you have a moderately busy system! 
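The filtering rule itself can be sketched as a simple level comparison. The numeric values below are illustrative assumptions for this sketch, not OpenSRF's actual internal constants:

```python
# Each message level is compared against the configured verbosity; only
# messages at or below the configured level number (i.e. at or above its
# priority) are written to the log.
LEVELS = {"ERROR": 1, "WARN": 2, "INFO": 3, "DEBUG": 4, "INTERNAL": 5}

def should_log(message_level, configured_level="INFO"):
    """Return True if a message at message_level reaches the log."""
    return LEVELS[message_level] <= LEVELS[configured_level]

print(should_log("DEBUG"))              # filtered out at the default INFO level
print(should_log("DEBUG", "INTERNAL"))  # everything gets through at INTERNAL
```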
+
+=== Caching results: one secret of scalability ===
+If you have ever used an application that depends on a remote Web service
+outside of your control -- say, if you need to retrieve results from a
+microblogging service -- you know the pain of latency and dependability (or the
+lack thereof). To improve response time in OpenSRF applications, you can take
+advantage of the support offered by the `OpenSRF::Utils::Cache` module for
+communicating with a local instance or cluster of memcached daemons to store
+and retrieve persistent values.
+
+[source,perl]
+--------------------------------------------------------------------------------
+use OpenSRF::Utils::Cache;                                      # <1>
+sub test_cache {
+    my $self = shift;
+    my $conn = shift;
+    my $test_key = shift;
+    my $cache = OpenSRF::Utils::Cache->new('global');           # <2>
+    my $cache_key = "opensrf.simple-text.test_cache.$test_key"; # <3>
+    my $result = $cache->get_cache($cache_key) || undef;        # <4>
+    if ($result) {
+        $logger->info("Resolver found a cache hit");
+        return $result;
+    }
+    sleep 10;                                                   # <5>
+    my $cache_timeout = 300;                                    # <6>
+    $cache->put_cache($cache_key, "here", $cache_timeout);      # <7>
+    return "There was no cache hit.";
+}
+--------------------------------------------------------------------------------
+
+This example:
+
+<1> Imports the OpenSRF::Utils::Cache module
+
+<2> Creates a cache object
+
+<3> Creates a unique cache key based on the OpenSRF method name and
+request input value
+
+<4> Checks to see if the cache key already exists; if so, it immediately
+returns that value
+
+<5> If the cache key does not exist, the code sleeps for 10 seconds to
+simulate a call to a slow remote Web service or an intensive process
+
+<6> Sets a value for the lifetime of the cache key in seconds
+
+<7> When the code has retrieved its value, then it can create the cache
+entry, with the cache key, value to be stored ("here"), and the timeout
+value in seconds to ensure that we do not return stale data on subsequent
+calls
+
+=== 
Initializing the service and its children: child labour ===
+When an OpenSRF service is started, it looks for a procedure called
+`initialize()` to set up any global variables shared by all of the children of
+the service. The `initialize()` procedure is typically used to retrieve
+configuration settings from the `opensrf.xml` file.
+
+An OpenSRF service spawns one or more children to actually do the work
+requested by callers of the service. For every child process an OpenSRF service
+spawns, the child process clones the parent environment and then runs the
+`child_init()` procedure (if any) defined in the OpenSRF service to initialize
+any child-specific settings.
+
+When the OpenSRF service kills a child process, it invokes the `child_exit()`
+procedure (if any) to clean up any resources associated with the child process.
+Similarly, when the OpenSRF service is stopped, it calls the `DESTROY()`
+procedure to clean up any remaining resources.
+
+=== Retrieving configuration settings ===
+The settings for OpenSRF services are maintained in the `opensrf.xml` XML
+configuration file. The structure of the XML document consists of a root
+element `<opensrf>` containing two child elements:
+
+ * `<default>` contains an `<apps>` element describing all
+OpenSRF services running on this system -- see <> -- as
+well as any other arbitrary XML descriptions required for global configuration
+purposes. For example, Evergreen uses this section for email notification and
+inter-library patron privacy settings.
+ * `<hosts>` contains one element per host that participates in
+this OpenSRF system. Each host element must include an `<activeapps>` element
+that lists all of the services to start on this host when the system starts
+up. Each host element can optionally override any of the default settings.
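A skeletal `opensrf.xml` illustrating this structure might look like the following. Treat this as an illustrative sketch (the `version` attribute value and the element contents are assumptions), not a complete configuration:

[source,xml]
--------------------------------------------------------------------------------
<opensrf version='0.0.3'>
  <default>
    <apps>
      <!-- one element per OpenSRF service, holding its settings -->
    </apps>
    <!-- other arbitrary global configuration sections -->
  </default>
  <hosts>
    <localhost>
      <activeapps>
        <!-- one <appname> element per service to start on this host -->
      </activeapps>
      <!-- optional overrides of the default settings -->
    </localhost>
  </hosts>
</opensrf>
--------------------------------------------------------------------------------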
+ +OpenSRF includes a service named `opensrf.settings` to provide distributed +cached access to the configuration settings with a simple API: + + * `opensrf.settings.default_config.get`: accepts zero arguments and returns +the complete set of default settings as a JSON document + * `opensrf.settings.host_config.get`: accepts one argument (hostname) and +returns the complete set of settings, as customized for that hostname, as a +JSON document + * `opensrf.settings.xpath.get`: accepts one argument (an +http://www.w3.org/TR/xpath/[XPath] expression) and returns the portion of +the configuration file that matches the expression as a JSON document + +For example, to determine whether an Evergreen system uses the opt-in +support for sharing patron information between libraries, you could either +invoke the `opensrf.settings.default_config.get` method and parse the +JSON document to determine the value, or invoke the `opensrf.settings.xpath.get` +method with the XPath `/opensrf/default/share/user/opt_in` argument to +retrieve the value directly. + +In practice, OpenSRF includes convenience libraries in all of its client +language bindings to simplify access to configuration values. C offers +osrfConfig.c, Perl offers `OpenSRF::Utils::SettingsClient`, Java offers +`org.opensrf.util.SettingsClient`, and Python offers `osrf.set`. These +libraries locally cache the configuration file to avoid network roundtrips for +every request and enable the developer to request specific values without +having to manually construct XPath expressions. + +== Getting under the covers with OpenSRF == +Now that you have seen that it truly is easy to create an OpenSRF service, we +can take a look at what is going on under the covers to make all of this work +for you. 
+ +=== Get on the messaging bus - safely === +One of the core innovations of OpenSRF was to use the Extensible Messaging and +Presence Protocol (XMPP, more colloquially known as Jabber) as the messaging +bus that ties OpenSRF services together across servers. XMPP is an "XML +protocol for near-real-time messaging, presence, and request-response services" +(http://www.ietf.org/rfc/rfc3920.txt) that OpenSRF relies on to handle most of +the complexity of networked communications. OpenSRF achieves a measure of +security for its services through the use of public and private XMPP domains; +all OpenSRF services automatically register themselves with the private XMPP +domain, but only those services that register themselves with the public XMPP +domain can be invoked from public OpenSRF clients. + +In a minimal OpenSRF deployment, two XMPP users named "router" connect to the +XMPP server, with one connected to the private XMPP domain and one connected to +the public XMPP domain. Similarly, two XMPP users named "opensrf" connect to +the XMPP server via the private and public XMPP domains. When an OpenSRF +service is started, it uses the "opensrf" XMPP user to advertise its +availability with the corresponding router on that XMPP domain; the XMPP server +automatically assigns a Jabber ID (JID) based on the client hostname to each +service's listener process and each connected drone process waiting to carry +out requests. When an OpenSRF router receives a request to invoke a method on a +given service, it connects the requester to the next available listener in the +list of registered listeners for that service. + +The opensrf and router user names, passwords, and domain names, along with the +list of services that should be public, are contained in the `opensrf_core.xml` +configuration file. + +=== Message body format === +OpenSRF was an early adopter of JavaScript Object Notation (JSON). 
While XMPP
+is an XML protocol, the Evergreen developers recognized that the compactness of
+the JSON format offered a significant reduction in bandwidth for the volume of
+messages that would be generated in an application of that size. In addition,
+the ability of languages such as JavaScript, Perl, and Python to generate
+native objects with minimal parsing offered an attractive advantage over
+invoking an XML parser for every message. Instead, the body of the XMPP message
+is a simple JSON structure. For a simple request, like the following example
+that simply reverses a string, it looks like a significant overhead, but we get
+the advantages of locale support and of tracing the request from the requester
+through the listener and responder (drone).
+
+.A request for opensrf.simple-text.reverse("foobar"):
+[source,xml]
+--------------------------------------------------------------------------------
+<message>
+  <thread>1266781414.366573.12667814146288</thread>
+  <body>
+[
+  {"__c":"osrfMessage","__p":
+    {"threadTrace":"1","locale":"en-US","type":"REQUEST","payload":
+      {"__c":"osrfMethod","__p":
+        {"method":"opensrf.simple-text.reverse","params":["foobar"]}
+      }
+    }
+  }
+]
+  </body>
+</message>
+--------------------------------------------------------------------------------
+
+.A response from opensrf.simple-text.reverse("foobar")
+[source,xml]
+--------------------------------------------------------------------------------
+<message>
+  <thread>1266781414.366573.12667814146288</thread>
+  <body>
+[
+  {"__c":"osrfMessage","__p":
+    {"threadTrace":"1","payload":
+      {"__c":"osrfResult","__p":
+        {"status":"OK","content":"raboof","statusCode":200}
+      },"type":"RESULT","locale":"en-US"}
+  },
+  {"__c":"osrfMessage","__p":
+    {"threadTrace":"1","payload":
+      {"__c":"osrfConnectStatus","__p":
+        {"status":"Request Complete","statusCode":205}
+      },"type":"STATUS","locale":"en-US"}
+  }
+]
+  </body>
+</message>
+--------------------------------------------------------------------------------
+
+(The routing attributes of the `<message>` element have been omitted here for
+brevity.)
+
+The content of the `<body>` element of the OpenSRF request and result should
+look familiar; 
they match the structure of the <> that we previously dissected.
+
+=== Registering OpenSRF methods in depth ===
+Let's explore the call to `__PACKAGE__->register_method()`; most of the elements
+of the hash are optional, and for the sake of brevity we omitted them in the
+previous example. As we have seen in the results of the <>, a
+verbose registration method call is recommended to better enable the internal
+documentation. So, for the sake of completeness, here is the set of elements
+that you should pass to `__PACKAGE__->register_method()`:
+
+ * `method`: the name of the procedure in this module that is being registered as an OpenSRF method
+ * `api_name`: the invocable name of the OpenSRF method; by convention, the OpenSRF service name is used as the prefix
+ * `api_level`: (optional) can be used for versioning the methods to allow the use of a deprecated API, but in practical use is always 1
+ * `argc`: (optional) the minimal number of arguments that the method expects
+ * `stream`: (optional) if this argument is set to any value, then the method supports returning multiple values from a single call to subsequent requests, and OpenSRF automatically creates a corresponding method with ".atomic" appended to its name that returns the complete set of results in a single request; streaming methods are useful if you are returning hundreds of records and want to act on the results as they return
+ * `signature`: (optional) a hash describing the method's purpose, arguments, and return value
+ ** `desc`: a description of the method's purpose
+ ** `params`: an array of hashes, each of which describes one of the method arguments
+ *** `name`: the name of the argument
+ *** `desc`: a description of the argument's purpose
+ *** `type`: the data type of
the return value: for example, string, integer, boolean, number, array, or hash
+
+== Evergreen-specific OpenSRF services ==
+
+Evergreen is currently the primary showcase for the use of OpenSRF as an
+application architecture. Evergreen 2.6.0 includes the following
+set of OpenSRF services:
+
+ * `open-ils.acq`: Supports tasks for managing the acquisitions process.
+ * `open-ils.actor`: Supports common tasks for working with user accounts
+   and libraries.
+ * `open-ils.auth`: Supports authentication of Evergreen users.
+ * `open-ils.auth_proxy`: Supports using external services such as LDAP
+   directories to authenticate Evergreen users.
+ * `open-ils.cat`: Supports common cataloging tasks, such as creating,
+   modifying, and merging bibliographic and authority records.
+ * `open-ils.circ`: Supports circulation tasks such as checking out items and
+   calculating due dates.
+ * `open-ils.collections`: Supports tasks to assist collections services for
+   contacting users with outstanding fines above a certain threshold.
+ * `open-ils.cstore`: Supports unrestricted access to Evergreen fieldmapper
+   objects. This is a private service.
+ * `open-ils.fielder`
+ * `open-ils.justintime`: Supports tasks for determining if an action/trigger
+   event is still valid.
+ * `open-ils.pcrud`: Supports access to Evergreen fieldmapper objects,
+   restricted by staff user permissions. This is a private service.
+ * `open-ils.permacrud`: Supports access to Evergreen fieldmapper objects,
+   restricted by staff user permissions. This is a private service.
+ * `open-ils.reporter`: Supports the creation and scheduling of reports.
+ * `open-ils.reporter-store`: Supports access to Evergreen fieldmapper objects
+   for the reporting service. This is a private service.
+ * `open-ils.resolver`: Supports tasks for integrating with an OpenURL resolver.
+ * `open-ils.search`: Supports searching across bibliographic records,
+   authority records, serial records, Z39.50 sources, and ZIP codes.
+
+ * `open-ils.serial`: Supports tasks for serials management.
+ * `open-ils.storage`: A deprecated method of providing access to Evergreen
+   fieldmapper objects. Implemented in Perl, this service has largely been
+   replaced by the much faster C-based `open-ils.cstore` service.
+ * `open-ils.supercat`: Supports transforms of MARC records into other formats,
+   such as MODS, as well as providing Atom and RSS feeds and SRU access.
+ * `open-ils.trigger`: Supports event-based triggers for actions such as
+   overdue and holds available notification emails.
+ * `open-ils.url_verify`: Supports tasks for validating URLs.
+ * `open-ils.vandelay`: Supports the import and export of batches of
+   bibliographic and authority records.
+ * `opensrf.settings`: Supports communicating opensrf.xml settings to other services.
+
+Of some interest is that the `open-ils.reporter-store` and `open-ils.cstore`
+services have identical implementations. Surfacing them as separate services
+enables a deployer of Evergreen to ensure that the reporting service does not
+interfere with the performance-critical `open-ils.cstore` service. One can also
+direct the reporting service to a read-only database replica to, again, avoid
+interference with `open-ils.cstore`, which must write to the master database.
+
+There are only a few significant services that are not built on OpenSRF, such
+as the SIP and Z39.50 servers. These services implement
+different protocols and build on existing daemon architectures (Simple2ZOOM
+for Z39.50), but still rely on the other OpenSRF services to provide access
+to the Evergreen data. The non-OpenSRF services are reasonably self-contained
+and can be deployed on different servers to deliver the same sort of deployment
+flexibility as OpenSRF services, but have the disadvantage of not being
+integrated into the same configuration and control infrastructure as the
+OpenSRF services.
+ +== Evergreen after one year: reflections on OpenSRF == + +http://projectconifer.ca[Project Conifer] has been live on Evergreen for just +over a year now, and as one of the primary technologists I have had to work +closely with the OpenSRF infrastructure during that time. As such, I am in +a position to identify some of the strengths and weaknesses of OpenSRF based +on our experiences. + +=== Strengths of OpenSRF === + +As a service infrastructure, OpenSRF has been remarkably reliable. We initially +deployed Evergreen on an unreleased version of both OpenSRF and Evergreen due +to our requirements for some functionality that had not been delivered in a +stable release at that point in time, and despite this risky move we suffered +very little unplanned downtime in the opening months. On July 27, 2009 we +moved to a newer (but still unreleased) version of the OpenSRF and Evergreen +code, and began formally tracking our downtime. Since then, we have achieved +more than 99.9% availability - including scheduled downtime for maintenance. +This compares quite favourably to the maximum of 75% availability that we were +capable of achieving on our previous library system due to the nightly downtime +that was required for our backup process. The OpenSRF "maximum request" +configuration parameter for each service that kills off drone processes after +they have served a given number of requests provides a nice failsafe for +processes that might otherwise suffer from a memory leak or hung process. It +also helps that when we need to apply an update to a Perl service that is +running on multiple servers, we can apply the updated code, then restart the +service on one server at a time to avoid any downtime. + +As promised by the OpenSRF infrastructure, we have also been able to tune our +cluster of servers to provide better performance. 
For example, we were able to
+change the maximum number of concurrent processes for our database services
+when we noticed that we were seeing a performance bottleneck with database
+access. Making a configuration change go live simply requires you to restart
+the `opensrf.settings` service to pick up the configuration change, then
+restart the affected service on each of your servers. We were also able to
+turn off some of the less-used OpenSRF services, such as
+`open-ils.collections`, on one of our servers to devote more resources on that
+server to the more frequently used services and other performance-critical
+processes such as Apache.
+
+The support for logging and caching that is built into OpenSRF has been
+particularly helpful with the development of a custom service for SFX holdings
+integration into our catalogue. Once I understood how OpenSRF works, most of
+the effort required to build that SFX integration service was spent on figuring
+out how to properly invoke the SFX API to display human-readable holdings.
+Adding a new OpenSRF service and registering several new methods for the
+service was relatively easy. The support for directing log messages to syslog
+in OpenSRF has also been a boon for both development and debugging when
+problems arise in a cluster of five servers; we direct all of our log messages
+to a single server where we can inspect the complete set of messages for the
+entire cluster in context, rather than trying to piece them together across
+servers.
+
+=== Weaknesses ===
+
+The primary weakness of OpenSRF is the lack of formal or informal
+documentation. 
There are many frequently asked questions on the
+Evergreen mailing lists and IRC channel that indicate that some of the people
+running Evergreen or trying to run Evergreen have not been able to find
+documentation to help them understand, even at a high level, how the OpenSRF
+Router and services work with XMPP and the Apache Web server to provide a
+working Evergreen system. Also, over the past few years several developers
+have indicated an interest in developing Ruby and PHP bindings for OpenSRF, but
+the efforts so far have resulted in no working code. Without a formal
+specification, clearly annotated examples, and unit tests for the major OpenSRF
+communication use cases that could be ported to the new language as a base set
+of expectations for a working binding, the hurdles for a developer new to
+OpenSRF are significant. As a result, Evergreen integration efforts with
+popular frameworks like Drupal, Blacklight, and VuFind fall back to the best
+practical option for a developer with limited time -- database-level
+integration -- which has the unfortunate side effect of being much more likely
+to break after an upgrade.
+
+In conjunction with the lack of documentation that makes it hard to get started
+with the framework, a disincentive for new developers to contribute to OpenSRF
+itself is the lack of integrated unit tests. For a developer to contribute a
+significant, non-obvious patch to OpenSRF, they need to manually run through
+various (undocumented, again) use cases to try to ensure that the patch
+introduces no unanticipated side effects. The same problems hold for Evergreen
+itself, although the
+http://git.evergreen-ils.org/?p=working/random.git;a=shortlog;h=refs/heads/collab/berick/constrictor[Constrictor] stress-testing
+framework offers a way of performing some automated system testing and
+performance testing.
+
+These weaknesses could be relatively easily overcome through contributions
+from people with the right skill sets. 
This article arguably
+offers a small set of clear examples at both the networking and application
+layer of OpenSRF. A technical writer who understands OpenSRF could contribute a
+formal specification to the project. With a formal specification at their
+disposal, a quality assurance expert could create an automated test harness and
+a basic set of unit tests that could be incrementally extended to provide more
+coverage over time. If one or more continuous integration environments are set
+up to track the various OpenSRF branches of interest, then the OpenSRF
+community would have immediate feedback on build quality. Once a unit testing
+framework is in place, more developers might be willing to develop and
+contribute patches as they could sanity check their own code without an intense
+effort before exposing it to their peers.
+
+== Summary ==
+In this article, I attempted to provide both a high-level and detailed overview
+of how OpenSRF works, how to build and deploy new OpenSRF services, how to make
+requests to OpenSRF methods from OpenSRF clients or over HTTP, and why you
+should consider it a possible infrastructure for building your next
+high-performance system that requires the capability to scale out. In addition,
+I surveyed the Evergreen services built on OpenSRF and reflected on the
+strengths and weaknesses of the platform based on the experiences of Project
+Conifer after a year in production, with some thoughts about areas where the
+right application of skills could make a significant difference to the Evergreen
+and OpenSRF projects.
== Appendix: Python client ==

Following is a Python client that makes the same OpenSRF calls as the Perl
client:

[source, python]
--------------------------------------------------------------------------------
include::example$python_client.py[]
--------------------------------------------------------------------------------

NOTE: Python's `dnspython` module refuses to read `/etc/resolv.conf`, so to
access hostnames that are not served up via DNS, such as the extremely common
case of `localhost`, you may need to install a package like `dnsmasq` to act
as a local DNS server for those hostnames.

// vim: set syntax=asciidoc:
diff --git a/docs/modules/development/pages/introduction.adoc b/docs/modules/development/pages/introduction.adoc
new file mode 100644
index 0000000000..8fd3a0a5de
--- /dev/null
+++ b/docs/modules/development/pages/introduction.adoc
@@ -0,0 +1,5 @@
= Introduction =
:toc:
Developers can use this part to learn more about the programming languages,
communication protocols, and standards used in Evergreen.

diff --git a/docs/modules/development/pages/perl_client.pl b/docs/modules/development/pages/perl_client.pl
new file mode 100644
index 0000000000..7a47232242
--- /dev/null
+++ b/docs/modules/development/pages/perl_client.pl
@@ -0,0 +1,40 @@
#!/usr/bin/perl
use strict;
use OpenSRF::AppSession;
use OpenSRF::System;
use Data::Dumper;

OpenSRF::System->bootstrap_client(config_file => '/openils/conf/opensrf_core.xml');

my $session = OpenSRF::AppSession->create("opensrf.simple-text");

print "substring: Accepts a string and a number as input, returns a string\n";
my $request = $session->request("opensrf.simple-text.substring", "foobar", 3);

my $response;
while ($response = $request->recv()) {
    print "Substring: " . $response->content .
"\n\n"; +} + +print "split: Accepts two strings as input, returns an array of strings\n"; +$request = $session->request("opensrf.simple-text.split", "This is a test", " ")->gather(); +my $output = "Split: ["; +foreach my $element (@$request) { + $output .= "$element, "; +} +$output =~ s/, $/]/; +print $output . "\n\n"; + +print "statistics: Accepts an array of strings as input, returns a hash\n"; +my @many_strings = [ + "First I think I'll have breakfast", + "Then I think that lunch would be nice", + "And then seventy desserts to finish off the day" +]; + +$request = $session->request("opensrf.simple-text.statistics", @many_strings)->gather(); +print "Length: " . $request->{'length'} . "\n"; +print "Word count: " . $request->{'word_count'} . "\n"; + +$session->disconnect(); + diff --git a/docs/modules/development/pages/pgtap.adoc b/docs/modules/development/pages/pgtap.adoc new file mode 100644 index 0000000000..0b8a15677c --- /dev/null +++ b/docs/modules/development/pages/pgtap.adoc @@ -0,0 +1,37 @@ += Developing with pgTAP tests = +:toc: + +== Setting up pgTAP on your development server == + +Currently, Evergreen pgTAP tests expect a version of pgTAP (0.93) +that is not yet available in the packages for most Linux distributions. +Therefore, you will have to install pgTAP from source as follows: + +. Download, make, and install pgTAP on your database server. pgTAP can + be downloaded from http://pgxn.org/dist/pgtap/ and the instructions + for building and installing the extension are available from + http://pgtap.org/documentation.html + +. Create the pgTAP extension in your Evergreen database. 
Using `psql`, + connect to your Evergreen database and then issue the command: ++ +[source,sql] +------------------------------------------------------------------------------ +CREATE EXTENSION pgtap; +------------------------------------------------------------------------------ + +== Running pgTAP tests == +The pgTAP tests can be found in subdirectories of `Open-ILS/src/sql/Pg/` +as follows: + +* `t`: contains pgTAP unit tests that can be run on a freshly installed + Evergreen database +* `live_t`: contains pgTAP unit tests meant to be run on an Evergreen + database that also has had the "concerto" sample data loaded on it + +To run the pgTAP unit and regression tests, use the `pg_prove` command. +For example, from the Evergreen source directory, you can issue the +command: +`pg_prove -U evergreen Open-ILS/src/sql/Pg/t Open-ILS/src/sql/Pg/t/regress` + + diff --git a/docs/modules/development/pages/support_scripts.adoc b/docs/modules/development/pages/support_scripts.adoc new file mode 100644 index 0000000000..04e993cb36 --- /dev/null +++ b/docs/modules/development/pages/support_scripts.adoc @@ -0,0 +1,401 @@ += Support Scripts = +:toc: + +Various scripts are included with Evergreen in the `/openils/bin/` directory +(and in the source code in `Open-ILS/src/support-scripts` and +`Open-ILS/src/extras`). Some of them are used during +the installation process, such as `eg_db_config`, while others are usually +run as cron jobs for routine maintenance, such as `fine_generator.pl` and +`hold_targeter.pl`. Others are useful for less frequent needs, such as the +scripts for importing/exporting MARC records. You may explore these scripts +and adapt them for your local needs. You are also welcome to share your +improvements or ask any questions on the +http://evergreen-ils.org/communicate/[Evergreen IRC channel or email lists]. + +Here is a summary of the most commonly used scripts. The script name links +to more thorough documentation, if available. 
 * action_trigger_aggregator.pl
 -- Groups together event output for already processed events. Useful for
 creating files that contain data from a group of events, such as a CSV
 file with all the overdue data for one day.
 * xref:admin:actiontriggers_process.adoc#processing_action_triggers[action_trigger_runner.pl]
 -- Useful for creating events for specified hooks and running pending events
 * authority_authority_linker.pl
 -- Links reference headings in authority records to main entry headings
 in other authority records. Should be run at least once a day (only for
 changed records).
 * xref:#authority_control_fields[authority_control_fields.pl]
 -- Links bibliographic records to the best matching authority record.
 Should be run at least once a day (only for changed records).
 You can accomplish this by running _authority_control_fields.pl --days-back=1_
 * autogen.sh
 -- Generates web files used by the OPAC, especially files related to
 organization unit hierarchy, fieldmapper IDL, locales selection,
 facet definitions, compressed JS files, and the related cache key
 * clark-kent.pl
 -- Used to start and stop the reporter (which runs scheduled reports)
 * xref:installation:server_installation.adoc#creating_the_evergreen_database[eg_db_config]
 -- Creates the database and schema, updates config files, and sets the
 Evergreen administrator username and password
 * fine_generator.pl
 * hold_targeter.pl
 * xref:#importing_authority_records_from_command_line[marc2are.pl]
 -- Converts authority records from MARC format to Evergreen objects
 suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
 * marc2bre.pl
 -- Converts bibliographic records from MARC format to Evergreen objects
 suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
 * marc2sre.pl
 -- Converts serial records from MARC format to Evergreen objects
 suitable for importing via pg_loader.pl (or parallel_pg_loader.pl)
 * xref:#marc_export[marc_export]
 --
Exports authority, bibliographic, and serial holdings records into + any of these formats: USMARC, UNIMARC, XML, BRE, ARE + * osrf_control + -- Used to start, stop and send signals to OpenSRF services + * parallel_pg_loader.pl + -- Uses the output of marc2bre.pl (or similar tools) to generate the SQL + for importing records into Evergreen in a parallel fashion + +[#authority_control_fields] + +== authority_control_fields: Connecting Bibliographic and Authority records == + +indexterm:[authority control] + +This script matches headings in bibliographic records to the appropriate +authority records. When it finds a match, it will add a subfield 0 to the +matching bibliographic field. + +Here is how the matching works: + +[options="header",cols="1,1,3"] +|========================================================= +|Bibliographic field|Authority field it matches|Subfields that it examines + +|100|100|a,b,c,d,f,g,j,k,l,n,p,q,t,u +|110|110|a,b,c,d,f,g,k,l,n,p,t,u +|111|111|a,c,d,e,f,g,j,k,l,n,p,q,t,u +|130|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t +|600|100|a,b,c,d,f,g,h,j,k,l,m,n,o,p,q,r,s,t,v,x,y,z +|610|110|a,b,c,d,f,g,h,k,l,m,n,o,p,r,s,t,v,w,x,y,z +|611|111|a,c,d,e,f,g,h,j,k,l,n,p,q,s,t,v,x,y,z +|630|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t,v,x,y,z +|648|148|a,v,x,y,z +|650|150|a,b,v,x,y,z +|651|151|a,v,x,y,z +|655|155|a,v,x,y,z +|700|100|a,b,c,d,f,g,j,k,l,n,p,q,t,u +|710|110|a,b,c,d,f,g,k,l,n,p,t,u +|711|111|a,c,d,e,f,g,j,k,l,n,p,q,t,u +|730|130|a,d,f,g,h,j,k,m,n,o,p,r,s,t +|751|151|a,v,x,y,z +|800|100|a,b,c,d,e,f,g,j,k,l,n,p,q,t,u,4 +|830|130|a,d,f,g,h,k,l,m,n,o,p,r,s,t +|========================================================= + + +[#marc_export] + +== marc_export: Exporting Bibliographic Records into MARC files == + +indexterm:[marc_export] +indexterm:[MARC records,exporting,using the command line] + +The following procedure explains how to export Evergreen bibliographic +records into MARC files using the *marc_export* support script. 
All steps
should be performed by the `opensrf` user from your Evergreen server.

[NOTE]
Processing time for exporting records depends on several factors, such as
the number of records you are exporting. If you are exporting a large
number of records, it is recommended that you divide the list of record IDs
(`records.txt`) into files of a manageable size.

 . Create a text file list of the bibliographic record IDs you would like
to export from Evergreen. One way to do this is using SQL:
+
[source,sql]
----
SELECT DISTINCT bre.id FROM biblio.record_entry AS bre
    JOIN asset.call_number AS acn ON acn.record = bre.id
    WHERE bre.deleted='false' and owning_lib=101 \g /home/opensrf/records.txt;
----
+
This query creates a file called `records.txt` containing a column of
distinct IDs of items owned by the organizational unit with the ID 101.

 . Navigate to the support-scripts folder:
+
----
cd /home/opensrf/Evergreen-ILS*/Open-ILS/src/support-scripts/
----

 . Run *marc_export*, using the ID file you created in step 1 to define which
 records to export. The following example exports the records into MARCXML format.
+
----
cat /home/opensrf/records.txt | ./marc_export --store -i -c /openils/conf/opensrf_core.xml \
    -x /openils/conf/fm_IDL.xml -f XML --timeout 5 > exported_files.xml
----

[NOTE]
====================
`marc_export` does not output progress as it executes.
====================

=== Options ===

The *marc_export* support script includes several options. You can find a complete list
by running `./marc_export -h`. A few key options are also listed below:

==== --descendants and --library ====

The `marc_export` script has two related options, `--descendants` and
`--library`. Both options take one argument: an organizational unit.

The `--library` option will export records with holdings at the specified
organizational unit only.
By default, this only includes physical holdings,
not electronic ones (also known as located URIs).

The `--descendants` option works much like the `--library` option
except that it is aware of the organizational unit tree and will export records with
holdings at the specified organizational unit and all of its descendants.
This is handy if you want to export the records for all of the branches
of a system. You can do that by specifying this option and the system's
shortname, instead of specifying multiple `--library` options for each branch.

Both the `--library` and `--descendants` options can be repeated.
All of the specified org units and their descendants will be included
in the output. You can also combine `--library` and `--descendants`
options when necessary.

==== --items ====

The `--items` option will add an 852 field for every relevant item to the MARC
record. This 852 field includes the following information:

[options="header",cols="2,3"]
|===================================
|Subfield |Contents
|$b (occurrence 1) |Call number owning library shortname
|$b (occurrence 2) |Item circulating library shortname
|$c |Shelving location
|$g |Circulation modifier
|$j |Call number
|$k |Call number prefix
|$m |Call number suffix
|$p |Barcode
|$s |Status
|$t |Copy number
|$x |Miscellaneous item information
|$y |Price
|===================================


==== --since ====

You can use the `--since` option to export records modified after a certain date and time.

==== --store ====

By default, marc_export will use the reporter storage service, which should
work in most cases. But if you have a separate reporter database and you
know you want to talk directly to your main production database, then you
can set the `--store` option to `cstore` or `storage`.

==== --uris ====
The `--uris` option (short form: `-u`) allows you to export records with
located URIs (i.e. electronic resources).
When used by itself, it will export
only records that have located URIs. When used in conjunction with `--items`,
it will add records with located URIs but no items/copies to the output.
If combined with a `--library` or `--descendants` option, this option will
limit its output to those records with URIs at the designated libraries. The
best way to use this option is in combination with `--items` and one of the
`--library` or `--descendants` options to export *all* of a library's
holdings, both physical and electronic.

[#pingest_pl]

== Parallel Ingest with pingest.pl ==

indexterm:[pingest.pl]
indexterm:[MARC records,importing,using the command line]

A program named pingest.pl allows fast bibliographic record
ingest. It performs ingest in parallel so that multiple batches can
be done simultaneously. It operates by splitting the records to be
ingested into batches and running all of the ingest methods on each
batch. You may pass in options to control how many batches are run at
the same time, how many records there are per batch, and which ingest
operations to skip.

NOTE: The browse ingest is presently done in a single process over all
of the input records, as it cannot run in parallel with itself. It
does, however, run in parallel with the other ingests.

=== Command Line Options ===

pingest.pl accepts the following command line options:

--host::
    The server where PostgreSQL runs (either host name or IP address).
    The default is read from the PGHOST environment variable or
    "localhost."

--port::
    The port that PostgreSQL listens to on host. The default is read
    from the PGPORT environment variable or 5432.

--db::
    The database to connect to on the host. The default is read from
    the PGDATABASE environment variable or "evergreen."

--user::
    The username for database connections. The default is read from
    the PGUSER environment variable or "evergreen."
+ +--password:: + The password for database connections. The default is read from + the PGPASSWORD environment variable or "evergreen." + +--batch-size:: + Number of records to process per batch. The default is 10,000. + +--max-child:: + Max number of worker processes (i.e. the number of batches to + process simultaneously). The default is 8. + +--skip-browse:: +--skip-attrs:: +--skip-search:: +--skip-facets:: +--skip-display:: + Skip the selected reingest component. + +--attr:: + This option allows the user to specify which record attributes to reingest. +It can be used one or more times to specify one or more attributes to +ingest. It can be omitted to reingest all record attributes. This +option is ignored if the `--skip-attrs` option is used. ++ +The `--attr` option is most useful after doing something specific that +requires only a partial ingest of records. For instance, if you add a +new language to the `config.coded_value_map` table, you will want to +reingest the `item_lang` attribute on all of your records. The +following command line will do that, and only that, ingest: ++ +---- +$ /openils/bin/pingest.pl --skip-browse --skip-search --skip-facets \ + --skip-display --attr=item_lang +---- + +--rebuild-rmsr:: + This option will rebuild the `reporter.materialized_simple_record` +(rmsr) table after the ingests are complete. ++ +This option might prove useful if you want to rebuild the table as +part of a larger reingest. 
If all you wish to do is to rebuild the
rmsr table, then it would be just as simple to connect to the database
server and run the following SQL:
+
[source,sql]
----
SELECT reporter.refresh_materialized_simple_record();
----




[#importing_authority_records_from_command_line]
== Importing Authority Records from Command Line ==

indexterm:[marc2are.pl]
indexterm:[pg_loader.pl]
indexterm:[MARC records,importing,using the command line]

The major advantages of the command line approach are its speed and its
convenience for system administrators who can perform bulk loads of
authority records in a controlled environment. For alternate instructions,
see the cataloging manual.

 . Run *marc2are.pl* against the authority records, specifying the user
name, password, and MARC type (USMARC or XML). Use `STDOUT` redirection to
either pipe the output directly into the next command or redirect it into an
output file for inspection. For example, to process a file with authority
records in MARCXML format named `auth_small.xml` using the default user name
and password, and directing the output into a file named `auth.are`:
+
----
cd Open-ILS/src/extras/import/
perl marc2are.pl --user admin --pass open-ils --marctype XML auth_small.xml > auth.are
----
+
[NOTE]
The MARC type will default to USMARC if the `--marctype` option is not specified.

 . Run *parallel_pg_loader.pl* to generate the SQL necessary for importing the
authority records into your system. This script will create files in your
current directory with filenames like `pg_loader-output.are.sql` and
`pg_loader-output.sql` (which runs the previous SQL file). To continue with the
previous example by processing our new `auth.are` file:
+
----
cd Open-ILS/src/extras/import/
perl parallel_pg_loader.pl --auto are --order are auth.are
----
+
[TIP]
To save time for very large batches of records, you could simply pipe the
output of *marc2are.pl* directly into *parallel_pg_loader.pl*.

 .
Load the authority records from the SQL file that you generated in the
last step into your Evergreen database using the psql tool. Assuming the
default user name, host name, and database name for an Evergreen instance,
that command looks like:
+
----
psql -U evergreen -h localhost -d evergreen -f pg_loader-output.sql
----

== Juvenile-to-adult batch script ==

The `juv_to_adult.srfsh` batch script is responsible for toggling a patron
from juvenile to adult. It should be set up as a cron job.

This script changes patrons to adult when they reach the age value set in the
library setting named "Juvenile Age Threshold" (`global.juvenile_age_threshold`).
When no library setting value is present at a given patron's home library, the
value passed in to the script will be used as a default.

== MARC Stream Importer ==

indexterm:[MARC records,importing,using the command line]

The MARC Stream Importer can import authority records or bibliographic records.
A single running instance of the script can import either type of record, based
on the record leader.

This support script has its own configuration file, _marc_stream_importer.conf_,
which includes settings related to logs, ports, users, and access control.

By default, _marc_stream_importer.pl_ will typically be located in the
_/openils/bin_ directory. _marc_stream_importer.conf_ will typically be located
in _/openils/conf_.
+ +The importer is even more flexible than the staff client import, including the +following options: + + * _--bib-auto-overlay-exact_ and _--auth-auto-overlay-exact_: overlay/merge on +exact 901c matches + * _--bib-auto-overlay-1match_ and _--auth-auto-overlay-1match_: overlay/merge +when exactly one match is found + * _--bib-auto-overlay-best-match_ and _--auth-auto-overlay-best-match_: +overlay/merge on best match + * _--bib-import-no-match_ and _--auth-import-no-match_: import when no match +is found + +One advantage to using this tool instead of the staff client Import interface +is that the MARC Stream Importer can load a group of files at once. + diff --git a/docs/modules/development/pages/updating_translations_launchpad.adoc b/docs/modules/development/pages/updating_translations_launchpad.adoc new file mode 100644 index 0000000000..9b177395f9 --- /dev/null +++ b/docs/modules/development/pages/updating_translations_launchpad.adoc @@ -0,0 +1,52 @@ += Updating translations using Launchpad = +:toc: + +This document describes how to update the translations in an Evergreen branch +by pulling them from Launchpad, as well as update the files to be translated +in Launchpad by updating the POT files in the Evergreen master branch. + +== Prerequisites == +You must install all of the Python prerequisites required for building +translations, per +http://evergreen-ils.org/dokuwiki/doku.php?id=evergreen-admin:customizations:i18n + +* https://bitbucket.org/izi/polib/wiki/Home[polib] +* http://translate.sourceforge.net[translate-toolkit] +* http://pypi.python.org/pypi/python-Levenshtein/[levenshtein] +* http://pypi.python.org/pypi/setuptools[setuptools] +* http://pypi.python.org/pypi/simplejson/[simplejson] +* http://lxml.de/[lxml] + +== Updating the translations == + +. 
Check out the latest translations from Launchpad by branching the Bazaar +repository: ++ +[source,bash] +------------------------------------------------------------------------------ +bzr branch lp:~denials/evergreen/translation-export +------------------------------------------------------------------------------ ++ +This creates a directory called "translation-export". ++ +. Ensure you have an updated Evergreen release branch. +. Run the `build/i18n/scripts/update_pofiles` script to copy the translations + into the right place and avoid any updates that are purely metadata (dates + generated, etc). +. Commit the lot! And backport to whatever release branches need the updates. +. Build updated POT files: ++ +[source,bash] +------------------------------------------------------------------------------ +cd build/i18n +make newpot +------------------------------------------------------------------------------ ++ +This will extract all of the strings from the latest version of the files in +Evergreen. ++ +. (This part needs automation): Then, via the magic of `git diff` and `git add`, +go through all of the changed files and determine which ones actually have +string changes. Recommended approach is to re-run `git diff` after each +`git add`. +. Commit the updated POT files and backport to the pertinent release branches. 
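The manual `git diff` / `git add` triage described above can be sketched as a
small shell helper. This is our illustrative sketch, not part of the Evergreen
build system: the function names are ours, and it relies on the assumption that
a changed `msgid` line signals a real string change while metadata-only churn
(such as `POT-Creation-Date`) never touches `msgid` lines:

```shell
# Stage a .pot file only when its diff contains a changed msgid line,
# and revert files whose only churn is header metadata.
pot_has_string_changes() {
    # -U0 limits the diff to changed lines only; the empty header msgid ("")
    # never matches because the pattern requires at least one character.
    git diff -U0 -- "$1" | grep -q '^[+-]msgid "..*"'
}

triage_pot_files() {
    for f in $(git diff --name-only -- '*.pot'); do
        if pot_has_string_changes "$f"; then
            git add "$f"            # keep: real string change
        else
            git checkout -- "$f"    # discard: metadata-only update
        fi
    done
}
```

Running `triage_pot_files` from the repository root after `make newpot` would
leave only the POT files with genuine string changes staged for commit.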
diff --git a/docs/modules/installation/_attributes.adoc b/docs/modules/installation/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/installation/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/installation/nav.adoc b/docs/modules/installation/nav.adoc new file mode 100644 index 0000000000..ec0db099ec --- /dev/null +++ b/docs/modules/installation/nav.adoc @@ -0,0 +1,6 @@ +* xref:installation:introduction.adoc[Software Installation] +** xref:installation:system_requirements.adoc[System Requirements] +** xref:installation:server_installation.adoc[Installing the Evergreen server] +** xref:installation:server_upgrade.adoc[Upgrading the Evergreen Server] +** xref:installation:edi_setup.adoc[Setting Up EDI Acquisitions] + diff --git a/docs/modules/installation/pages/_attributes.adoc b/docs/modules/installation/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/installation/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/installation/pages/edi_setup.adoc b/docs/modules/installation/pages/edi_setup.adoc new file mode 100644 index 0000000000..9b5bed17f4 --- /dev/null +++ b/docs/modules/installation/pages/edi_setup.adoc @@ -0,0 +1,202 @@ += Setting Up EDI Acquisitions = +:toc: + +== Introduction == + +Electronic Data Interchange (EDI) is used to exchange information between +participating vendors and Evergreen. This chapter contains technical +information for installation and configuration of the components necessary +to run EDI Acquisitions for Evergreen. + +== Installation == + +=== Install EDI Translator === + +The EDI Translator is used to convert data into EDI format. 
It runs
on localhost and listens on port 9191 by default. This is controlled via
the edi_webrick.cnf file located in the edi_translator directory. It should
not be necessary to edit this configuration if you install the EDI Translator
on the same server used for running Action/Trigger events.

[NOTE]
If you are running Evergreen with a multi-server configuration, make sure
to install the EDI Translator on the same server used for Action/Trigger event
generation.

.Steps for Installing

1. As the *opensrf* user, copy the EDI Translator code found in
   Open-ILS/src/edi_translator to somewhere accessible
   (for example, /openils/var/edi):
+
[source, bash]
--------------------------------------------------
cp -r Open-ILS/src/edi_translator /openils/var/edi
--------------------------------------------------
2. Navigate to where you have saved the code to begin the next step:
+
[source, bash]
-------------------
cd /openils/var/edi
-------------------
3. Next, as the *root* user (or a user with sudo rights), install the
   dependencies by running "install.sh". This performs some apt-get routines
   to install the code needed for the EDI Translator to function.
   (Note: subversion must be installed first.)
+
[source, bash]
-----------
./install.sh
-----------
4. Now we're ready to start "edi_webrick.bash", the script that calls the
   Ruby code to translate EDI. This script needs to be running for EDI to
   function, so please take appropriate measures to ensure it starts again
   after reboots, upgrades, and so on. As the *opensrf* user:
+
[source, bash]
-----------------
./edi_webrick.bash
-----------------
5. You can check to see if the EDI Translator is running.
 * Using the command "ps aux | grep edi" should show you output similar to
   the following if the script is running properly:
+
[source, bash]
------------------------------------------------------------------------------------------
root     30349  0.8  0.1  52620 10824 pts/0    S    13:04   0:00 ruby ./edi_webrick.rb
------------------------------------------------------------------------------------------
 * To shut down the EDI Translator you can use something like pkill (assuming
   no other ruby processes are running on that server):
+
[source, bash]
-----------------------
kill -INT $(pgrep ruby)
-----------------------

=== Install EDI Scripts ===

The EDI scripts are "edi_pusher.pl" and "edi_fetcher.pl" and are used to
"push" and "fetch" EDI messages for configured EDI accounts.

1. As the *opensrf* user, copy edi_pusher.pl and edi_fetcher.pl from
   Open-ILS/src/support-scripts into /openils/bin:
+
[source, bash]
--------------------------------------------------
cp Open-ILS/src/support-scripts/edi_pusher.pl /openils/bin
cp Open-ILS/src/support-scripts/edi_fetcher.pl /openils/bin
--------------------------------------------------
2. Set up the edi_pusher.pl and edi_fetcher.pl scripts to run as cron jobs
   in order to regularly push and receive EDI messages.
 * Add the following entries to the opensrf user's crontab:
+
[source, bash]
-----------------------------------------------------------------------
10 * * * * cd /openils/bin && /usr/bin/perl ./edi_pusher.pl > /dev/null
0 1 * * * cd /openils/bin && /usr/bin/perl ./edi_fetcher.pl > /dev/null
-----------------------------------------------------------------------
 * The example for edi_pusher.pl sets the script to run at
   10 minutes past the hour, every hour.
 * The example for edi_fetcher.pl sets the script to run at
   1 AM every night.

[NOTE]
You may choose to run the EDI scripts more or less frequently based on the
necessary response times from your vendors.
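While you are editing the opensrf user's crontab for the EDI scripts, you can
also address the earlier requirement that the EDI Translator start again after
reboots. One low-tech way to do that, sketched here as an assumption rather
than a shipped configuration, is an `@reboot` crontab entry pointing at the
install location used in the installation steps:

```shell
# In the opensrf user's crontab (crontab -e): start the EDI Translator at
# boot. Assumes the /openils/var/edi install location used earlier; adjust
# the path if you installed the translator elsewhere.
@reboot cd /openils/var/edi && ./edi_webrick.bash
```

This is a crontab configuration fragment, not a script to run directly; a
process supervisor of your choice would be a more robust alternative.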
== Configuration ==

=== Configuring Providers ===

Look in Administration -> Acquisitions Administration -> Providers

[options="header"]
|======================================================================================
|Column |Description/Notes
|Provider Name |A unique name to identify the provider
|Code |A unique code to identify the provider
|Owner |The org unit who will "own" the provider
|Currency |The currency format the provider accepts
|Active |Whether or not the provider is "active" for use
|Default Claim Policy|??
|EDI Default |The default "EDI Account" to use (see EDI Accounts Configuration)
|Email |The email address for the provider
|Fax Phone |A fax number for the provider
|Holdings Tag |The holdings tag to be utilized (usually 852, for Evergreen)
|Phone |A phone number for the provider
|Prepayment Required |Whether or not prepayment is required
|SAN |The vendor provided, org unit specific SAN code
|URL |The vendor website
|======================================================================================

=== Configuring EDI Accounts ===

Look in Administration -> Acquisitions Administration -> EDI Accounts

[options="header"]
|===============================================================================================================
|Column |Description/Notes
|Label |A unique name to identify the EDI account
|Host |FTP/SFTP/SSH hostname - vendor assigned
|Username |FTP/SFTP/SSH username - vendor assigned
|Password |FTP/SFTP/SSH password - vendor assigned
|Account |Vendor assigned account number associated with your organization
|Owner |The organizational unit who owns the EDI account
|Last Activity |The date of last activity for the account
|Provider |This is a link to one of the "codes" in the "Providers" interface
|Path |The path on the vendor's server where Evergreen will send its outgoing .epo files
|Incoming Directory |The path on the vendor's server where "incoming" .epo files are
stored +|Vendor Account Number|Vendor assigned account number. +|Vendor Assigned Code |Usually a sub-account designation. Can be used with or without the Vendor Account Number. +|=============================================================================================================== + +=== Configuring Organizational Unit SAN code === + +Look in Administration -> Server Administration -> Organizational Units + +This interface allows a library to configure their SAN, alongside +their address, phone, etc. + +== Troubleshooting == + +=== PO JEDI Template Issues === + +Some libraries may run into issues with the action/trigger (PO JEDI). +The template has to be modified to handle different vendor codes that +may be used. For instance, if you use "ingra" instead of INGRAM this +may cause a problem because they are hardcoded in the template. The +following is an example of one modification that seems to work. + +.Original template has: + +[source, bash] +---------------------------------------------------------------------------------------------------------------------------------------------- +"buyer":[ + [% IF target.provider.edi_default.vendcode && (target.provider.code == 'BT' || target.provider.name.match('(?i)^BAKER & TAYLOR')) -%] + {"id-qualifier": 91, "id":"[% target.ordering_agency.mailing_address.san _ ' ' _ target.provider.edi_default.vendcode %]"} + [%- ELSIF target.provider.edi_default.vendcode && target.provider.code == 'INGRAM' -%] + {"id":"[% target.ordering_agency.mailing_address.san %]"}, + {"id-qualifier": 91, "id":"[% target.provider.edi_default.vendcode %]"} + [%- ELSE -%] + {"id":"[% target.ordering_agency.mailing_address.san %]"} + [%- END -%] +], +---------------------------------------------------------------------------------------------------------------------------------------------- + +.Modified template has the following where it matches on provider SAN instead of code: + +[source, bash] 
+------------------------------------------------------------------------------------------------------------------------------------------
+"buyer":[
+    [% IF target.provider.edi_default.vendcode && (target.provider.san == '1556150') -%]
+        {"id-qualifier": 91, "id":"[% target.ordering_agency.mailing_address.san _ ' ' _ target.provider.edi_default.vendcode %]"}
+    [%- ELSIF target.provider.edi_default.vendcode && (target.provider.san == '1697978') -%]
+        {"id":"[% target.ordering_agency.mailing_address.san %]"},
+        {"id-qualifier": 91, "id":"[% target.provider.edi_default.vendcode %]"}
+    [%- ELSE -%]
+        {"id":"[% target.ordering_agency.mailing_address.san %]"}
+    [%- END -%]
+],
+------------------------------------------------------------------------------------------------------------------------------------------
+
diff --git a/docs/modules/installation/pages/introduction.adoc b/docs/modules/installation/pages/introduction.adoc
new file mode 100644
index 0000000000..c2e81fa90d
--- /dev/null
+++ b/docs/modules/installation/pages/introduction.adoc
@@ -0,0 +1,4 @@
+= Introduction =
+
+This part will guide you through the steps of installing or
+upgrading your Evergreen system. It is intended for system administrators.
diff --git a/docs/modules/installation/pages/server_installation.adoc b/docs/modules/installation/pages/server_installation.adoc
new file mode 100644
index 0000000000..44607b80b4
--- /dev/null
+++ b/docs/modules/installation/pages/server_installation.adoc
@@ -0,0 +1,642 @@
+= Installing the Evergreen server =
+:toc:
+
+== Preamble: referenced user accounts ==
+
+In subsequent sections, we will refer to a number of different accounts, as
+follows:
+
+ * Linux user accounts:
+ ** The *user* Linux account is the account that you use to log onto the
+    Linux system as a regular user.
+ ** The *root* Linux account is an account that has system administrator + privileges. On Debian you can switch to this account from + your *user* account by issuing the `su -` command and entering the + password for the *root* account when prompted. On Ubuntu you can switch + to this account from your *user* account using the `sudo su -` command + and entering the password for your *user* account when prompted. + ** The *opensrf* Linux account is an account that you create when installing + OpenSRF. You can switch to this account from the *root* account by + issuing the `su - opensrf` command. + ** The *postgres* Linux account is created automatically when you install + the PostgreSQL database server. You can switch to this account from the + *root* account by issuing the `su - postgres` command. + * PostgreSQL user accounts: + ** The *evergreen* PostgreSQL account is a superuser account that you will + create to connect to the PostgreSQL database server. + * Evergreen administrator account: + ** The *egadmin* Evergreen account is an administrator account for + Evergreen that you will use to test connectivity and configure your + Evergreen instance. + +== Preamble: developer instructions == + +[NOTE] +Skip this section if you are using an official release tarball downloaded +from http://evergreen-ils.org/egdownloads + +Developers working directly with the source code from the Git repository, +rather than an official release tarball, must perform one step before they +can proceed with the `./configure` step. + +As the *user* Linux account, issue the following command in the Evergreen +source directory to generate the configure script and Makefiles: + +[source, bash] +------------------------------------------------------------------------------ +autoreconf -i +------------------------------------------------------------------------------ + +== Installing prerequisites == + + * **PostgreSQL**: The minimum supported version is 9.6. 
+ * **Linux**: Evergreen has been tested on
+   Debian Buster (10),
+   Debian Stretch (9),
+   Debian Jessie (8),
+   Ubuntu Bionic Beaver (18.04),
+   and Ubuntu Xenial Xerus (16.04).
+   If you are running an older version of these distributions, you may want
+   to upgrade before upgrading Evergreen. For instructions on upgrading these
+   distributions, visit the Debian or Ubuntu websites.
+ * **OpenSRF**: The minimum supported version of OpenSRF is 3.2.0.
+
+
+Evergreen has a number of prerequisite packages that must be installed
+before you can successfully configure, compile, and install Evergreen.
+
+1. Begin by installing the most recent version of OpenSRF (3.2.0 or later).
+   You can download OpenSRF releases from http://evergreen-ils.org/opensrf-downloads/
++
+2. Issue the following commands as the *root* Linux account to install
+   prerequisites using the `Makefile.install` prerequisite installer,
+   substituting `debian-buster`, `debian-stretch`, `debian-jessie`, `ubuntu-bionic`, or
+   `ubuntu-xenial` for <osname> below:
++
+[source, bash]
+------------------------------------------------------------------------------
+make -f Open-ILS/src/extras/Makefile.install <osname>
+------------------------------------------------------------------------------
++
+[#optional_developer_additions]
+3. OPTIONAL: Developer additions
++
+To perform certain developer tasks from a Git source code checkout,
+including the testing of the Angular web client components,
+additional packages may be required. As the *root* Linux account:
++
+ * To install packages needed for retrieving and managing web dependencies,
+   use the <osname>-developer Makefile.install target. Currently,
+   this is only needed for building and installing the web
+   staff client.
++
+[source, bash]
+------------------------------------------------------------------------------
+make -f Open-ILS/src/extras/Makefile.install <osname>-developer
+------------------------------------------------------------------------------
++
+ * To install packages required for building Evergreen translations, use
+   the <osname>-translator Makefile.install target.
++
+[source, bash]
+------------------------------------------------------------------------------
+make -f Open-ILS/src/extras/Makefile.install <osname>-translator
+------------------------------------------------------------------------------
++
+ * To install packages required for building Evergreen release bundles, use
+   the <osname>-packager Makefile.install target.
++
+[source, bash]
+------------------------------------------------------------------------------
+make -f Open-ILS/src/extras/Makefile.install <osname>-packager
+------------------------------------------------------------------------------
+
+== Extra steps for web staff client ==
+
+[NOTE]
+Skip this entire section if you are using an official release tarball downloaded
+from http://evergreen-ils.org/downloads. Otherwise, ensure you have installed the
+xref:#optional_developer_additions[optional developer additions] before proceeding.
+
+[[install_files_for_web_staff_client]]
+=== Install AngularJS files for web staff client ===
+
+1. Building, Testing, Minification: The remaining steps all take place within
+   the staff JS web root:
++
+[source,sh]
+------------------------------------------------------------------------------
+cd $EVERGREEN_ROOT/Open-ILS/web/js/ui/default/staff/
+------------------------------------------------------------------------------
++
+2. Install project-local dependencies. npm inspects the 'package.json' file
+   for dependencies and fetches them from the Node package network.
++
+[source,sh]
+------------------------------------------------------------------------------
+npm install   # fetch JS dependencies
+------------------------------------------------------------------------------
++
+3. Run the build script.
++
+[source,sh]
+------------------------------------------------------------------------------
+# build, concat+minify
+npm run build-prod
+------------------------------------------------------------------------------
++
+4. OPTIONAL: Test web client code if the <osname>-developer packages were installed.
+   CHROME_BIN should be set to the path to Chrome or Chromium, e.g.,
+   `/usr/bin/chromium`:
++
+[source,sh]
+------------------------------------------------------------------------------
+CHROME_BIN=/path/to/chrome npm run test
+------------------------------------------------------------------------------
+
+[[install_files_for_angular_web_staff_client]]
+=== Install Angular files for web staff client ===
+
+1. Building, Testing, Minification: The remaining steps all take place within
+   the Angular staff root:
++
+[source,sh]
+------------------------------------------------------------------------------
+cd $EVERGREEN_ROOT/Open-ILS/src/eg2/
+------------------------------------------------------------------------------
++
+2. Install project-local dependencies. npm inspects the 'package.json' file
+   for dependencies and fetches them from the Node package network.
++
+[source,sh]
+------------------------------------------------------------------------------
+npm install   # fetch JS dependencies
+------------------------------------------------------------------------------
++
+3. Run the build script.
++
+[source,sh]
+------------------------------------------------------------------------------
+# build the production bundle
+ng build --prod
+------------------------------------------------------------------------------
++
+4.
OPTIONAL: Test eg2 web client code if the <osname>-developer packages were installed.
+   CHROME_BIN should be set to the path to Chrome or Chromium, e.g.,
+   `/usr/bin/chromium`:
++
+[source,sh]
+------------------------------------------------------------------------------
+CHROME_BIN=/path/to/chrome npm run test
+------------------------------------------------------------------------------
+
+== Configuration and compilation instructions ==
+
+For the time being, we are still installing everything in the `/openils/`
+directory. From the Evergreen source directory, issue the following commands as
+the *user* Linux account to configure and build Evergreen:
+
+[source, bash]
+------------------------------------------------------------------------------
+PATH=/openils/bin:$PATH ./configure --prefix=/openils --sysconfdir=/openils/conf
+make
+------------------------------------------------------------------------------
+
+These instructions assume that you have also installed OpenSRF under `/openils/`.
+If not, please adjust PATH as needed so that the Evergreen `configure` script
+can find `osrf_config`.
+
+== Installation instructions ==
+
+1. Once you have configured and compiled Evergreen, issue the following
+   command as the *root* Linux account to install Evergreen and copy
+   example configuration files to `/openils/conf`.
++
+[source, bash]
+------------------------------------------------------------------------------
+make install
+------------------------------------------------------------------------------
+
+== Change ownership of the Evergreen files ==
+
+All files in the `/openils/` directory and subdirectories must be owned by the
+`opensrf` user.
Issue the following command as the *root* Linux account to +change the ownership on the files: + +[source, bash] +------------------------------------------------------------------------------ +chown -R opensrf:opensrf /openils +------------------------------------------------------------------------------ + +== Run ldconfig == + +On Ubuntu 18.04 or Debian Stretch / Buster, run the following command as the root user: + +[source, bash] +------------------------------------------------------------------------------ +ldconfig +------------------------------------------------------------------------------ + +== Additional Instructions for Developers == + +[NOTE] +Skip this section if you are using an official release tarball downloaded +from http://evergreen-ils.org/egdownloads + +Developers working directly with the source code from the Git repository, +rather than an official release tarball, need to install the Dojo Toolkit +set of JavaScript libraries. The appropriate version of Dojo is included in +Evergreen release tarballs. Developers should install the Dojo 1.3.3 version +of Dojo by issuing the following commands as the *opensrf* Linux account: + +[source, bash] +------------------------------------------------------------------------------ +wget http://download.dojotoolkit.org/release-1.3.3/dojo-release-1.3.3.tar.gz +tar -C /openils/var/web/js -xzf dojo-release-1.3.3.tar.gz +cp -r /openils/var/web/js/dojo-release-1.3.3/* /openils/var/web/js/dojo/. +------------------------------------------------------------------------------ + + +== Configure the Apache Web server == + +. Use the example configuration files to configure your Web server for +the Evergreen catalog, web staff client, Web services, and administration +interfaces. 
Issue the following commands as the *root* Linux account:
++
+[source,bash]
+------------------------------------------------------------------------------------
+cp Open-ILS/examples/apache_24/eg_24.conf /etc/apache2/sites-available/eg.conf
+cp Open-ILS/examples/apache_24/eg_vhost_24.conf /etc/apache2/eg_vhost.conf
+cp Open-ILS/examples/apache_24/eg_startup /etc/apache2/
+# Now set up SSL
+mkdir /etc/apache2/ssl
+cd /etc/apache2/ssl
+------------------------------------------------------------------------------------
++
+. The `openssl` command cuts a new SSL key for your Apache server. For a
+production server, you should purchase a signed SSL certificate, but you can
+just use a self-signed certificate and accept the warnings in the staff client
+and browser during testing and development. Create an SSL key for the Apache
+server by issuing the following command as the *root* Linux account:
++
+[source,bash]
+------------------------------------------------------------------------------
+openssl req -new -x509 -days 365 -nodes -out server.crt -keyout server.key
+------------------------------------------------------------------------------
++
+. As the *root* Linux account, edit the `eg.conf` file that you copied into
+place.
+  a. To enable access to the offline upload / execute interface from any
+     workstation on any network, make the following change (and note that
+     you *must* secure this for a production instance):
+     * Replace `Require host 10.0.0.0/8` with `Require all granted`
+. Change the user for the Apache server.
+  * As the *root* Linux account, edit
+    `/etc/apache2/envvars`. Change `export APACHE_RUN_USER=www-data` to
+    `export APACHE_RUN_USER=opensrf`.
+. As the *root* Linux account, configure Apache with KeepAlive settings
+  appropriate for Evergreen.
Higher values can improve the performance of a
+  single client by allowing multiple requests to be sent over the same TCP
+  connection, but increase the risk of using up all available Apache child
+  processes and memory.
+  * Edit `/etc/apache2/apache2.conf`.
+    a. Change `KeepAliveTimeout` to `1`.
+    b. Change `MaxKeepAliveRequests` to `100`.
+. As the *root* Linux account, configure the prefork module to start and keep
+  enough Apache servers available to provide quick responses to clients without
+  running out of memory. The following settings are a good starting point for a
+  site that exposes the default Evergreen catalogue to the web:
++
+.`/etc/apache2/mods-available/mpm_prefork.conf`
+[source,bash]
+------------------------------------------------------------------------------
+<IfModule mpm_prefork_module>
+   StartServers            15
+   MinSpareServers          5
+   MaxSpareServers         15
+   MaxRequestWorkers       75
+   MaxConnectionsPerChild 500
+</IfModule>
+------------------------------------------------------------------------------
++
+. As the *root* user, enable the mpm_prefork module:
++
+[source,bash]
+------------------------------------------------------------------------------
+a2dismod mpm_event
+a2enmod mpm_prefork
+------------------------------------------------------------------------------
++
+. As the *root* Linux account, enable the Evergreen site:
++
+[source,bash]
+------------------------------------------------------------------------------
+a2dissite 000-default  # OPTIONAL: disable the default site (the "It Works" page)
+a2ensite eg.conf
+------------------------------------------------------------------------------
++
+.
As the *root* Linux account, enable Apache to write + to the lock directory; this is currently necessary because Apache + is running as the `opensrf` user: ++ +[source,bash] +------------------------------------------------------------------------------ +chown opensrf /var/lock/apache2 +------------------------------------------------------------------------------ + +Learn more about additional Apache options in the following sections: + + * xref:admin:apache_rewrite_tricks.adoc#apache_rewrite_tricks[Apache Rewrite Tricks] + * xref:admin:apache_access_handler.adoc#apache_access_handler_perl_module[Apache Access Handler Perl Module] + +== Configure OpenSRF for the Evergreen application == + +There are a number of example OpenSRF configuration files in `/openils/conf/` +that you can use as a template for your Evergreen installation. Issue the +following commands as the *opensrf* Linux account: + +[source, bash] +------------------------------------------------------------------------------ +cp -b /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml +cp -b /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml +------------------------------------------------------------------------------ + +When you installed OpenSRF, you created four Jabber users on two +separate domains and edited the `opensrf_core.xml` file accordingly. Please +refer back to the OpenSRF README and, as the *opensrf* Linux account, edit the +Evergreen version of the `opensrf_core.xml` file using the same Jabber users +and domains as you used while installing and testing OpenSRF. + +[NOTE] +The `-b` flag tells the `cp` command to create a backup version of the +destination file. The backup version of the destination file has a tilde (`~`) +appended to the file name, so if you have forgotten the Jabber users and +domains, you can retrieve the settings from the backup version of the files. 
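The backup behavior of `cp -b` described in the note above can be seen with a small self-contained sketch. This is illustrative only: it uses a scratch directory and made-up file contents, not the real `/openils/conf` files.

```shell
# Demonstrate how `cp -b` (GNU coreutils, as on Debian/Ubuntu) preserves the
# old destination file as "<name>~". Paths and contents are illustrative.
tmpdir=$(mktemp -d)
echo "customized jabber settings" > "$tmpdir/opensrf_core.xml"
echo "stock example settings" > "$tmpdir/opensrf_core.xml.example"

cp -b "$tmpdir/opensrf_core.xml.example" "$tmpdir/opensrf_core.xml"

# The previous configuration survives with a tilde suffix:
cat "$tmpdir/opensrf_core.xml~"   # prints "customized jabber settings"
# Compare the fresh copy against the backup to recover forgotten settings:
diff "$tmpdir/opensrf_core.xml~" "$tmpdir/opensrf_core.xml" || true
```

Running `diff` against the tilde file is a quick way to spot which Jabber users and domains were set in the old configuration before you overwrote it.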
+
+`eg_db_config`, described in xref:#creating_the_evergreen_database[Creating the Evergreen database], sets the database connection information in `opensrf.xml` for you.
+
+=== Configure action triggers for the Evergreen application ===
+_Action Triggers_ provide hooks for the system to perform actions when a given
+event occurs; for example, to generate reminder or overdue notices, the
+`checkout.due` hook is processed and events are triggered for potential actions
+if there is no checkin time.
+
+To enable the default set of hooks, issue the following command as the
+*opensrf* Linux account:
+
+[source, bash]
+------------------------------------------------------------------------------
+cp -b /openils/conf/action_trigger_filters.json.example /openils/conf/action_trigger_filters.json
+------------------------------------------------------------------------------
+
+For more information about configuring and running action triggers, see
+xref:admin:actiontriggers_process.adoc#processing_action_triggers[Notifications / Action Triggers].
+
+[#creating_the_evergreen_database]
+== Creating the Evergreen database ==
+
+=== Setting up the PostgreSQL server ===
+
+For production use, most libraries install the PostgreSQL database server on a
+dedicated machine. Therefore, by default, the `Makefile.install` prerequisite
+installer does *not* install the PostgreSQL database server that is required
+by every Evergreen system. You can install the packages required by Debian or
+Ubuntu on the machine of your choice using the following commands as the
+*root* Linux account:
+
+. Installing PostgreSQL server packages
+
+Each OS build target provides the PostgreSQL server installation packages
+required for each operating system. To install PostgreSQL server packages,
+use the make target 'postgres-server-<osname>'. Choose the most appropriate
+command below based on your operating system. This will install PostgreSQL 9.6,
+the minimum supported version.
+ +[source, bash] +------------------------------------------------------------------------------ +make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-buster +make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-stretch +make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-jessie +make -f Open-ILS/src/extras/Makefile.install postgres-server-ubuntu-xenial +make -f Open-ILS/src/extras/Makefile.install postgres-server-ubuntu-bionic +------------------------------------------------------------------------------ + +To install PostgreSQL version 10, use the following command for your operating +system: + +[source, bash] +------------------------------------------------------------------------------ +make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-buster-10 +make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-stretch-10 +make -f Open-ILS/src/extras/Makefile.install postgres-server-debian-jessie-10 +make -f Open-ILS/src/extras/Makefile.install postgres-server-ubuntu-xenial-10 +make -f Open-ILS/src/extras/Makefile.install postgres-server-ubuntu-bionic-10 +------------------------------------------------------------------------------ + +For a standalone PostgreSQL server, install the following Perl modules for your +distribution as the *root* Linux account: + +.(Debian and Ubuntu) +No extra modules required for these distributions. + +You need to create a PostgreSQL superuser to create and access the database. +Issue the following command as the *postgres* Linux account to create a new +PostgreSQL superuser named `evergreen`. 
When prompted, enter the new user's +password: + +[source, bash] +------------------------------------------------------------------------------ +createuser -s -P evergreen +------------------------------------------------------------------------------ + +.Enabling connections to the PostgreSQL database + +Your PostgreSQL database may be configured by default to prevent connections, +for example, it might reject attempts to connect via TCP/IP or from other +servers. To enable TCP/IP connections from localhost, check your `pg_hba.conf` +file, found in the `/etc/postgresql/` directory on Debian and Ubuntu. +A simple way to enable TCP/IP +connections from localhost to all databases with password authentication, which +would be suitable for a test install of Evergreen on a single server, is to +ensure the file contains the following entries _before_ any "host ... ident" +entries: + +------------------------------------------------------------------------------ +host all all ::1/128 md5 +host all all 127.0.0.1/32 md5 +------------------------------------------------------------------------------ + +When you change the `pg_hba.conf` file, you will need to reload PostgreSQL to +make the changes take effect. For more information on configuring connectivity +to PostgreSQL, see +http://www.postgresql.org/docs/devel/static/auth-pg-hba-conf.html + +=== Creating the Evergreen database and schema === + +Once you have created the *evergreen* PostgreSQL account, you also need to +create the database and schema, and configure your configuration files to point +at the database server. 
Issue the following command as the *root* Linux account
+from inside the Evergreen source directory, replacing <user>, <password>,
+<hostname>, <port>, and <dbname> with the appropriate values for your
+PostgreSQL database (where <user> and <password> are for the *evergreen*
+PostgreSQL account you just created), and replace <admin-user> and <admin-pass>
+with the values you want for the *egadmin* Evergreen administrator account:
+
+[source, bash]
+------------------------------------------------------------------------------
+perl Open-ILS/src/support-scripts/eg_db_config --update-config \
+       --service all --create-database --create-schema --create-offline \
+       --user <user> --password <password> --hostname <hostname> --port <port> \
+       --database <dbname> --admin-user <admin-user> --admin-pass <admin-pass>
+------------------------------------------------------------------------------
+
+This creates the database and schema and configures all of the services in
+your `/openils/conf/opensrf.xml` configuration file to point to that database.
+It also creates the configuration files required by the Evergreen `cgi-bin`
+administration scripts, and sets the user name and password for the *egadmin*
+Evergreen administrator account to your requested values.
+
+You can get a complete set of options for `eg_db_config` by passing the
+`--help` parameter.
+
+=== Loading sample data ===
+
+If you add the `--load-all-sample` parameter to the `eg_db_config` command,
+a set of authority and bibliographic records, call numbers, copies, staff
+and regular users, and transactions will be loaded into your target
+database. This sample dataset is commonly referred to as the _concerto_
+sample data, and can be useful for testing out Evergreen functionality and
+for creating problem reports that developers can easily recreate with their
+own copy of the _concerto_ sample data.
+
+=== Creating the database on a remote server ===
+
+In a production instance of Evergreen, your PostgreSQL server should be
+installed on a dedicated server.
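To make the `eg_db_config` invocation described earlier concrete, here is a hedged worked example pointed at a remote database host. Every value here, the hostname `db.example.org`, the passwords, and the database name, is an illustrative placeholder, not a recommended setting.

```shell
# Hypothetical worked example of eg_db_config against a remote PostgreSQL
# host; substitute your own hostname, credentials, and database name.
# Run as the root Linux account from the Evergreen source directory.
perl Open-ILS/src/support-scripts/eg_db_config --update-config \
    --service all --create-database --create-schema --create-offline \
    --user evergreen --password MyDBPassword --hostname db.example.org --port 5432 \
    --database evergreen --admin-user egadmin --admin-pass MyAdminPassword
```

The `--hostname` and `--port` values are written into `/openils/conf/opensrf.xml`, so the Evergreen services on this machine will connect to the remote database from then on.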
+
+==== PostgreSQL 9.6 and later ====
+
+To create the database instance on a remote database server running PostgreSQL
+9.6 or later, simply use the `--create-database` flag on `eg_db_config`.
+
+== Starting Evergreen ==
+
+1. As the *root* Linux account, start the `memcached` and `ejabberd` services
+(if they aren't already running):
++
+[source, bash]
+------------------------------------------------------------------------------
+/etc/init.d/ejabberd start
+/etc/init.d/memcached start
+------------------------------------------------------------------------------
++
+2. As the *opensrf* Linux account, start Evergreen. The `-l` flag in the
+following command is only necessary if you want to force Evergreen to treat the
+hostname as `localhost`; if you configured `opensrf.xml` using the real
+hostname of your machine as returned by `perl -MNet::Domain -e 'print
+Net::Domain::hostfqdn() . "\n";'`, you should not use the `-l` flag.
++
+[source, bash]
+------------------------------------------------------------------------------
+osrf_control -l --start-all
+------------------------------------------------------------------------------
++
+ ** If you receive the error message `bash: osrf_control: command not found`,
+    then your environment variable `PATH` does not include the `/openils/bin`
+    directory; this should have been set in the *opensrf* Linux account's
+    `.bashrc` configuration file. To manually set the `PATH` variable, edit the
+    configuration file `~/.bashrc` as the *opensrf* Linux account and add the
+    following line:
++
+[source, bash]
+------------------------------------------------------------------------------
+export PATH=$PATH:/openils/bin
+------------------------------------------------------------------------------
++
+3.
As the *opensrf* Linux account, generate the Web files needed by the web staff
+   client and catalogue and update the organization unit proximity (you need to do
+   this the first time you start Evergreen, and after that each time you change
+   the library org unit configuration):
++
+[source, bash]
+------------------------------------------------------------------------------
+autogen.sh
+------------------------------------------------------------------------------
++
+4. As the *root* Linux account, restart the Apache Web server:
++
+[source, bash]
+------------------------------------------------------------------------------
+/etc/init.d/apache2 restart
+------------------------------------------------------------------------------
++
+If the Apache Web server was running when you started the OpenSRF services, you
+might not be able to successfully log in to the OPAC or web staff client until the
+Apache Web server is restarted.
+
+== Testing connections to Evergreen ==
+
+Once you have installed and started Evergreen, test your connection to
+Evergreen via `srfsh`.
As the *opensrf* Linux account, issue the following
+commands to start `srfsh` and try to log onto the Evergreen server using the
+*egadmin* Evergreen administrator user name and password that you set using the
+`eg_db_config` command:
+
+[source, bash]
+------------------------------------------------------------------------------
+/openils/bin/srfsh
+srfsh% login <admin-user> <admin-pass>
+------------------------------------------------------------------------------
+
+You should see a result like:
+
+    Received Data: "250bf1518c7527a03249858687714376"
+    ------------------------------------
+    Request Completed Successfully
+    Request Time in seconds: 0.045286
+    ------------------------------------
+
+    Received Data: {
+       "ilsevent":0,
+       "textcode":"SUCCESS",
+       "desc":" ",
+       "pid":21616,
+       "stacktrace":"oils_auth.c:304",
+       "payload":{
+          "authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a",
+          "authtime":420
+       }
+
+    }
+
+    ------------------------------------
+    Request Completed Successfully
+    Request Time in seconds: 1.336568
+    ------------------------------------
+[[install-troubleshooting-1]]
+If this does not work, it's time to do some troubleshooting.
+
+ * As the *opensrf* Linux account, run the `settings-tester.pl` script to see
+   if it finds any system configuration problems. The script is found at
+   `Open-ILS/src/support-scripts/settings-tester.pl` in the Evergreen source
+   tree.
+ * Follow the steps in the http://evergreen-ils.org/dokuwiki/doku.php?id=troubleshooting:checking_for_errors[troubleshooting guide].
+ * If you have faithfully followed the entire set of installation steps
+   listed here, you are probably extremely close to a working system.
+   Gather your configuration files and log files and contact the
+   http://evergreen-ils.org/communicate/mailing-lists/[Evergreen development
+   mailing list] for assistance before making any drastic changes to your system
+   configuration.
+
+== Getting help ==
+
+Need help installing or using Evergreen?
Join the mailing lists at +http://evergreen-ils.org/communicate/mailing-lists/ or contact us on the Freenode +IRC network on the #evergreen channel. + +== License == + +This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 +Unported License. To view a copy of this license, visit +http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative +Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA. diff --git a/docs/modules/installation/pages/server_upgrade.adoc b/docs/modules/installation/pages/server_upgrade.adoc new file mode 100644 index 0000000000..cbd647b426 --- /dev/null +++ b/docs/modules/installation/pages/server_upgrade.adoc @@ -0,0 +1,322 @@ += Upgrading the Evergreen Server = +:toc: + +Before upgrading, it is important to carefully plan an upgrade strategy to minimize system downtime and service interruptions. +All of the steps in this chapter are to be completed from the command line. + +== Software Prerequisites == + + * **PostgreSQL**: The minimum supported version is 9.6. + * **Linux**: Evergreen 3.X.X has been tested on Debian Stretch (9.0), + Debian Jessie (8.0), Ubuntu Xenial Xerus (16.04), and Ubuntu Bionic Beaver (18.04). + If you are running an older version of these distributions, you may want + to upgrade before upgrading Evergreen. For instructions on upgrading these + distributions, visit the Debian or Ubuntu websites. + * **OpenSRF**: The minimum supported version of OpenSRF is 3.2.0. + + +In the following instructions, you are asked to perform certain steps as either the *root* or *opensrf* user. + + * **Debian**: To become the *root* user, issue the `su` command and enter the password of the root user. + * **Ubuntu**: To become the *root* user, issue the `sudo su` command and enter the password of your current user. + +To switch from the *root* user to a different user, issue the `su - [user]` +command; for example, `su - opensrf`. 
Once you have become a non-root user, to +become the *root* user again simply issue the `exit` command. + +== Upgrade the Evergreen code == + +The following steps guide you through a simplistic upgrade of a production +server. You must adjust these steps to accommodate your customizations such +as catalogue skins. + +. Stop Evergreen and back up your data: + .. As *root*, stop the Apache web server. + .. As the *opensrf* user, stop all Evergreen and OpenSRF services: ++ +[source, bash] +----------------------------- +osrf_control --localhost --stop-all +----------------------------- ++ + .. Back up the /openils directory. +. Upgrade OpenSRF. Download and install the latest version of OpenSRF from +the https://evergreen-ils.org/opensrf-downloads/[OpenSRF download page]. +. As the *opensrf* user, download and extract Evergreen 3.X.X: ++ +[source, bash] +----------------------------------------------- +wget https://evergreen-ils.org/downloads/Evergreen-ILS-3.X.X.tar.gz +tar xzf Evergreen-ILS-3.X.X.tar.gz +----------------------------------------------- ++ +[NOTE] +For the latest edition of Evergreen, check the https://evergreen-ils.org/egdownloads/[Evergreen download page] and adjust upgrading instructions accordingly. + +. 
As the *root* user, install the prerequisites: ++ +[source, bash] +--------------------------------------------- +cd /home/opensrf/Evergreen-ILS-3.X.X +--------------------------------------------- ++ +On the next command, replace `[distribution]` with one of these values for your +distribution of Debian or Ubuntu: ++ +indexterm:[Linux, Debian] +indexterm:[Linux, Ubuntu] ++ + * `debian-stretch` for Debian Stretch (9.0) (EDI compatibility in progress) + * `debian-jessie` for Debian Jessie (8.0) (See https://bugs.launchpad.net/evergreen/+bug/1342227[Bug 1342227] if you want to use EDI) + * `ubuntu-xenial` for Ubuntu Xenial Xerus (16.04) (EDI compatibility in progress) + ++ +[source, bash] +------------------------------------------------------------ +make -f Open-ILS/src/extras/Makefile.install [distribution] +------------------------------------------------------------ ++ +. As the *opensrf* user, configure and compile Evergreen: ++ +[source, bash] +------------------------------------------------------------ +cd /home/opensrf/Evergreen-ILS-3.X.X +PATH=/openils/bin:$PATH ./configure --prefix=/openils --sysconfdir=/openils/conf +make +------------------------------------------------------------ ++ +These instructions assume that you have also installed OpenSRF under /openils/. If not, please adjust PATH as needed so that the Evergreen configure script can find osrf_config. ++ +. 
As the *root* user, install Evergreen: ++ +[source, bash] +------------------------------------------------------------ +cd /home/opensrf/Evergreen-ILS-3.X.X +make install +------------------------------------------------------------ ++ + +**Note** that this version of Evergreen does not use the legacy XUL staff +client by default, but if you wish to use a versioned XUL staff client, you +can supply `STAFF_CLIENT_STAMP_ID` during the `make install` step like this: ++ +[source, bash] +------------------------------------------------------------ +cd /home/opensrf/Evergreen-ILS-3.X.X +make STAFF_CLIENT_STAMP_ID=rel_3_x_x install +------------------------------------------------------------ ++ +. As the *root* user, change all files to be owned by the opensrf user and group: ++ +[source, bash] +------------------------------------------------------------ +chown -R opensrf:opensrf /openils +------------------------------------------------------------ ++ +. (Optional, only if you are using the legacy staff client) + As the *opensrf* user, update the server symlink in /openils/var/web/xul/: ++ +[source, bash] +------------------------------------------------------------ +cd /openils/var/web/xul/ +rm server +ln -sf rel_3_x_x/server server +------------------------------------------------------------ ++ +. As the *opensrf* user, update opensrf_core.xml and opensrf.xml by copying the + new example files (/openils/conf/opensrf_core.xml.example and + /openils/conf/opensrf.xml.example). The _-b_ option creates a backup copy of the old file. ++ +[source, bash] +------------------------------------------------------------ +cp -b /openils/conf/opensrf_core.xml.example /openils/conf/opensrf_core.xml +cp -b /openils/conf/opensrf.xml.example /openils/conf/opensrf.xml +------------------------------------------------------------ ++ +[CAUTION] +Copying these configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying them. ++ +. 
As the *opensrf* user, update the configuration files: ++ +[source, bash] +------------------------------------------------------------------------- +cd /home/opensrf/Evergreen-ILS-3.X.X +perl Open-ILS/src/support-scripts/eg_db_config --update-config --service all \ +--create-offline --database evergreen --host localhost --user evergreen --password evergreen +------------------------------------------------------------------------- ++ +. As the *root* user, update the Apache files: ++ +indexterm:[Apache] ++ +Use the example configuration files in `Open-ILS/examples/apache/` (for +Apache versions below 2.4) or `Open-ILS/examples/apache_24/` (for Apache +versions 2.4 or greater) to configure your Web server for the Evergreen +catalog, staff client, Web services, and administration interfaces. Issue the +following commands as the *root* Linux account: ++ +[CAUTION] +Copying these Apache configuration files will remove any customizations you have made to them. Remember to redo your customizations after copying them. +For example, if you purchased an SSL certificate, you will need to edit eg.conf to point to the appropriate SSL certificate files. +The diff command can be used to show the differences between the distribution version and your customized version. `diff ` ++ +.. Update _/etc/apache2/eg_startup_ by copying the example from _Open-ILS/examples/apache/eg_startup_. ++ +[source, bash] +---------------------------------------------------------- +cp /home/opensrf/Evergreen-ILS-3.X.X/Open-ILS/examples/apache/eg_startup /etc/apache2/eg_startup +---------------------------------------------------------- ++ +.. Update /etc/apache2/eg_vhost.conf by copying the example from Open-ILS/examples/apache/eg_vhost.conf. ++ +[source, bash] +---------------------------------------------------------- +cp /home/opensrf/Evergreen-ILS-3.X.X/Open-ILS/examples/apache/eg_vhost.conf /etc/apache2/eg_vhost.conf +---------------------------------------------------------- ++ +.. 
Update /etc/apache2/sites-available/eg.conf by copying the example from Open-ILS/examples/apache/eg.conf. ++ +[source, bash] +---------------------------------------------------------- +cp /home/opensrf/Evergreen-ILS-3.X.X/Open-ILS/examples/apache/eg.conf /etc/apache2/sites-available/eg.conf +---------------------------------------------------------- + +== Upgrade the Evergreen database schema == + +indexterm:[database schema] + +The upgrade of the Evergreen database schema is the lengthiest part of the +upgrade process for sites with a significant amount of production data. + +Before running the upgrade script against your production Evergreen database, +back up your database, restore it to a test server, and run the upgrade script +against the test server. This enables you to determine how long the upgrade +will take and whether any local customizations present problems for the +stock upgrade script that require further tailoring of the upgrade script. +The backup also enables you to cleanly restore your production data if +anything goes wrong during the upgrade. + +[NOTE] +============= +Evergreen provides incremental upgrade scripts that allow you to upgrade +from one minor version to the next until you have the current version of +the schema. 
For example, if you want to upgrade from 2.9.0 to 2.11.0, you +would run the following upgrade scripts: + +- 2.9.0-2.9.1-upgrade-db.sql +- 2.9.1-2.9.2-upgrade-db.sql +- 2.9.2-2.9.3-upgrade-db.sql +- 2.9.3-2.10.0-upgrade-db.sql (this is a major version upgrade) +- 2.10.0-2.10.1-upgrade-db.sql +- 2.10.1-2.10.2-upgrade-db.sql +- 2.10.2-2.10.3-upgrade-db.sql +- 2.10.3-2.10.4-upgrade-db.sql +- 2.10.4-2.10.5-upgrade-db.sql +- 2.10.5-2.10.6-upgrade-db.sql +- 2.10.6-2.10.7-upgrade-db.sql +- 2.10.7-2.11.0-upgrade-db.sql (this is a major version upgrade) + +Note that you do *not* necessarily want to run additional upgrade scripts to +upgrade to the newest version, since currently there is no automated way, for +example, to upgrade from 2.9.4+ to 2.10. Only upgrade as far as necessary to +reach the major version upgrade script (in this example, as far as 2.9.3). + +============= + +[CAUTION] +Pay attention to error output as you run the upgrade scripts. If you encounter errors +that you cannot resolve yourself through additional troubleshooting, please +report the errors to the https://evergreen-ils.org/communicate/mailing-lists/[Evergreen +Technical Discussion List]. + +Run the following steps (including other upgrade scripts, as noted above) +as a user with the ability to connect to the database server. + +[source, bash] +---------------------------------------------------------- +cd /home/opensrf/Evergreen-ILS-3.X.X/Open-ILS/src/sql/Pg +psql -U evergreen -h localhost -f version-upgrade/3.X.W-3.X.X-upgrade-db.sql evergreen +---------------------------------------------------------- + +[TIP] +After some database upgrade scripts finish, you may see a +note on how to reingest your bib records. You may run this after you have +completed the entire upgrade and tested your system. Reingesting records +may take a long time depending on the number of bib records in your system. + +== Restart Evergreen and Test == + +. 
As the *root* user, restart memcached to clear out all old user sessions. ++ +[source, bash] +-------------------------------------------------------------- +service memcached restart +-------------------------------------------------------------- ++ +. As the *opensrf* user, start all Evergreen and OpenSRF services: ++ +[source, bash] +-------------------------------------------------------------- +osrf_control --localhost --start-all +-------------------------------------------------------------- ++ +. As the *opensrf* user, run autogen to refresh the static organizational data files: ++ +[source, bash] +-------------------------------------------------------------- +cd /openils/bin +./autogen.sh +-------------------------------------------------------------- ++ +. Start srfsh and try logging in using your Evergreen username and password: ++ +[source, bash] +-------------------------------------------------------------- +/openils/bin/srfsh +srfsh% login username password +-------------------------------------------------------------- ++ +You should see a result like: ++ +[source, bash] +-------------------------------------------------------------- +Received Data: "250bf1518c7527a03249858687714376" + ------------------------------------ + Request Completed Successfully + Request Time in seconds: 0.045286 + ------------------------------------ + + Received Data: { + "ilsevent":0, + "textcode":"SUCCESS", + "desc":" ", + "pid":21616, + "stacktrace":"oils_auth.c:304", + "payload":{ + "authtoken":"e5f9827cc0f93b503a1cc66bee6bdd1a", + "authtime":420 + } + + } + + ------------------------------------ + Request Completed Successfully + Request Time in seconds: 1.336568 + ------------------------------------ +-------------------------------------------------------------- ++ +If this does not work, it's time to do some +xref:installation:server_installation.adoc#install-troubleshooting-1[troubleshooting]. ++ +. As the *root* user, start the Apache web server. 
++ +If you encounter errors, refer to the +xref:installation:server_installation.adoc#install-troubleshooting-1[troubleshooting] section +of this documentation for tips on finding solutions and seeking further assistance +from the Evergreen community. + +== Review Release Notes == + +Review this version's release notes for other tasks +that need to be done after upgrading. If you have upgraded over several +major versions, you will need to review the release notes for each version as well. diff --git a/docs/modules/installation/pages/system_requirements.adoc b/docs/modules/installation/pages/system_requirements.adoc new file mode 100644 index 0000000000..31cbd72e56 --- /dev/null +++ b/docs/modules/installation/pages/system_requirements.adoc @@ -0,0 +1,35 @@ += System Requirements = +:toc: + +== Server Minimum Requirements == + +The following are the base requirements for setting Evergreen up on a test server: + + * An available desktop, server or virtual image + * 4GB RAM, or more if your server also runs a graphical desktop + * Linux Operating System (the community supports Debian, Ubuntu, or Fedora) + * Ports 80 and 443 should be opened in your firewall for TCP connections to allow OPAC and staff client connections to the Evergreen server. + +== Web Client Requirements == + +The current stable release of Firefox or Chrome is required to run the web +client in a browser. + +== Staff Client Requirements == + +Staff terminals connect to the central database using the Evergreen staff client, available for download from the Evergreen download page. +The staff client must be installed on each staff workstation and requires at minimum: + + * Windows, Mac OS X, or Linux operating system + * a reliable high-speed Internet connection + * 2GB RAM + * The staff client uses the TCP protocol on ports 80 and 443 to communicate with the Evergreen server. 
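The port requirements above can be sanity-checked from a staff workstation before rollout. The sketch below is an illustrative helper, not part of Evergreen itself; `evergreen.example.org` is a placeholder hostname you would replace with your own server:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and timeouts alike.
        return False

if __name__ == "__main__":
    # Check the two ports the OPAC and staff client rely on.
    for port in (80, 443):
        print(f"port {port} reachable: {port_open('evergreen.example.org', port)}")
```

If either port reports unreachable, check the server firewall and any network appliance between the workstation and the Evergreen server.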
+ +*Barcode Scanners* + +Evergreen will work with virtually any barcode scanner – if it worked with your legacy system, it should work with Evergreen. + +*Printers* + +Evergreen can use any printer configured for your terminal to print receipts, check-out slips, holds lists, etc. The single exception is spine label printing, +which is still under development. Evergreen currently formats spine labels for output to a label roll printer. If you do not have a roll printer, manual formatting may be required. diff --git a/docs/modules/local_admin/_attributes.adoc b/docs/modules/local_admin/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/local_admin/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/local_admin/nav.adoc b/docs/modules/local_admin/nav.adoc new file mode 100644 index 0000000000..30fcf92cfc --- /dev/null +++ b/docs/modules/local_admin/nav.adoc @@ -0,0 +1,13 @@ +* xref:local_admin:introduction.adoc[Local Administration] +** xref:admin:librarysettings.adoc[Library Settings Editor] +** xref:admin:lsa-address_alert.adoc[Address Alert] +** xref:admin:lsa-barcode_completion.adoc[Barcode Completion] +** xref:admin:hold_driven_recalls.adoc[Hold-driven recalls] +** xref:admin:emergency_closing_handler.adoc[Emergency Closing Handler] +** xref:admin:actiontriggers.adoc[Notifications / Action Triggers] +*** xref:admin:actiontriggers_process.adoc[Processing Action Triggers] +** xref:admin:staff_client-recent_searches.adoc[Recent Staff Searches] +** xref:admin:lsa-standing_penalties.adoc[Standing Penalties] +** xref:admin:lsa-statcat.adoc[Statistical Categories Editor] +** xref:admin:popularity_badges_web_client.adoc[Statistical Popularity Badges] +** xref:admin:lsa-work_log.adoc[Work Log] diff --git a/docs/modules/local_admin/pages/_attributes.adoc 
b/docs/modules/local_admin/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/local_admin/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/local_admin/pages/introduction.adoc b/docs/modules/local_admin/pages/introduction.adoc new file mode 100644 index 0000000000..b3d20385bc --- /dev/null +++ b/docs/modules/local_admin/pages/introduction.adoc @@ -0,0 +1,4 @@ += Introduction = + +This part covers the options in the Local Administration menu found in the staff +client. diff --git a/docs/modules/opac/_attributes.adoc b/docs/modules/opac/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/opac/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/opac/assets/images/media/BatchActionsSearch-01.png b/docs/modules/opac/assets/images/media/BatchActionsSearch-01.png new file mode 100644 index 0000000000..c7f91182ec Binary files /dev/null and b/docs/modules/opac/assets/images/media/BatchActionsSearch-01.png differ diff --git a/docs/modules/opac/assets/images/media/BatchActionsSearch-02.png b/docs/modules/opac/assets/images/media/BatchActionsSearch-02.png new file mode 100644 index 0000000000..6ce6669ecb Binary files /dev/null and b/docs/modules/opac/assets/images/media/BatchActionsSearch-02.png differ diff --git a/docs/modules/opac/assets/images/media/BatchActionsSearch-03.png b/docs/modules/opac/assets/images/media/BatchActionsSearch-03.png new file mode 100644 index 0000000000..df4b5c47cb Binary files /dev/null and b/docs/modules/opac/assets/images/media/BatchActionsSearch-03.png differ diff --git a/docs/modules/opac/assets/images/media/BatchActionsSearch-04.png b/docs/modules/opac/assets/images/media/BatchActionsSearch-04.png new 
file mode 100644 index 0000000000..33c901d425 Binary files /dev/null and b/docs/modules/opac/assets/images/media/BatchActionsSearch-04.png differ diff --git a/docs/modules/opac/assets/images/media/BatchActionsSearch-06.png b/docs/modules/opac/assets/images/media/BatchActionsSearch-06.png new file mode 100644 index 0000000000..1a84d018b7 Binary files /dev/null and b/docs/modules/opac/assets/images/media/BatchActionsSearch-06.png differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC1.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC1.jpg new file mode 100644 index 0000000000..847bbb5182 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC1.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC10.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC10.jpg new file mode 100644 index 0000000000..944159369e Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC10.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC11.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC11.jpg new file mode 100644 index 0000000000..d3ed5bfba8 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC11.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC12.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC12.jpg new file mode 100644 index 0000000000..7255464160 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC12.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC13.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC13.jpg new file mode 100644 index 0000000000..1693ad15eb Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC13.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC14.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC14.jpg new file mode 100644 index 0000000000..3c0214b404 Binary files /dev/null and 
b/docs/modules/opac/assets/images/media/Kids_OPAC14.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC15.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC15.jpg new file mode 100644 index 0000000000..a483c1654d Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC15.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC16.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC16.jpg new file mode 100644 index 0000000000..33cce3d7f3 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC16.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC17.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC17.jpg new file mode 100644 index 0000000000..c7c845bcd6 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC17.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC2.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC2.jpg new file mode 100644 index 0000000000..aebcdfef2c Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC2.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC4.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC4.jpg new file mode 100644 index 0000000000..9b14495aa0 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC4.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC5.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC5.jpg new file mode 100644 index 0000000000..61b6c3a6aa Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC5.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC6.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC6.jpg new file mode 100644 index 0000000000..3bf605bf27 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC6.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC7.jpg 
b/docs/modules/opac/assets/images/media/Kids_OPAC7.jpg new file mode 100644 index 0000000000..604c76beb1 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC7.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC8.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC8.jpg new file mode 100644 index 0000000000..d8b2f0889f Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC8.jpg differ diff --git a/docs/modules/opac/assets/images/media/Kids_OPAC9.jpg b/docs/modules/opac/assets/images/media/Kids_OPAC9.jpg new file mode 100644 index 0000000000..8754a8ca28 Binary files /dev/null and b/docs/modules/opac/assets/images/media/Kids_OPAC9.jpg differ diff --git a/docs/modules/opac/assets/images/media/My_Lists.png b/docs/modules/opac/assets/images/media/My_Lists.png new file mode 100644 index 0000000000..c19ecd3cdf Binary files /dev/null and b/docs/modules/opac/assets/images/media/My_Lists.png differ diff --git a/docs/modules/opac/assets/images/media/My_Lists1.jpg b/docs/modules/opac/assets/images/media/My_Lists1.jpg new file mode 100644 index 0000000000..feb5fe32ec Binary files /dev/null and b/docs/modules/opac/assets/images/media/My_Lists1.jpg differ diff --git a/docs/modules/opac/assets/images/media/My_Lists3.jpg b/docs/modules/opac/assets/images/media/My_Lists3.jpg new file mode 100644 index 0000000000..562749bad0 Binary files /dev/null and b/docs/modules/opac/assets/images/media/My_Lists3.jpg differ diff --git a/docs/modules/opac/assets/images/media/My_Lists6.jpg b/docs/modules/opac/assets/images/media/My_Lists6.jpg new file mode 100644 index 0000000000..ac11709917 Binary files /dev/null and b/docs/modules/opac/assets/images/media/My_Lists6.jpg differ diff --git a/docs/modules/opac/assets/images/media/My_Lists7.jpg b/docs/modules/opac/assets/images/media/My_Lists7.jpg new file mode 100644 index 0000000000..06c2ed7904 Binary files /dev/null and b/docs/modules/opac/assets/images/media/My_Lists7.jpg differ 
diff --git a/docs/modules/opac/assets/images/media/My_Lists_dd.png b/docs/modules/opac/assets/images/media/My_Lists_dd.png new file mode 100644 index 0000000000..9f41ad5e21 Binary files /dev/null and b/docs/modules/opac/assets/images/media/My_Lists_dd.png differ diff --git a/docs/modules/opac/assets/images/media/advholdoption_6.jpg b/docs/modules/opac/assets/images/media/advholdoption_6.jpg new file mode 100644 index 0000000000..71e7585fd9 Binary files /dev/null and b/docs/modules/opac/assets/images/media/advholdoption_6.jpg differ diff --git a/docs/modules/opac/assets/images/media/advsrchpg_1.jpg b/docs/modules/opac/assets/images/media/advsrchpg_1.jpg new file mode 100644 index 0000000000..32d465a1c7 Binary files /dev/null and b/docs/modules/opac/assets/images/media/advsrchpg_1.jpg differ diff --git a/docs/modules/opac/assets/images/media/catalogue-10.png b/docs/modules/opac/assets/images/media/catalogue-10.png new file mode 100644 index 0000000000..8cb6c4374e Binary files /dev/null and b/docs/modules/opac/assets/images/media/catalogue-10.png differ diff --git a/docs/modules/opac/assets/images/media/catalogue-3.png b/docs/modules/opac/assets/images/media/catalogue-3.png new file mode 100644 index 0000000000..610d4a9fea Binary files /dev/null and b/docs/modules/opac/assets/images/media/catalogue-3.png differ diff --git a/docs/modules/opac/assets/images/media/catalogue-5.png b/docs/modules/opac/assets/images/media/catalogue-5.png new file mode 100644 index 0000000000..dc8cbf81bd Binary files /dev/null and b/docs/modules/opac/assets/images/media/catalogue-5.png differ diff --git a/docs/modules/opac/assets/images/media/catalogue-6.png b/docs/modules/opac/assets/images/media/catalogue-6.png new file mode 100644 index 0000000000..2cf678c27c Binary files /dev/null and b/docs/modules/opac/assets/images/media/catalogue-6.png differ diff --git a/docs/modules/opac/assets/images/media/catalogue-7.png b/docs/modules/opac/assets/images/media/catalogue-7.png new file mode 100644 
index 0000000000..2ebec0c7af Binary files /dev/null and b/docs/modules/opac/assets/images/media/catalogue-7.png differ diff --git a/docs/modules/opac/assets/images/media/catalogue-8.png b/docs/modules/opac/assets/images/media/catalogue-8.png new file mode 100644 index 0000000000..ae3973f0b3 Binary files /dev/null and b/docs/modules/opac/assets/images/media/catalogue-8.png differ diff --git a/docs/modules/opac/assets/images/media/catalogue-8a.png b/docs/modules/opac/assets/images/media/catalogue-8a.png new file mode 100644 index 0000000000..2eb504a0f1 Binary files /dev/null and b/docs/modules/opac/assets/images/media/catalogue-8a.png differ diff --git a/docs/modules/opac/assets/images/media/catalogue-9.png b/docs/modules/opac/assets/images/media/catalogue-9.png new file mode 100644 index 0000000000..8692d738ed Binary files /dev/null and b/docs/modules/opac/assets/images/media/catalogue-9.png differ diff --git a/docs/modules/opac/assets/images/media/message_center10.PNG b/docs/modules/opac/assets/images/media/message_center10.PNG new file mode 100644 index 0000000000..9a25289175 Binary files /dev/null and b/docs/modules/opac/assets/images/media/message_center10.PNG differ diff --git a/docs/modules/opac/assets/images/media/message_center11.PNG b/docs/modules/opac/assets/images/media/message_center11.PNG new file mode 100644 index 0000000000..a2b3ed71fb Binary files /dev/null and b/docs/modules/opac/assets/images/media/message_center11.PNG differ diff --git a/docs/modules/opac/assets/images/media/message_center12.PNG b/docs/modules/opac/assets/images/media/message_center12.PNG new file mode 100644 index 0000000000..d81efdc8f0 Binary files /dev/null and b/docs/modules/opac/assets/images/media/message_center12.PNG differ diff --git a/docs/modules/opac/assets/images/media/mrholdgf_9.jpg b/docs/modules/opac/assets/images/media/mrholdgf_9.jpg new file mode 100644 index 0000000000..32a2d59c73 Binary files /dev/null and b/docs/modules/opac/assets/images/media/mrholdgf_9.jpg 
differ diff --git a/docs/modules/opac/assets/images/media/my_list_call_numbers.png b/docs/modules/opac/assets/images/media/my_list_call_numbers.png new file mode 100644 index 0000000000..62e75e36d7 Binary files /dev/null and b/docs/modules/opac/assets/images/media/my_list_call_numbers.png differ diff --git a/docs/modules/opac/assets/images/media/opensearch1.png b/docs/modules/opac/assets/images/media/opensearch1.png new file mode 100644 index 0000000000..9311defc00 Binary files /dev/null and b/docs/modules/opac/assets/images/media/opensearch1.png differ diff --git a/docs/modules/opac/assets/images/media/opensearch2.png b/docs/modules/opac/assets/images/media/opensearch2.png new file mode 100644 index 0000000000..630cd39701 Binary files /dev/null and b/docs/modules/opac/assets/images/media/opensearch2.png differ diff --git a/docs/modules/opac/assets/images/media/opensearch3.png b/docs/modules/opac/assets/images/media/opensearch3.png new file mode 100644 index 0000000000..832febddda Binary files /dev/null and b/docs/modules/opac/assets/images/media/opensearch3.png differ diff --git a/docs/modules/opac/assets/images/media/opensearch4.png b/docs/modules/opac/assets/images/media/opensearch4.png new file mode 100644 index 0000000000..22a04e35a9 Binary files /dev/null and b/docs/modules/opac/assets/images/media/opensearch4.png differ diff --git a/docs/modules/opac/assets/images/media/other-formats-and-editions.png b/docs/modules/opac/assets/images/media/other-formats-and-editions.png new file mode 100644 index 0000000000..1c9565f64c Binary files /dev/null and b/docs/modules/opac/assets/images/media/other-formats-and-editions.png differ diff --git a/docs/modules/opac/assets/images/media/placehold_5.jpg b/docs/modules/opac/assets/images/media/placehold_5.jpg new file mode 100644 index 0000000000..0910c3467d Binary files /dev/null and b/docs/modules/opac/assets/images/media/placehold_5.jpg differ diff --git a/docs/modules/opac/assets/images/media/recorddetailpg_8.jpg 
b/docs/modules/opac/assets/images/media/recorddetailpg_8.jpg new file mode 100644 index 0000000000..7835c360a1 Binary files /dev/null and b/docs/modules/opac/assets/images/media/recorddetailpg_8.jpg differ diff --git a/docs/modules/opac/assets/images/media/searchfilters1.PNG b/docs/modules/opac/assets/images/media/searchfilters1.PNG new file mode 100644 index 0000000000..e5cfe323d5 Binary files /dev/null and b/docs/modules/opac/assets/images/media/searchfilters1.PNG differ diff --git a/docs/modules/opac/assets/images/media/searchfilters2.PNG b/docs/modules/opac/assets/images/media/searchfilters2.PNG new file mode 100644 index 0000000000..02af8d3d00 Binary files /dev/null and b/docs/modules/opac/assets/images/media/searchfilters2.PNG differ diff --git a/docs/modules/opac/assets/images/media/srchresultpg2_3.jpg b/docs/modules/opac/assets/images/media/srchresultpg2_3.jpg new file mode 100644 index 0000000000..cf1886d2f8 Binary files /dev/null and b/docs/modules/opac/assets/images/media/srchresultpg2_3.jpg differ diff --git a/docs/modules/opac/assets/images/media/srchresultpg3_4.jpg b/docs/modules/opac/assets/images/media/srchresultpg3_4.jpg new file mode 100644 index 0000000000..bb21800e32 Binary files /dev/null and b/docs/modules/opac/assets/images/media/srchresultpg3_4.jpg differ diff --git a/docs/modules/opac/assets/images/media/srchresultpg4_7.jpg b/docs/modules/opac/assets/images/media/srchresultpg4_7.jpg new file mode 100644 index 0000000000..ceb9783c3c Binary files /dev/null and b/docs/modules/opac/assets/images/media/srchresultpg4_7.jpg differ diff --git a/docs/modules/opac/assets/images/media/srchresultpg_2.jpg b/docs/modules/opac/assets/images/media/srchresultpg_2.jpg new file mode 100644 index 0000000000..0026285aa5 Binary files /dev/null and b/docs/modules/opac/assets/images/media/srchresultpg_2.jpg differ diff --git a/docs/modules/opac/assets/images/media/textcn1.png b/docs/modules/opac/assets/images/media/textcn1.png new file mode 100644 index 
0000000000..27f19adff8 Binary files /dev/null and b/docs/modules/opac/assets/images/media/textcn1.png differ diff --git a/docs/modules/opac/assets/images/media/using-opac-view-permalink.png b/docs/modules/opac/assets/images/media/using-opac-view-permalink.png new file mode 100644 index 0000000000..a81bbee498 Binary files /dev/null and b/docs/modules/opac/assets/images/media/using-opac-view-permalink.png differ diff --git a/docs/modules/opac/nav.adoc b/docs/modules/opac/nav.adoc new file mode 100644 index 0000000000..6787fc0c8a --- /dev/null +++ b/docs/modules/opac/nav.adoc @@ -0,0 +1,12 @@ +* xref:opac:introduction.adoc[Using the Public Access Catalog] +** xref:opac:using_the_public_access_catalog.adoc[Using the Public Access Catalog] +** xref:opac:my_lists.adoc[My Lists] +** xref:opac:batch_actions_from_search.adoc[Batch Actions from Search] +** xref:opac:kids_opac.adoc[Kids OPAC] +** xref:opac:catalog_browse.adoc[Catalog Browse] +** xref:opac:advanced_features.adoc[Bibliographic Search Enhancements] +** xref:opac:tpac_meta_record_holds.adoc[TPAC Metarecord Search and Metarecord Level Holds] +** xref:opac:linked_libraries.adoc[Library Information Pages] +** xref:opac:opensearch.adoc[Adding Evergreen Search to Web Browsers] +** xref:opac:search_form.adoc[Adding an Evergreen search form to a web page] + diff --git a/docs/modules/opac/pages/_attributes.adoc b/docs/modules/opac/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/opac/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. 
+include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/opac/pages/advanced_features.adoc b/docs/modules/opac/pages/advanced_features.adoc new file mode 100644 index 0000000000..af27cf697c --- /dev/null +++ b/docs/modules/opac/pages/advanced_features.adoc @@ -0,0 +1,92 @@ += Bibliographic Search Enhancements = +:toc: + +Enhancements to the bibliographic search function enable you to search for records that were created, edited, or deleted within a date range. You can use the catalog interface or the record feed to search for records with specific date ranges. + +Note that all dates should be formatted as YYYY-MM-DD and should be included in parentheses. + + +== Use the Catalog to Retrieve Records with Specified Date Ranges: == + + +=== Search by Create Date or Range === + +To find records that were created on or after a specific date, enter the term, create_date, and the date in the catalog search field. For example, to find records that were created on or after April 1, 2013, enter the following into the catalog search field: + + +create_date(2013-04-01) + + +To find records that were created within a specific date range, enter the term, create_date, followed by comma-separated dates in parentheses. For example, to find records that were created between April 1, 2013 and April 8, 2013, enter the following into the catalog search field: + + +create_date(2013-04-01,2013-04-08) + + + + +=== Search by Edit Date or Range === + + +To find records that were edited on or before a specific date, enter the term, edit_date, and the date in the catalog search field. The date should be preceded by a comma. For example, to find records that were edited on or before April 1, 2013, enter the following into the catalog search field: + + +edit_date(,2013-04-01) + + +To find records that were edited on or after a specific date, enter the term, edit_date, and the date in the catalog search field. 
For example, to find records that were edited on or after April 1, 2013, enter the following into the catalog search field: + + +edit_date(2013-04-01) + + +To find records that were edited within a specific range, enter the term, edit_date, followed by comma-separated dates in parentheses. For example, to find records that were edited between April 1, 2013 and April 8, 2013, enter the following into the catalog search field: + + +edit_date(2013-04-01,2013-04-08) + + + + +=== Search by Deleted Status === + + +To search for deleted records, enter in your catalog search field the term, edit_date, the date that you want to search, and the term, #deleted. For example, to find records that were deleted on or after April 1, 2013, enter the following into the catalog search field: + +edit_date(2013-04-01)#deleted + + + +To find records that were deleted within a specific range, enter the term, edit_date, followed by comma-separated dates in parentheses. For example, to find records that were deleted between April 1, 2013 and April 8, 2013, enter the following into the catalog search field: + + +edit_date(2013-04-01,2013-04-08)#deleted + + + +== Use a Feed to Retrieve Records with Specified Date Ranges: == + +You can use a feed to retrieve records that were created, edited, or deleted within specific date ranges by adding the dates to the catalog's URL. You can do this manually, or you can write a script that would automatically retrieve this information. + +To manually retrieve records that were created, edited, or deleted within a specific date, enter the terms and dates as specified above within the search terms in the URL. For example, to retrieve records created on or after April 1, 2013, enter the following in your URL: + + +http://test.esilibrary.com/opac/extras/opensearch/1.1/-/html-full?searchTerms=create_date(2013-04-01)&searchClass=keyword + + +NOTE: To retrieve deleted records, replace the # with %23 in your URL. 
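+The feed URLs above can also be assembled in a small shell script. The following is a minimal sketch, not part of Evergreen itself; the hostname is the example host used in this section, and the only real requirement is that the literal # be percent-encoded:

```shell
# Sketch: build a record-feed URL for records deleted within a date range.
# The hostname is the example host from this section; substitute your own.
BASE='http://test.esilibrary.com/opac/extras/opensearch/1.1/-/html-full'

# The '#' in '#deleted' must be percent-encoded as %23, or everything after
# it is treated as a URL fragment and never reaches the server.
TERMS='edit_date(2013-04-01,2013-04-08)#deleted'
ENCODED=$(printf '%s' "$TERMS" | sed 's/#/%23/g')

FEED_URL="${BASE}?searchTerms=${ENCODED}&searchClass=keyword"
echo "$FEED_URL"
```

+The same pattern works for create_date searches; only the contents of TERMS change.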
+ + +== Binary MARC21 Feeds == +Evergreen's OpenSearch service can return search results in many formats, including HTML, MARCXML, and MODS. As of version 2.4, it can also return results in binary MARC21 format. + +When making an HTTP request to an Evergreen system using the OpenSearch API, you must include the term "marc21" in the appropriate location within the URL to retrieve a feed of MARC21 records in a binary format. The following example demonstrates the appropriate form of the URL: + +http://test.esilibrary.com/opac/extras/opensearch/1.1/-/marc21?searchTerms=create_date%282013-04-01%29&searchClass=keyword + +You can add this term manually to the URL produced by a catalog search, or you can create a script that would retrieve this information automatically. + + + + + diff --git a/docs/modules/opac/pages/batch_actions_from_search.adoc b/docs/modules/opac/pages/batch_actions_from_search.adoc new file mode 100644 index 0000000000..c7da7bb19e --- /dev/null +++ b/docs/modules/opac/pages/batch_actions_from_search.adoc @@ -0,0 +1,108 @@ +[#batch_actions_from_search] += Batch Actions from Search = +:toc: + +== Introduction == + +The public catalog and staff interface display checkboxes on the search results pages, both for bibliographic records and metarecord constituents. Selecting one or more titles with these checkboxes adds the titles to a basket, which is viewable on the search bar as an icon. Users can then take a variety of actions on titles within the basket: place holds, print or email title details, add the items to a permanent list (from the public catalog) or add the titles to a bucket (from the staff interface). + + +== Using Batch Actions from Search in the Public Catalog == + +. Perform a search in the public catalog and retrieve a list of results. ++ +Checkboxes appear to the left of the number of each result. 
In the case of a metarecord search, checkboxes only appear on the list of metarecord constituents, as metarecords themselves cannot be placed in lists or in baskets. If you want to place the entire page of results on the list, click the _Select All_ checkbox at the top of the results list. ++ + +. Select one or more titles from the results list by clicking on the checkboxes. ++ +Selected titles are automatically added to the basket. A link above the results list tracks the number of titles selected and added to the basket. ++ +image::media/BatchActionsSearch-01.png[Selecting Search Results] ++ + +. The number of items can also be found with the basket icon above the search bar, next to the _Basket Actions_ drop-down. ++ +image::media/BatchActionsSearch-02.png[Basket Actions Drop-down] ++ + +. Click on the _Basket Actions_ drop-down next to the basket icon to take any of the following actions on titles within the basket: View Basket, Place Hold, Print Title Details, Email Title Details, Add Basket to Saved List, Clear Basket. + +image::media/BatchActionsSearch-03.png[Details of Basket Actions Drop-down] + + +=== Actions Initiated with the Basket Actions Drop-down === +* *View Basket* - This opens the basket in a new screen. Checkboxes allow for the selection of one or more titles within the basket. A drop-down menu appears above the list of titles that can be used to place holds, print title details, email title details, or remove titles from the basket. This menu reads _Actions for these items_. (See the next section for more information about this menu.) + +* *Place Hold* - This allows for placement of holds in batch for all of the items in the basket. If not already authenticated, users will be asked to login. Once authenticated, the holds process begins for all titles within the basket. Users can set _Advanced Hold Options_ for each title, as well as set the pickup location, hold notification and suspend options. 
+ +* *Print Title Details* - This allows for printing details of all titles within the basket. A confirmation page opens prior to printing that includes a checkbox option for clearing the basket after printing. + +* *Email Title Details* - This allows for emailing details of all titles within the basket. If not already authenticated, users will be asked to login. Once authenticated, the email process begins. A confirmation page opens prior to emailing that includes a checkbox option for clearing the basket after emailing. + +* *Add Basket to Saved List* - This allows basket items to be saved to a new permanent list. If not already authenticated, users will be asked to login. Once authenticated, the creation of a new permanent list begins. + +* *Clear Basket* - This removes all titles from the basket. + +=== View Basket -> _Actions for These Items_ Drop-down Menu === +Most actions described above can be taken on titles from within the basket with the _Actions for these items_ drop-down menu. This menu offers additional flexibility, as users can select some or all of the individual titles in the basket on which to place holds, print or email details, or remove from the basket. Users cannot add titles to permanent lists with this menu. + +image::media/BatchActionsSearch-04.png[Actions for These Items Drop-down Menu] + +== Using Batch Actions from Search in the Staff Interface == + +. Perform a search in the staff interface and retrieve a list of results. ++ +Checkboxes appear to the left of the number of each result. In the case of a metarecord search, checkboxes only appear on the list of metarecord constituents, as metarecords themselves cannot be placed in lists or in baskets. If you want to place the entire page of results on the list, click the Select All checkbox at the top of the results list. ++ + +. Select one or more titles from the results list by clicking on the checkboxes. Selected titles are automatically added to the basket.
A link above the results list tracks the number of titles selected and added to the basket. ++ +image::media/BatchActionsSearch-01.png[Selecting Search Results] ++ + +. The number of items can also be found with the basket icon above the search bar, next to the _Basket Actions_ drop-down. ++ +image::media/BatchActionsSearch-02.png[Basket Actions Drop-down] ++ + +. Click on the _Basket Actions_ drop-down next to the basket icon to take any of the following actions on titles within the basket: View Basket, Place Hold, Print Title Details, Email Title Details, Add Basket to Saved List, Clear Basket. + +image::media/BatchActionsSearch-03.png[Details of Basket Actions Drop-down] + + +=== Actions Initiated with the Basket Actions Drop-down === + +* *View Basket* - This opens the basket in a new screen. Checkboxes allow for the selection of one or more titles within the basket. A drop-down menu appears above the list of titles that can be used to place holds, print title details, email title details, or remove titles from the basket. This menu reads _Actions for these items_. (See the next section for more information about this menu.) + +* *Place Hold* - This allows for placement of holds in batch for all of the items in the basket. When initiated, the holds process begins for all titles within the basket. Staff can set _Advanced Hold Options_ for each title placed on hold, as well as set the pickup location, hold notification and suspend options. + +* *Print Title Details* - This allows for printing details of all titles within the basket. A confirmation page opens prior to printing that includes a checkbox option for clearing the basket after printing. + +* *Email Title Details* - This allows for emailing details of all titles within the basket. A confirmation page opens prior to emailing that includes a checkbox option for clearing the basket after emailing.
+ +* *Add Basket to Bucket* - This allows for titles within the basket to be added to an existing or new Record Bucket. +** Click the _Basket Actions_ drop-down and choose _Add Basket to Bucket_ +** To add the titles in your basket to an existing bucket, select the bucket from the _Name of existing bucket_ dropdown and click _Add to Selected Bucket_. +** To add the titles in your basket to a new bucket, enter the name of your new bucket in the text box and click _Add to New Bucket_. ++ +image::media/BatchActionsSearch-06.png[Add Basket Titles to Bucket] ++ +* *Clear Basket* - removes all items from the basket + + +=== View Basket -> Actions for These Items Drop-down Menu === + +Most of the basket actions can be taken on titles from within the basket with the _Actions for these items_ drop-down menu. This menu offers additional flexibility, as staff can select some or all of the individual titles within the basket on which to place holds, print or email details, or remove from the basket. Staff cannot place titles in Record Buckets from this menu. + +== Additional Information == + +The basket used to be called a *Temporary List* in previous versions of Evergreen. + +Titles also may be added from the detailed bibliographic record with the _Add to Basket_ link. + +Javascript must be enabled for checkboxes to appear in the public catalog; however, users can still add items to the basket and perform batch actions without Javascript. + +The default limit on the number of basket titles is 500; however, a template config.tt2 setting (+ctx.max_basket_size+) can be used to set a different limit. When the configured limit is reached, checkboxes are disabled unless or until some titles in the basket are removed. + +The permanent list management page within a patron’s account also now includes batch print and email actions.
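+As a sketch only, the +ctx.max_basket_size+ override mentioned above might look like this in `config.tt2`, assuming the Template Toolkit assignment syntax used in that file (the exact surrounding block varies by Evergreen version, and the value 1000 is a hypothetical example):

```
[%-
  # Hypothetical override: raise the basket limit from the default of 500.
  # Result-list checkboxes are disabled once this many titles are in the basket.
  ctx.max_basket_size = 1000;
-%]
```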
diff --git a/docs/modules/opac/pages/catalog_browse.adoc b/docs/modules/opac/pages/catalog_browse.adoc new file mode 100644 index 0000000000..85b8c8178b --- /dev/null +++ b/docs/modules/opac/pages/catalog_browse.adoc @@ -0,0 +1,31 @@ += Catalog Browse = +:toc: + +*Abstract* + +Catalog Browse enables you to browse bibliographic headings available in your catalog. You can click the hyperlinked bibliographic headings to retrieve catalog records that contain these headings. Also, if a given bibliographic heading is linked to an authority record, and if that authority is linked to another one via the first authority's See and See Also tags, the additional variants of (e.g.) an author's name will appear in your search results. + + +*Use Catalog Browse* + +. To access this feature, navigate to the catalog search page, and click the link, *Browse the Catalog*. By default, you can browse by title, author, subject, or series. System administrators can revise this list by editing the file at the location 'opac/parts/qtype_selector.tt2', and they can even make use of custom indices based on definitions in the database's 'config.metabib_field' table. + + +. Enter a term or part of a term to browse. Evergreen will retrieve a list of bibliographic headings that match your query. Click the *Back* and *Forward* buttons to page through your results. To limit your browse results to a specific branch or copy location group, select the appropriate unit from the drop down menu, and click *Go*. + +. Select a link from the search results. Each linked heading displays the number of bibliographic records associated with the heading. Appropriate information from linked authority records, if any, appears below the main entry heading. + +. To return to your list of results, click the browser's back button or *Browse the Catalog*. Evergreen will return you to your previous position in your list of results.
+ + + +*Administration* + +A new global flag warns users when they are entering a browse term that begins with an article. Systems administrators can create a regular expression to configure articles matched with specific indices that would prompt a warning for the user. By default, this setting is not enabled. + +. To enable this feature, click *Administration* -> *Server Administration* -> *Global Flags*. + +. Double click *Map of search classes to regular expressions to warn user about leading articles.* + +. Make changes, and click *Save*. + diff --git a/docs/modules/opac/pages/introduction.adoc b/docs/modules/opac/pages/introduction.adoc new file mode 100644 index 0000000000..4c2e5e7f9e --- /dev/null +++ b/docs/modules/opac/pages/introduction.adoc @@ -0,0 +1,13 @@ += Introduction = +:toc: + +Evergreen has a public OPAC that meets WCAG guidelines +(http://www.w3.org/WAI/intro/wcag), which helps make the OPAC accessible to +users with a range of disabilities. This part of the documentation explains how +to use the Evergreen public OPAC. It covers the basic catalog and more advanced +search topics. It also describes the ``My Account'' tools users have to find +information and manage their personal library accounts through the OPAC. This +section could be used by staff and patrons but would be more useful for staff as +a generic reference when developing custom guides and tutorials for their users. + + diff --git a/docs/modules/opac/pages/kids_opac.adoc b/docs/modules/opac/pages/kids_opac.adoc new file mode 100644 index 0000000000..8cd50373f2 --- /dev/null +++ b/docs/modules/opac/pages/kids_opac.adoc @@ -0,0 +1,193 @@ += Kids OPAC = +:toc: + +== Introduction == + +The Kids OPAC (KPAC) is a public catalog search that was designed for children +and teens. Colorful menu items, large buttons, and simple navigation make this +an appealing search interface for kids. Librarians will appreciate the flexible +configuration of the KPAC.
Librarians can create links to canned search results +for kids and can apply these links by branch. The KPAC uses the same infrastructure +as the Template Toolkit OPAC (TPAC), the adult catalog search, so you can easily +extend the KPAC using the code that already exists in the TPAC. Finally, third +party content, such as reader reviews, can be integrated into the KPAC. + +== Choose a Skin == + +Two skins, or design interfaces, have been created for the KPAC. The KPAC was +designed to run multiple skins on a single web server. A consortium, then, could +allow each library system to choose a skin for their patrons. + +*Default Skin:* + +In this skin, the search bar is the focal point of the top panel and is centered +on the screen. The search grid appears beneath the search bar. Help and Login +links appear at the top right of the interface. You can customize the appearance +and position of these links with CSS. After you login, the user name is displayed +in the top right corner, and the Login link becomes an option to Logout. + +image::media/Kids_OPAC1.jpg[Kids_OPAC1] + +*Alternate Monster Skin:* + +In this skin, the search bar shares the top panel with a playful monster. The +search grid appears beneath the search bar. Help and Login links appear in bold +colors at the top right of the interface although you can customize these with CSS. +After you login, the Login button disappears. + +image::media/Kids_OPAC2.jpg[Kids_OPAC2] + + +== Search the Catalog == + +You can search the catalog using only the search bar, the search grid, or the search +bar and the collection drop down menu. + + +*Search using the Search Bar* + +To search the catalog from the home page, enter text into the search bar in the +center of the main page, or enter text into the search bar to the right of the +results on a results page. Search indices are configurable, but the default search +indices include author, title and (key)word. 
+ +You can use this search bar to search the entire catalog, or, using the configuration +files, you can apply a filter so that search queries entered here retrieve records +that meet specific criteria, such as child-friendly copy locations or MARC audience +codes. + + +*Search using the Grid* + +From the home page, you can search the catalog by clicking on the grid of icons. +An icon search can link to an external web link or to a canned search. For example, +the icon, Musical Instruments, could link to the results of a catalog search on +the subject heading, Musical instruments. + +The labels on the grid of icons and the content that they search are configurable +by branch. You can use the grid to search the entire catalog, or, using the +configuration files, you can apply a filter so that search queries entered here +retrieve records associated with specific criteria, such as child-friendly copy +locations or MARC audience codes. + + +image::media/Kids_OPAC4.jpg[Kids_OPAC4] + + +You can add multiple layers of icons and searches to your grid: + + +image::media/Kids_OPAC5.jpg[Kids_OPAC5] + + + +*Search using the Search Bar and the _Collection_ Drop Down Menu* + +On the search results page, a search bar and drop down menu appear on the right +side of the screen. You can enter a search term into the search bar and select +a collection from the drop down menu to search these configured collections. +Configured collections might provide more targeted searching for your audience +than a general catalog search. For example, you could create collections by shelving +location or by MARC audience code. + + +image::media/Kids_OPAC17.jpg[Kids_OPAC17] + + +Using any search method, the search results display in the center of the screen. +Brief information displays beneath each title in the initial search result. The +brief information that displays, such as title, author, or publication information, +is configurable.
+ + +image::media/Kids_OPAC6.jpg[Kids_OPAC6] + + +For full details on a title, click *More Info*. The full details view displays the +configured fields from the title record and copy information. Click *Show more +copies* to display up to fifty results. Use the breadcrumbs at the top to trace +your search history. + + +image::media/Kids_OPAC7.jpg[Kids_OPAC7] + + + +== Place a Hold == + +From the search results, click the *Get it!* link to place a hold. + + +image::media/Kids_OPAC11.jpg[Kids_OPAC11] + + +The brief information about the title appears, and, if you have not yet logged in, +the *Get It!* panel appears with fields for username and password. Enter the username +and password, and select the pick up library. Then click *Submit*. If you have +already logged into your account, you need only to select the pick up location, +and click *Submit*. + + +image::media/Kids_OPAC12.jpg[Kids_OPAC12] + + +A confirmation of hold placement appears. You can return to the previous record +or to your search results. + + +image::media/Kids_OPAC13.jpg[Kids_OPAC13] + + + +== Save Items to a List == + +You can save items to a temporary list, or, if you are logged in, you can save to +a list of your own creation. To save items to a list, click the *Get it* button +on the Search Results page. + + +image::media/Kids_OPAC14.jpg[Kids_OPAC14] + + +Select a list in the *Save It!* panel beneath the brief information, and click *Submit*. + + +image::media/Kids_OPAC16.jpg[Kids_OPAC16] + + +A confirmation of the saved item appears. To save the item to a list or to manage +the lists, click the *My Lists* link to return to the list management feature in +the TPAC. + + +image::media/Kids_OPAC15.jpg[Kids_OPAC15] + + + +== Third Party Content == + +Third party content, such as reader reviews, can be viewed in the Kids OPAC. The +reviews link appears adjacent to the brief information.
+ +image::media/Kids_OPAC8.jpg[Kids_OPAC8] + + +Click the Reviews link to view reader reviews from a third party source. The reader +reviews open beneath the brief information. + + +image::media/Kids_OPAC9.jpg[Kids_OPAC9] + + +Summaries and reviews from other publications appear in separate tabs beneath the +copy information. + + +image::media/Kids_OPAC10.jpg[Kids_OPAC10] + +== Configuration Files == + +Configuration files allow you to define labels for canned searches in the icon +grid, determine how icons lead users to new pages, and define whether those icons +are canned searches or links to external resources. Documentation describing how +to use the configuration files is available in the Evergreen repository. diff --git a/docs/modules/opac/pages/linked_libraries.adoc b/docs/modules/opac/pages/linked_libraries.adoc new file mode 100644 index 0000000000..0e19f15533 --- /dev/null +++ b/docs/modules/opac/pages/linked_libraries.adoc @@ -0,0 +1,44 @@ += Library Information Pages = +:toc: + +The branch name displayed in the copy details section of the search results +page, the record summary page, and the kids catalog record summary page will +link to a library information page. This page is located at +`http://hostname/eg/opac/library/<shortname>` and at +`http://hostname/eg/opac/library/<id>`. + +Evergreen automatically generates this page based on information entered in +*Administration* -> *Server Administration* -> *Organizational Units* (actor.org_unit). + +The library information page displays: + +* The name of the library +* Opening hours +* E-mail address +* Phone number +* Mailing address +* The branch's parent library system + +An Evergreen site can also display a link to the library's web site on the +information page. + +To display a link: + +. Go to *Administration* -> *Local Administration* -> *Library Settings Editor*. +. Edit the *Library Information URL* setting for the branch.
+[NOTE] +If you set the URL at the system level, that URL will be used as the link for +the system and for all child branches that do not have their own URL set. +. Enter the URL in the following format: http://example.com/about.html. + +An Evergreen site may also opt to link directly from the copy details section +of the catalog to the library web site, bypassing the automatically-generated +library information page. To do so: + +. Add the library's URL to the *Library Information URL* setting as described +above. +. Go to *Administration* -> *Local Administration* -> *Library Settings Editor*. +. Set the *Use external "library information URL" in copy table, if available* +setting to true. + +The library information pages publish schema.org structured data, as do parts of the OPAC bibliographic record views, which can enable search engines and other systems to better understand your libraries and their resources. diff --git a/docs/modules/opac/pages/my_account.adoc b/docs/modules/opac/pages/my_account.adoc new file mode 100644 index 0000000000..8bc15502fb --- /dev/null +++ b/docs/modules/opac/pages/my_account.adoc @@ -0,0 +1,300 @@ + +[#my_account] += My Account = +:toc: + +// ``First Login Password Update'' the following documentation comes from JSPAC +// as of 2013-03-12 this feature did not exist in EG 2.4 TPAC, +// so I am commenting it out for now because it will be added in the future +// see bug report https://bugs.launchpad.net/evergreen/+bug/1013786 +// Yamil Suarez 2013-03-12 + +//// + + +== First Login Password Update == + + +indexterm:[my account, first login password update] + +Patrons are given temporary passwords when new accounts are created, or +forgotten passwords are reset by staff. Patrons MUST change their password to +something more secure when they log in for the first time. Once the password +is updated, they will not have to repeat this process for subsequent logins. + +. Open a web browser and go to your Evergreen OPAC +.
Click My Account +. Enter your _Username_ and _Password_. + * By default, your username is your library card number. + * Your password is a 4 digit code provided when your account was created. If +you have forgotten your password, contact your library to have it reset or use +the online section called ``<>'' tool. +//// + + +== Logging In == + +indexterm:[my account, logging in] + +Logging into your account from the online catalog: + +. Open a web browser and navigate to your Evergreen OPAC. +. Click _My Account_ . +. Enter your _Username_ and _Password_. +** By default, your username is your library card number. +** Your password is a 4 digit code provided when your account was created. If +you have forgotten your password, contact your local library to have it reset or + use the section called <> tool. +. Click Login. ++ +** At the first login, you may be prompted to change your password. +** If you updated your password, you must enter your _Username_ and _Password_ +again. ++ +. Your _Account Summary_ page displays. + + +To view your account details, click one of the _My Account_ tabs. + +To start a search, enter a term in the search box at the top of the page and +click _Search_! + +[CAUTION] +================= +If using a public computer be sure to log out! +================= + +[#password_reset] + +=== Password Reset === + +indexterm:[my account, password reset] + + +To reset your password: + +. click on the _Forgot your password?_ link located beside the login button. + +. Fill in the _Barcode_ and _User name_ text boxes. + +. A message should appear indicating that your request has been processed and +that you will receive an email with further instructions. + +. An email will be sent to the email address you have registered with your +Evergreen library. You should click on the link included in the email to open +the password reset page. Processing time may vary.
++ +[NOTE] +================= +You will need to have a valid email account set up in Evergreen for you to reset +your password. Otherwise, you will need to contact your library to have your +password reset by library staff. +================= ++ + +. At the reset email page you should enter the new password in the _New +password_ field and re-enter it in the _Re-enter new password_ field. + +. Click _Submit_. + +. A message should appear on the page indicating that your password has been reset. + +. Login to your account with your new password. + + +== Account Summary == + +indexterm:[my account, account summary] + +In the *My Account* -> *Account Summary* page, you can see when your account +expires and your total number of items checked out, items on hold, and items +ready for pickup. In addition, the Account Summary page lists your current fines +and payment history. + + +== Items Checked Out == + +indexterm:[my account, items checked out] + +Users can manage items currently checked out, such as renewing specific items. Users +can also view overdue items and see how many renewals they have remaining for +a specific item. + +As of Evergreen version 2.9, sorting of selected columns is available in the + _Items Checked Out_ and _Check Out History_ pages. Clicking on the appropriate + column heads sorts the contents from "ascending" to "descending" to "no sort". +(The "no sort" restores the original list as presented in the screen.) The sort +indicator (an up or down arrow) is placed to the right of the column head, as +appropriate. + +Within *Items Checked Out* -> *Current Items Checked Out*, the following column + headers can be sorted: _Title_, _Author_, _Renewals Left_, _Due Date_, +_Barcode_, and _Call Number_.
+ +Within *Items Checked Out* -> *Check Out History*, the following column headers +can be sorted: _Title_, _Author_, _Checkout Date_, _Due Date_, _Date Returned_, +_Barcode_, and _Call Number_. + +[NOTE] +========== +To protect patron privacy, the Check Out History will be completely blank unless the patron has previously opted in under the _Account Preferences_ tab, in the _Search and History Preferences_ +area. +========== + + +== Holds == + +indexterm:[my account, holds] + +From *My Account*, patrons can see *Items on Hold* and *Holds History* and +manage items currently being requested. In *Holds* -> *Items on Hold*, the +content shown can be sorted by clicking on the following column headers: +_Title_, _Author_, and _Format_ (based on format name represented by the icon). + +Actions include: + +* Suspend - set a period of time during which the hold will not become active, +such as during a vacation +* Activate - manually remove the suspension +* Cancel - remove the hold request + +Edit options include: + +* Change pick up library +* Change the _Cancel unless filled by_ date, also known as the hold expiration +date +* Change the status of the hold to either active or suspended. +* Change the _If suspended, activate on_ date, which reactivates a suspended +hold at the specified date + +To edit items on hold: + +. Login to _My Account_, click the _Holds_ tab. +. Select the hold to modify. +. Click _Edit_ for selected holds. +. Select the change to make and follow the instructions. + +[NOTE] +========== +To protect patron privacy, the Holds History will be completely blank unless the patron has previously opted in under the _Account Preferences_ tab, in the _Search and History Preferences_ +area. +========== + +== Account Preferences == + +indexterm:[my account, account preferences] + +From here you can manage display preferences including your *Personal +Information*, *Notification Preferences*, and *Search and History Preferences*.
+Additional static information, such as your _Account Expiration Date_, can be +found under Personal Information. + +For example: + +* Personal Information + +** change password - allows patrons to change their password + +** change email address - allows patrons to change their email address. + + + +* Notification Preferences + +** _Notify by Email_ by default when a hold is ready for pickup? + +** _Notify by Phone_ by default when a hold is ready for pickup? + +** _Default Phone Number_ + + +* Search and History Preferences + +** Search hits per page + +** Preferred pickup location + +** Keep history of checked out items? + +** Keep history of holds? + +[WARNING] +======== +Turning off the _Keep history of checked out items?_ or _Keep history of holds?_ features will permanently delete all entries in the relevant patron screens. After this is unchecked, +there is no way for a patron to recover those data. +======== + + +After changing any of these settings, you must click _Save_ to store your +preferences. + +=== Authorize other people to use your account === + +indexterm:[Allow others to use my account] +indexterm:[checking out,materials on another patron's account] +indexterm:[holds,picking up another patron's] +indexterm:[privacy waiver] + + +If your library has enabled it, you can authorize other people to use +your account. In the Search and History Preferences tab +under Account Preferences, find the section labeled "Allow others to use +my account". Enter the name and indicate that the +specified person is allowed to place holds, pickup holds, view +borrowing history, and check out items on their account. This +information will also be visible to circulation staff at your library. + + + +indexterm:[holds, preferred pickup location] + +== Patron Messages == + +The Patron Message Center provides a way for libraries to communicate with +patrons through messages that can be accessed through the patron's OPAC account. 
+Library staff can create messages manually by adding an OPAC-visible Patron
+Note to an account. Messages can also be automatically generated through an
+Action Trigger event. Patrons can access and manage messages within their OPAC
+account. See Circulation - Patron Record - Patron Message Center for more
+information on adding messages to patron accounts.
+
+*Viewing Patron Messages in the OPAC*
+
+Patrons will see a *Messages* tab in their OPAC account, as well as a
+notification of *Unread Messages* in the account summary.
+
+image::media/message_center11.PNG[Message Center 11]
+
+Patrons will see a list of the messages from the library by clicking on the
+*Messages* tab.
+
+image::media/message_center10.PNG[Message Center 10]
+
+Patrons can click on a message *Subject* to view the message. After viewing the
+message, it will automatically be marked as read. Patrons have the option to
+mark the message as unread or to delete it.
+
+image::media/message_center12.PNG[Message Center 12]
+
+NOTE: Patron-deleted messages will still appear in the patron's account in the
+staff client under Other -> Message Center.
+
+== Reservations ==
+
+When patrons place a reservation for a particular item at a particular time,
+they can check on its status using the *Reservations* tab.
+
+After they initially place a reservation, its status will display as _Reserved_.
+After staff capture the reservation, the status will change to _Ready for Pickup_.
+After the patron picks up the reservation, the status will change to _Checked Out_.
+Finally, after the patron returns the item, the reservation will be removed from
+the list.
+
+[NOTE]
+====================
+This interface pulls its timezone from the Library
+Settings Editor. Make sure that you have a timezone
+listed for your library in the Library Settings Editor
+before using this feature.
+====================
+
diff --git a/docs/modules/opac/pages/my_lists.adoc b/docs/modules/opac/pages/my_lists.adoc
new file mode 100644
index 0000000000..5be9c21e41
--- /dev/null
+++ b/docs/modules/opac/pages/my_lists.adoc
@@ -0,0 +1,68 @@
+= My Lists =
+:toc:
+
+The *My Lists* feature replaces the bookbag feature that was available in versions prior to 2.2. The *My Lists* feature is a part of the Template Toolkit OPAC that is available in version 2.2. This feature enables you to create temporary and permanent lists; create and edit notes for items in lists; place holds on items in lists; and share lists via RSS feeds and CSV files.
+
+There is now a direct link to *My Lists* from the *My Account* area in the top right part of the screen. This gives users the ability to quickly access their lists while logged into the catalog.
+
+As of version 3.2, xref:opac:batch_actions_from_search.adoc#batch_actions_from_search[Batch Actions from Search Results] has replaced the old Temporary Lists feature, as well as enabled multiple selections from a search results list.
+
+image::media/My_Lists.png[My Lists]
+
+== Create New Lists ==
+
+1) Log in to your account in the OPAC.
+
+2) Search for titles.
+
+3) Choose a title to add to your list. Click *Add to My List*.
+
+image::media/My_Lists1.jpg[Add to My List]
+
+4) Select an existing list, or create a new list.
+
+image::media/My_Lists_dd.png[List Dropdown]
+
+5) Scroll up to the top of the screen and click *My Lists*. Click on the name of your list to see any titles added to it.
+
+6) The *Actions for these items* menu on the left side of the screen lists the actions that you can apply to this list. You can place holds on titles in your list, print or email title details of titles in your list, and remove titles from your list.
+
+To perform actions on multiple list rows, check the box adjacent to the title of the item, and select the desired function.
+
+image::media/My_Lists3.jpg[List Actions]
+
+7) Click *Edit* to add or edit a note.
+
+8) Enter desired notes, and click *Save Notes*.
+
+image::media/My_Lists6.jpg[List Notes]
+
+9) You can keep your list private, or you can share it. To share your list, click *Share*, and click the orange RSS icon to share through an RSS reader. You can also click *HTML View* to share your list as an HTML link.
+
+You can also download your list into a CSV file by clicking *Download CSV*.
+
+image::media/My_Lists7.jpg[Share, Delete, Download List]
+
+10) When you no longer need a list, click *Delete List*.
+
+
+== Local Call Number in My Lists ==
+
+When a title is added to a list in the TPAC, a local call number will be displayed in the list to assist patrons in locating the physical item. Evergreen will look at the following locations to identify the most relevant call number to display in the list:
+
+1) Physical location - the physical library location where the search takes place
+
+2) Preferred library - the Preferred Search Location, which is set in patron OPAC account Search and History Preferences, or the patron's Home Library
+
+3) Search library - the search library or org unit that is selected in the OPAC search interface
+
+The call number that is displayed will be the most relevant call number to the searcher. If the patron is searching at the library, Evergreen will display a call number from that library location. If the patron is not searching at a library, but is logged in to their OPAC account, Evergreen will display a call number from their Home Library or Preferred Search Location. If the patron is not searching at the library and is not signed in to their OPAC account, then Evergreen will display a call number from the org unit, or library, that they choose to search in the OPAC search interface.
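The precedence described above amounts to a simple fallback chain. Here is a minimal sketch of that logic (illustrative only; the data shape and function name are hypothetical, not Evergreen's internals):

```python
def pick_call_number(call_numbers, physical_loc=None,
                     preferred_lib=None, search_lib=None):
    """Pick the most relevant call number to display in a list.

    ``call_numbers`` maps a library ID to that library's call number
    for the title. Locations are tried in the order of relevance
    described above: physical location, then preferred (or home)
    library, then the library selected in the search interface.
    """
    for loc in (physical_loc, preferred_lib, search_lib):
        if loc is not None and loc in call_numbers:
            return call_numbers[loc]
    return None

# Hypothetical data: libraries 101 and 104 each hold the title.
call_numbers = {101: "FIC ROW", 104: "F ROWLING"}
# Searching at library 101: its call number wins.
print(pick_call_number(call_numbers, physical_loc=101))
# Not at a library, but logged in with preferred library 104:
print(pick_call_number(call_numbers, preferred_lib=104))
```

If none of the three locations holds the title, nothing location-specific can be shown, which mirrors the fallback order in the list above.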
+
+The local call number and associated library location will appear in the list:
+
+image::media/my_list_call_numbers.png[Local Call Number in List]
+
+== My Lists Preferences ==
+
+Patrons can adjust the number of lists or list items displayed per page. This setting can be found under the *Account Preferences* tab, in the *My Lists Preferences* section.
+
diff --git a/docs/modules/opac/pages/new_skin_customizations.adoc b/docs/modules/opac/pages/new_skin_customizations.adoc
new file mode 100644
index 0000000000..2e7872966e
--- /dev/null
+++ b/docs/modules/opac/pages/new_skin_customizations.adoc
@@ -0,0 +1,131 @@
+= Creating a New Skin: the Bare Minimum =
+:toc:
+
+== Introduction ==
+
+When you adopt the TPAC as your catalog, you must create a new skin. This
+involves a combination of overriding template files and setting Apache
+directives to control the look and feel of your customized TPAC.
+
+== Apache directives ==
+There are a few Apache directives and environment variables of note for
+customizing TPAC behavior. These directives should generally live within a
+`<VirtualHost>` section of your Apache configuration.
+
+* `OILSWebDefaultLocale` specifies which locale to display when a user lands
+ on a page in the TPAC and has not chosen a different locale from the TPAC
+ locale picker. The following example shows the `fr_ca` locale being added
+ to the locale picker and being set as the default locale:
++
+------------------------------------------------------------------------------
+PerlAddVar OILSWebLocale "fr_ca"
+PerlAddVar OILSWebLocale "/openils/var/data/locale/opac/fr-CA.po"
+PerlAddVar OILSWebDefaultLocale "fr-CA"
+------------------------------------------------------------------------------
++
+* `physical_loc` is an Apache environment variable that sets the default
+ physical location, used for setting search scopes and determining the order
+ in which copies should be sorted. 
The following example demonstrates the + default physical location being set to library ID 104: ++ +------------------------------------------------------------------------------ +SetEnv physical_loc 104 +------------------------------------------------------------------------------ + +== Customizing templates == +When you install Evergreen, the TPAC templates include many placeholder images, +text, and links. You should override most of these to provide your users with a +custom experience that matches your library. Following is a list of templates +that include placeholder images, text, or links that you should override. + +NOTE: All paths are relative to `/openils/var/templates/opac` + +[[configtt2]] + +* `parts/config.tt2`: contains many configuration settings that affect the + behavior of the TPAC, including: + ** hiding the *Place Hold* button for available items + ** enabling RefWorks support for citation management + ** adding OpenURL resolution for electronic resources + ** enabling Google Analytics tracking for your TPAC + ** displaying the "Forgot your password?" prompt + ** controlling the size of cover art on the record details page + ** defining which facets to display, and in which order + ** controlling basic and advanced search options + ** controlling if the "Show More Details" button is visible or activated by +default in OPAC search results + ** hiding phone notification options (useful for libraries that do not do +phone notifications) + ** disallowing password or e-mail changes (useful for libraries that use +centralized authentication or single sign-on systems) + ** displaying a maintenance message in the public catalog and KPAC (this is +controlled by the _ctx.maintenance_message_ variable) + ** displaying previews of books when available from Google Books. 
This is
+controlled by the _ctx.google_books_preview_ variable, which is set to 0 by
+default to protect the privacy of users who might not want to share their
+browsing behavior with Google.
+ ** disabling the "Group Formats and Editions" search. This is controlled by
+setting the metarecords.disabled variable to 1.
+ ** setting the default search to a 'Group Formats and Editions' search. This
+is done by setting the search.metarecord_default variable to 1.
+* `parts/footer.tt2` and `parts/topnav_links.tt2`: contain customizable
+ links. Defaults like 'Link 1' will not mean much to your users!
+* `parts/homesearch.tt2`: holds the large Evergreen logo on the home page
+ of the TPAC. Substitute your library's logo, or if you are adventurous,
+ create a "most recently added items" carousel... and then share your
+ customization with the Evergreen community.
+* `parts/topnav_logo.tt2`: holds the small Evergreen logo that appears on the
+ top left of every page in the TPAC. You will also want to remove or change
+ the target of the link that wraps the logo and leads to the
+ http://evergreen-ils.org[Evergreen site].
+* `parts/login/form.tt2`: contains some assumptions about terminology and
+ examples that you might prefer to change to be more consistent with your own
+ site's existing practices. For example, you may not use 'PIN' at your library
+ because you want to encourage users to use a password that is more secure than
+ a four-digit number.
+* `parts/login/help.tt2`: contains links that point to http://example.com,
+ images with text on them (which is not an acceptable practice for
+ accessibility reasons), and promises of answers to frequently asked questions
+ that might not exist at your site.
+* `parts/login/password_hint.tt2`: contains a hint about your users' password
+ on first login that is misleading if your library does not set the initial
+ password for an account to the last four digits of the phone number associated
+ with the account.
+* `parts/myopac/main_refund_policy.tt2`: describes your library's refund
+ policy.
+* `parts/myopac/prefs_hints.tt2`: suggests that users should have a valid email
+ on file so they can receive courtesy and overdue notices. If your library
+ does not send out email notices, you should edit this to avoid misleading your
+ users.
+* `myopac/update_password_msg.tt2`: defines the password format that needs
+ to be used when setting a user password. If your Evergreen site has set
+ _Password format_ regex in the Library Settings Editor, you
+ should update the language to describe the format that should be used.
+* `password_reset.tt2`: in the msg_map section, you might want to change the
+ NOT_STRONG text that appears when the user tries to set a password that
+ does not match the required format. Ideally, this message will tell the user
+ how they should format the password.
+* `parts/css/fonts.tt2`: defines the font sizes for the TPAC in terms of one
+ base font size, with all other sizes derived from it in percentages. The
+ default is 12 pixels, but http://goo.gl/WfNkE[some design sites] strongly
+ suggest a base font size of 16 pixels. Perhaps you want to try '1em' as a
+ base to respect your users' preferences. You only need to change one number
+ in this file if you want to experiment with different options for your users.
+* `parts/css/colors.tt2`: chances are your library's official colors do not
+ match Evergreen's wall of dark green. This file defines the colors in use in
+ the standard Evergreen template. In theory you should be able to change just
+ a few colors and everything will work, but in practice you will need to
+ experiment to avoid light-gray-on-white low-contrast combinations.
+
+The following are templates that are less frequently overridden, but some
+libraries benefit from the added customization options.
+
+* `parts/advanced/numeric.tt2`: defines the search options of the Advanced
+Search > Numeric search. 
If you wanted to add a bib call number search option,
+which is different from the item copy call number search, you would add the
+following code to `numeric.tt2`:
++
+------------------------------------------------------------------------------
+
+------------------------------------------------------------------------------
+
diff --git a/docs/modules/opac/pages/opensearch.adoc b/docs/modules/opac/pages/opensearch.adoc
new file mode 100644
index 0000000000..18883cd1e1
--- /dev/null
+++ b/docs/modules/opac/pages/opensearch.adoc
@@ -0,0 +1,34 @@
+= Adding Evergreen Search to Web Browsers =
+:toc:
+
+== Adding OpenSearch to Firefox browser ==
+
+OpenSearch is a collection of simple formats for the sharing of search results.
+More information about OpenSearch can be found on their
+http://www.opensearch.org[website].
+
+The following example illustrates how to add an OpenSearch source to the list
+of search sources in a Firefox browser:
+
+. Navigate to any catalog page in your Firefox browser, click the dropdown
+ in the search box at the top right, and select *Add "Example Consortium OpenSearch"*.
+ The label will match the current search scope.
++
+image::media/opensearch1.png[opensearch1]
+
+. At this point, it will add a new search option for the location the catalog
+ is currently using. In this example, that is CONS (searching the whole
+ consortium).
++
+image::media/opensearch2.png[opensearch2]
+
+. Enter search terms to begin a keyword search using this source. The next
+ image illustrates an example search for "mozart" using the sample bib
+ record set.
++
+image::media/opensearch3.png[opensearch3]
+
+. You can select which search source to use by clicking on the dropdown
+ picker.
++ +image::media/opensearch4.png[opensearch4] diff --git a/docs/modules/opac/pages/search_form.adoc b/docs/modules/opac/pages/search_form.adoc new file mode 100644 index 0000000000..6cc3997241 --- /dev/null +++ b/docs/modules/opac/pages/search_form.adoc @@ -0,0 +1,92 @@ += Adding an Evergreen search form to a web page = +:toc: + +== Introduction == + +To enable users to quickly search your Evergreen catalog, you can add a +simple search form to any HTML page. The following code demonstrates +how to create a quick search box suitable for the header of your web +site: + +== Simple search form == + +[source,html] +------------------------------------------------------------------------------ +
+<form action="https://example.com/eg/opac/results" method="get" accept-charset="UTF-8"> <1>
+  <input type="search" name="query" size="33" maxlength="400"
+    value="" placeholder="Search catalog" />
+  <input type="hidden" name="qtype" value="keyword" /> <2>
+  <input type="hidden" name="locg" value="4" /> <3>
+  <input type="submit" value="Search" />
+</form>
+
+------------------------------------------------------------------------------
+<1> Replace ''example.com'' with the hostname for your catalog. To link to
+ the Kid's OPAC instead of the TPAC, replace ''opac'' with ''kpac''.
+<2> Replace ''keyword'' with ''title'', ''author'', ''subject'', or ''series''
+ if you want to provide more specific searches. You can even specify
+ ''identifier|isbn'' for an ISBN search.
+<3> Replace ''4'' with the ID number of the organizational unit at which you
+ wish to anchor your search. This is the value of the ''locg'' parameter in
+ your normal search.
+
+== Advanced search form ==
+
+[source,html]
+--------------------------------------------------------------------------------
+<form action="https://example.com/eg/opac/results" method="get" accept-charset="UTF-8">
+  <input type="search" name="query" value="" placeholder="Search catalog" />
+  <select name="qtype">
+    <option value="keyword">Keyword</option>
+    <option value="title">Title</option>
+    <option value="author">Author</option>
+  </select>
+  <select name="fi:item_type">
+    <option value="">All Item Types</option>
+    <option value="a">Books</option>
+    <option value="j">Musical sound recordings</option>
+  </select>
+  <select name="locg">
+    <option value="1">All Libraries</option>
+    <option value="4">Example Branch</option>
+  </select>
+  <input type="submit" value="Search" />
+</form>
+--------------------------------------------------------------------------------
+
+== Encoding ==
+
+For non-English characters it is vital to set the attribute `accept-charset="UTF-8"` in the form tag (as in the examples above). If the parameter is not set, records with non-English characters will not be retrieved.
+
+== Setting the document type ==
+
+You can set the document types to be searched using the attribute `option value=` in the form. For the value, use the MARC 21 code defining the type of record (i.e. https://www.loc.gov/marc/bibliographic/bdleader.html[Leader, position 06]).
+
+For example, for musical recordings you could use `<option value="j">Musical sound recording</option>`
+
+== Setting the library ==
+
+Instead of searching the entire consortium, you can set the library to be searched using the attribute `option value=` in the form. For the value, use the Evergreen database ID of the organizational unit.
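Submitting such a form issues an ordinary GET request against `/eg/opac/results`. As a rough sketch of the URL that results (assuming the example hostname `example.com` and the parameter names shown above; the helper function itself is hypothetical):

```python
from urllib.parse import urlencode

def results_url(query, qtype="keyword", item_type="", locg=4):
    """Build the GET URL that a catalog search form submits to."""
    params = {"query": query, "qtype": qtype,
              "fi:item_type": item_type, "locg": locg}
    return "https://example.com/eg/opac/results?" + urlencode(params)

url = results_url("golden compass", qtype="title")
print(url)
```

Note that the colon in `fi:item_type` is percent-encoded as `%3A`, which is why the parameter appears as `fi%3Aitem_type` in catalog URLs.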
+
+
diff --git a/docs/modules/opac/pages/search_url.adoc b/docs/modules/opac/pages/search_url.adoc
new file mode 100644
index 0000000000..d6ea158d3c
--- /dev/null
+++ b/docs/modules/opac/pages/search_url.adoc
@@ -0,0 +1,51 @@
+== Search URL ==
+
+indexterm:[search, URL]
+
+When performing a search or clicking on the details links, Evergreen constructs
+a GET request URL with the parameters of the search. The URLs for searches and
+details in Evergreen are persistent links: they can be saved, shared, and
+used later.
+
+Here is a basic search URL structure:
+
+ ++++[hostname]+++/eg/opac/results?query=[search term]&**qtype**=keyword&fi%3Aitem_type=&**locg**=[location id]
+
+=== locg Parameter ===
+This is the ID of the search location. It is an integer and matches the ID of the
+location the user selected in the location drop-down menu.
+
+=== qtype Parameter ===
+
+The _qtype_ parameter in the URL represents the search type and takes one
+of the following values:
+
+* Keyword
+* Title
+* Journal Title
+* Author
+* Subject
+* Series
+* Bib Call Number
+
+These match the options in the search type drop-down box.
+
+=== Sorting ===
+
+The _sort_ parameter sorts the results by one of these criteria.
+
+* `sort=pubdate` (publication date) - chronological order
+* `sort=titlesort` - alphabetical order
+* `sort=authorsort` - alphabetical order by family name first
+
+To change the sort direction of the results, the _sort_ parameter value has the
+`.descending` suffix added to it.
+
+* `sort=titlesort.descending`
+* `sort=authorsort.descending`
+* `sort=pubdate.descending`
+
+In the absence of the _sort_ parameter, the search results default to sorting by
+ diff --git a/docs/modules/opac/pages/sitemap.adoc b/docs/modules/opac/pages/sitemap.adoc new file mode 100644 index 0000000000..d66d246b22 --- /dev/null +++ b/docs/modules/opac/pages/sitemap.adoc @@ -0,0 +1,18 @@ += Sitemap generator = +:toc: + +A http://www.sitemaps.org[sitemap] directs search engines to the pages of +interest in a web site so that the search engines can intelligently crawl +your site. In the case of Evergreen, the primary pages of interest are the +bibliographic record detail pages. + +The sitemap generator script creates sitemaps that adhere to the +http://sitemaps.org specification, including: + +* limiting the number of URLs per sitemap file to no more than 50,000 URLs; +* providing the date that the bibliographic record was last edited, so + that once a search engine has crawled all of your sites' record detail pages, + it only has to reindex those pages that are new or have changed since the last + crawl; +* generating a sitemap index file that points to each of the sitemap files. + diff --git a/docs/modules/opac/pages/tpac_meta_record_holds.adoc b/docs/modules/opac/pages/tpac_meta_record_holds.adoc new file mode 100644 index 0000000000..59a548009a --- /dev/null +++ b/docs/modules/opac/pages/tpac_meta_record_holds.adoc @@ -0,0 +1,105 @@ += TPAC Metarecord Search and Metarecord Level Holds = +:toc: + +Metarecords are compilations of individual bibliographic records that represent +the same work. This compilation allows for several records to be represented on +a single line on the TPAC search results page, which can help to reduce result +duplications. + + +*Advanced Search Page* + +Selecting the *Group Formats and Editions* checkbox on the Advanced Search page +allows the user to perform a metarecord search. + +image::media/advsrchpg_1.jpg[] + +[TIP] +Administrators can also configure the catalog to default to a *Group Formats and +Editions* search by enabling the relevant config.tt2 setting on +the server. 
Setting this option will pre-select the checkbox on the Advanced
+Search and Search Result Pages. Users can remove the checkmark, but new searches
+will revert to the default search behavior.
+
+*Search Results Page*
+
+Within the Search Results page, users can also refine their searches and filter
+on metarecord search results by selecting the *Group Formats and Editions*
+checkbox.
+
+image::media/srchresultpg_2.jpg[]
+
+The metarecord search results will display both the representative metarecord
+bibliographic data and the combined metarecord holdings data (if the holdings
+data is OPAC visible).
+
+The number of records represented by the metarecord is displayed in parentheses
+next to the title.
+
+The formats contained within the metarecord are displayed under the title.
+
+image::media/srchresultpg2_3.jpg[]
+
+For the metarecord search result, the *Place Hold* link defaults to a metarecord
+level hold.
+
+image::media/srchresultpg3_4.jpg[]
+
+To place a metarecord level hold:
+
+. Click the *Place Hold* link.
+. Users who are not logged into their accounts will be directed to the *Log in
+to Your Account* screen, where they will enter their username and password.
+Users who are already logged into their accounts will be directed to the *Place
+Hold* screen.
+. Within the *Place Hold* screen, users can select the multiple formats and/or
+languages that are available.
+. Continue to enter any additional hold information (such as Pickup Location), if needed.
+. Click *Submit*.
+
+image::media/placehold_5.jpg[]
+
+Selecting multiple formats will not place all of these formats on hold for the
+user. For example, a user cannot select CD Audiobook and Book and expect to
+place both the CD and book on hold at the same time. Instead, the user is
+implying that either the CD format or the book format is the acceptable format
+to fill the hold. If no format is selected, then any of the available formats
+may be used to fill the hold. 
The same holds true for selecting multiple
+languages.
+
+*Advanced Hold Options*
+
+When users place a hold on an individual bibliographic record they will see an
+*Advanced Hold Options* link within the Place Hold screen. Clicking the
+*Advanced Hold Options* link will take the users into the metarecord level hold
+feature, enabling them to select multiple formats and/or languages.
+
+image::media/advholdoption_6.jpg[]
+
+*Metarecord Constituent Records Page*
+
+The TPAC includes a Metarecord Constituent Records page, which displays a
+listing of the individual bibliographic records grouped within the metarecord.
+Access the Metarecord Constituent Records page by clicking on the metarecord
+title on the Search Results page.
+
+image::media/srchresultpg4_7.jpg[]
+
+This will allow the user to view the results for grouped records.
+
+image::media/recorddetailpg_8.jpg[]
+
+*Show Holds on Bib*
+
+Within the staff client, *Show Holds on Bib* for a metarecord level hold will
+take the staff member into the Metarecord Constituent Records page.
+
+*Global Flag: OPAC Metarecord Hold Formats Attribute*
+
+To utilize the metarecord level hold feature, the Global Flag: OPAC Metarecord
+Hold Formats Attribute must be enabled and its value set to mr_hold_format,
+which is the system's default configuration.
+
+image::media/mrholdgf_9.jpg[]
+
+
diff --git a/docs/modules/opac/pages/using_the_public_access_catalog.adoc b/docs/modules/opac/pages/using_the_public_access_catalog.adoc
new file mode 100644
index 0000000000..800a8d3ef0
--- /dev/null
+++ b/docs/modules/opac/pages/using_the_public_access_catalog.adoc
@@ -0,0 +1,566 @@
+= Using the Public Access Catalog =
+:toc:
+
+== Basic Search ==
+
+indexterm:[OPAC]
+
+From the OPAC home, you can conduct a basic search of all materials owned by all
+libraries in your Evergreen system.
+
+This search can be as simple as typing keywords into the search box and clicking
+the _Search_ button. 
Or you can make your search more precise by limiting
+it by search field, material type, or library location.
+
+indexterm:[search box]
+
+The _Homepage_ contains a single search box for you to enter search terms. You
+can get to the _Homepage_ at any time by clicking the _Another Search_ link, the
+leftmost link on the bar above your search results in the catalogue, or you
+can enter a search anywhere you see a search box.
+
+You can choose to search by:
+
+indexterm:[search, keyword]
+indexterm:[search, title]
+indexterm:[search, journal title]
+indexterm:[search, author]
+indexterm:[search, subject]
+indexterm:[search, series]
+indexterm:[search, bib call number]
+
+* *Keyword*: finds the terms you enter anywhere in the entire record for an
+item, including title, author, subject, and other information.
+
+* *Title*: finds the terms you enter in the title of an item.
+
+* *Journal Title*: finds the terms you enter in the title of a serial bib
+record.
+
+* *Author*: finds the terms you enter in the author of an item.
+
+* *Subject*: finds the terms you enter in the subject of an item. Subjects are
+categories assigned to items according to a system such as the Library of
+Congress Subject Headings.
+
+* *Series*: finds the terms you enter in the title of a multi-part series.
+
+[TIP]
+=============
+To search an item copy call number, use <<numeric_search,Numeric Search>>
+=============
+
+=== Formats ===
+
+You can limit your search by formats based on MARC fixed field type:
+
+indexterm:[formats, books]
+indexterm:[formats, audiobooks]
+indexterm:[formats, video]
+indexterm:[formats, music]
+
+
+* *All Books*
+* *All Music*
+* *Audiocassette music recording*
+* *Blu-ray*
+* *Braille*
+* *Cassette audiobook*
+* *CD Audiobook*
+* *CD Music recording*
+* *DVD*
+* *E-audio*
+* *E-book*
+* *E-video*
+* *Equipment, games, toys*
+* *Kit*
+* *Large Print Book*
+* *Map*
+* *Microform*
+* *Music Score*
+* *Phonograph music recording*
+* *Phonograph spoken recording*
+* *Picture*
+* *Serials and magazines*
+* *Software and video games*
+* *VHS*
+
+
+=== Libraries ===
+
+If you are using a catalogue in a library or accessing a library’s online
+catalogue from its homepage, the search will return items for your local
+library. If your library has multiple branches, the result will display items
+available at your branch and all branches of your library system separately.
+
+
+== Advanced Search ==
+
+Advanced searches allow users to perform more complex searches by providing more
+options. Many kinds of searches can be performed from the _Advanced Search_
+screen. You can access it by clicking _Advanced Search_ on the catalogue
+_Homepage_ or search results screen.
+
+The available search options are the same as on the basic search, but you may
+use one or many of them simultaneously. If you want to combine more than three
+search options, use the _Add Search Row_ button to add more search input rows.
+Clicking the _X_ button will close the search input row.
+
+
+=== Sort Results ===
+
+indexterm:[advanced search, sort results]
+
+By default, the search results are in order of greatest to least relevance, see
+ <>. In the sort results menu you may select
+ to order the search results by relevance, title, author, or publication date.
+
+
+=== Search Library ===
+
+indexterm:[advanced search, search library]
+
+The current search library is displayed in the _Search Library_ drop-down menu.
+By default it is your library. The search returns results for your local library
+only. If your library system has multiple branches, use the _Search Library_ box
+to select different branches or the whole library system.
+
+
+=== Limit to Available ===
+
+indexterm:[advanced search, limit to available]
+
+
+This checkbox appears just below the _Search Library_ box. Select _Limit to
+Available_ to limit results to those titles that have items with a circulation
+status of "available" (by default, either _Available_ or _Reshelving_).
+
+=== Exclude Electronic Resources ===
+
+indexterm:[advanced search, exclude electronic resources]
+
+This checkbox is below _Limit to Available_. Select _Exclude Electronic
+Resources_ to limit results to those bibliographic records that do not have an
+"o" or "s" in the _Item Form_ fixed field (electronic forms); this overrides
+other form limiters.
+
+This feature is optional and will not appear for patrons or staff until enabled.
+
+[TIP]
+===============
+To display the *Exclude Electronic Resources* checkbox in the advanced search
+page and search results, set
+the 'ctx.exclude_electronic_checkbox' setting in config.tt2 to 1.
+===============
+
+
+=== Search Filter ===
+
+indexterm:[advanced search, search filters]
+
+You can filter your search by _Item Type_, _Item Form_, _Language_, _Audience_,
+_Video Format_, _Bib Level_, _Literary Form_, _Search Library_, and _Publication
+Year_. Publication year is inclusive. For example, if you set _Publication Year_
+Between 2005 and 2007, your results can include items published in 2005, 2006,
+and 2007.
+
+For each filter type, you may select multiple criteria by holding down the
+ _CTRL_ key as you click on the options. If nothing is selected for a filter,
+the search will return results as though all options are selected.
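The two rules just described, the inclusive publication-year range and an empty filter matching everything, can be sketched as follows (hypothetical record data; this is not Evergreen's internal filtering code):

```python
def matches(record, filters, year_range=None):
    """Return True if a record passes every selected filter.

    ``filters`` maps a field name to the set of selected values; an
    empty selection behaves as though every option were selected.
    ``year_range`` is an inclusive (low, high) pair of publication years.
    """
    for field, selected in filters.items():
        if selected and record.get(field) not in selected:
            return False
    if year_range is not None:
        low, high = year_range
        if not low <= record.get("pub_year", 0) <= high:
            return False
    return True

record = {"language": "eng", "pub_year": 2007}
print(matches(record, {"language": set()}))                  # empty filter: behaves as "all"
print(matches(record, {"language": {"eng"}}, (2005, 2007)))  # 2007 is included in the range
```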
+
+==== Search Filter Enhancements ====
+
+Enhancements to the Search Filters make it easier to view, remove, and modify search filters while viewing search results in the Evergreen OPAC. Filters that are selected while conducting an advanced search in the Evergreen OPAC now appear below the search box in the search results interface.
+
+For example, the screenshot below shows a Keyword search for "violin concerto" while filtering on Item Type: Musical sound recording and Shelving Location: Music.
+
+image::media/searchfilters1.PNG[search using search filters]
+
+In the search results, the Item Type and Shelving Location filters appear directly below the search box.
+
+image::media/searchfilters2.PNG[search results with search filter enhancements]
+
+Each filter can be removed by clicking the X next to the filter name, modifying the search directly within the search results screen. Below the search box on the search results screen, there is also a link to _Refine My Original Search_, which will bring the user back to the advanced search screen where the original search parameters selected can be viewed and modified.
+
+
+[#numeric_search]
+indexterm:[advanced search, numeric search]
+
+=== Numeric Search ===
+
+If you have details on the exact item you wish to search for, use the _Numeric
+Search_ tab on the advanced search page. Use the drop-down menu to select your
+search by _ISBN_, _ISSN_, _Bib Call Number_, _Call Number (Shelf Browse)_,
+_LCCN_, _TCN_, or _Item Barcode_. Enter the information and then click the
+_Search_ button.
+
+=== Expert Search ===
+
+indexterm:[advanced search, expert search]
+
+If you are familiar with MARC cataloging, you may search by MARC tag in the
+_Expert Search_ option on the left of the screen. Enter the three-digit tag
+number, the subfield if relevant, and the value or text that corresponds to the
+tag. For example, to search by publisher name, enter `260 b Random House`. 
To +search several tags simultaneously, use the _Add Row_ option. Click _Submit_ to +run the search. + +[TIP] +============= +Use the MARC Expert Search only as a last resort, as it can take much longer to +retrieve results than by using indexed fields. For example, rather than running +an expert search for "245 a Gone with the wind", simply do a regular title +search for "Gone with the wind". +============= + +== Boolean operators == + +indexterm:[search, AND operator] +indexterm:[search, OR operator] +indexterm:[search, NOT operator] +indexterm:[search, boolean] + +Classic search interfaces (that is, those used primarily by librarians) forced +users to learn the art of crafting search phrases with Boolean operators. To a +large extent this was due to the inability of those systems to provide relevancy +ranking beyond a "last in, first out" approach. Thankfully, Evergreen, like most +modern search systems, supports a rather sophisticated relevancy ranking system +that removes the need for Boolean operators in most cases. + +By default, all terms that have been entered in a search query are joined with +an implicit `AND` operator. Those terms are required to appear in the designated + fields to produce a matching record: a search for _golden compass_ will search +for entries that contain both _golden_ *and* _compass_. + +Words that are often considered Boolean operators, such as _AND_, _OR_, and +_NOT_, are not special in Evergreen: they are treated as just another search +term. For example, a title search for `golden and compass` will not return the +title _Golden Compass_. 
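The implicit `AND` behaviour described above can be sketched in a few lines of Python. This is an illustrative model only, not Evergreen's actual search code: every query term must appear in the indexed field, and words such as "and" are treated as ordinary search terms.

```python
# Illustrative sketch of implicit-AND keyword matching. This is NOT
# Evergreen's real search engine, just a model of the behaviour above.

def matches(query: str, title: str) -> bool:
    """True if every query term appears in the title (implicit AND)."""
    title_terms = set(title.lower().split())
    return all(term in title_terms for term in query.lower().split())

# "golden compass" requires both terms to be present:
print(matches("golden compass", "The Golden Compass"))      # True
# Boolean-looking words are just terms: "and" must literally appear.
print(matches("golden and compass", "The Golden Compass"))  # False
```

Note that because each term is checked independently, term order does not matter in this model, which mirrors the "enter your search words in any order" advice later in this chapter.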
+
+However, Evergreen does support Boolean searching for those rare cases where
+you might require it, using symbolic operators as follows:
+
+.Boolean symbolic operators
+[width="50%",options="header"]
+|=================================
+| Operator | Symbol | Example
+| AND | `&&` | `a && b`
+| OR | `\|\|` | `a \|\| b`
+| NOT | `-`_term_ | `a -b`
+|=================================
+
+== Search Tips ==
+
+indexterm:[search, stop words]
+indexterm:[search, truncation]
+
+Evergreen tries to approach search from the perspective of a major search
+engine: the user should simply be able to enter the terms they are looking for
+as a general keyword search, and Evergreen should return results that are most
+relevant given those terms. For example, you do not need to enter the author's
+last name first, nor do you need to enter an exact title or subject heading.
+Evergreen is also forgiving about plurals and alternate verb endings, so if you
+enter _dogs_, Evergreen will also find items with _dog_.
+
+The search engine has no _stop words_ (terms that are ignored by the search
+engine): a title search for `to be or not to be` (in any order) yields a list
+of titles with those words.
+
+* Don’t worry about white space, exact punctuation, or capitalization.
+
+. White spaces before or after a word are ignored. So, a search for `[ golden
+compass ]` gives the same results as a search for `[golden compass]`.
+
+. A double dash or a colon between words is reduced to a blank space. So, a
+title search for _golden:compass_ or _golden -- compass_ is equivalent to
+_golden compass_.
+
+. Punctuation marks occurring within a word are removed; the exception is \_.
+So, a title search for _gol_den com_pass_ gives no result.
+
+. Diacritical marks and solitary `&` or `|` characters located anywhere in the
+search term are removed. Words or letters linked together by `.` (dot) are
+joined together without the dot. So, a search for _go|l|den & comp.ass_ is
+equivalent to _golden compass_.
+
+. Upper and lower case letters are equivalent. So, _Golden Compass_ is the same
+as _golden compass_.
+
+* Enter your search words in any order. So, a search for _compass golden_ gives
+the same results as a search for _golden compass_. Adding more search words
+gives fewer but more specific results.
+
+** This is also true for author searches. Both _David Suzuki_ and _Suzuki,
+David_ will return results for the same author.
+
+* Use specific search terms. Evergreen will search for the words you specify,
+not the meanings, so choose search terms that are likely to appear in an item
+description. For example, the search _luxury hotels_ will produce more
+relevant results than _nice places to stay_.
+
+* Search for an exact phrase using double-quotes. For example, ``golden compass''.
+
+** The order of words is important for an exact phrase search. _golden compass_
+is different from _compass golden_.
+
+** White space, punctuation, and capitalization are removed from exact phrases
+as described above. So a phrase retains its search terms and their relative
+order, but not special characters and not case.
+
+** Two phrases are joined by *and*, so a search for _"golden compass"_ _"dark
+materials"_ is equivalent to _golden compass_ *and* _dark materials_.
+
+
+* **Truncation**
+Words may be right-hand truncated using an asterisk. Use a single asterisk * to
+truncate any number of characters.
+(example: _environment* agency_)
+
+
+== Search Methodology ==
+
+[#stemming]
+
+=== Stemming ===
+
+indexterm:[search, stemming]
+
+A search for _dogs_ will also return hits with the word _dog_, and a search for
+_parenting_ will return results with the words _parent_ and _parental_. This is
+because the search uses stemming to help return the most relevant results. That
+is, words are reduced to their stem (or root word) before the search is
+performed.
+
+The stemming algorithm relies on common English language patterns - like verbs
+ending in _ing_ - to find the stems. This is more efficient than looking up
+each search term in a dictionary and usually produces desirable results.
+However, it also means the search will sometimes reduce a word to an incorrect
+stem and cause unexpected results. To prevent a word or phrase from being
+stemmed, put it in double-quotes to force an exact search. For example, a
+search for `parenting` will also return results for `parental`, but a search
+for `"parenting"` will not.
+
+Understanding how stemming works can help you to create more relevant searches,
+but it is usually best not to try to anticipate how a search term will be
+stemmed. For example, searching for `gold compass` does not return the same
+results as `golden compass`, because `-en` is not a regular suffix in English,
+and therefore the stemming algorithm does not recognize _gold_ as a stem of
+_golden_.
+
+
+[#order_of_results]
+
+=== Order of Results ===
+
+indexterm:[search, order of results]
+
+By default, the results are listed in order of relevance, similar to a search
+engine like Google. The relevance is determined using a number of factors,
+including how often and where the search terms appear in the item description,
+and whether the search terms are part of the title, subject, author, or series.
+The results that best match your search are returned first rather than results
+appearing in alphabetical or chronological order.
+
+In the _Advanced Search_ screen, you may choose to order the search results by
+relevance, title, author, or publication date before you start the search. You
+can also re-order your search results using the _Sort Results_ drop-down list
+on the search result screen.
+
+
+== Search Results ==
+
+indexterm:[search results]
+
+The search results are a list of relevant works from the catalogue. If there
+are many results, they are divided into several pages.
At the top of the list, you
+can see the total number of results and move back and forth between the pages
+by clicking the _Previous_ or _Next_ links at the top or bottom of the list.
+If your results span multiple pages, you can also click an adjacent page number
+link to skip directly to that page of results. Here is an example:
+
+
+image::media/catalogue-3.png[catalogue-3]
+
+Brief information about the title, such as author, edition, publication date,
+etc., is displayed under each title. The icons beside the brief information
+indicate formats such as books, audio books, video recordings, and other
+formats. If you hover your mouse over an icon, a text explanation will show up
+in a small pop-up box.
+
+Clicking a title takes you to the title details page. Clicking an author
+searches all works by the author. If you want to place a hold on the title,
+click _Place Hold_ beside the format icons.
+
+On the top right, there is a _Limit to Available_ checkbox. Checking this box
+will filter out those titles that currently have no available copies in the
+library or libraries. Your search results will usually be redisplayed with
+fewer titles.
+
+When enabled, under the _Limit to Available_ checkbox, there is an _Exclude
+Electronic Resources_ checkbox. Checking this box will filter out materials
+that are cataloged as electronic in form.
+
+The _Sort by_ drop-down list is found at the top of the search results, beside
+the _Show More Details_ link. Clicking an entry on the list will re-sort your
+search results accordingly.
+
+
+=== Facets: Subjects, Authors, and Series ===
+
+indexterm:[search results, facets: subjects, authors, and series]
+
+At the left, you may see a list of facets for _Subjects_, _Authors_, and
+_Series_. Selecting any one of these links filters your current search results
+using that subject, author, or series to narrow down your results. The
+facet filters can be undone by clicking the link a second time, thus returning
+to your original results from before the facet was applied.
+
+image::media/catalogue-5.png[catalogue-5]
+
+
+=== Availability ===
+
+indexterm:[search results, availability]
+
+The number of available copies and total copies are displayed under each search
+result's call number. If you are using a catalogue inside a library or accessing
+a library’s online catalogue from its homepage, you will see how many copies are
+available in the library under each title, too. If the library belongs to a
+multi-branch library system, you will see an extra row under each title showing
+how many copies are available in all branches.
+
+
+image::media/catalogue-6.png[catalogue-6]
+
+image::media/catalogue-7.png[catalogue-7]
+
+You may also click the _Show More Details_ link at the top of the results page,
+next to the _Limit to available items_ checkbox, to view the individual call
+number, status, and shelving location of each search result's copies.
+
+
+=== Viewing a record ===
+
+indexterm:[search results, viewing a record]
+
+Click on a search result's title to view a detailed record of the title,
+including descriptive information, location and availability, current holds, and
+options for placing holds, adding the title to your list, and printing or
+emailing the record.
+
+image::media/catalogue-8.png[catalogue-8]
+image::media/catalogue-8a.png[catalogue-8a]
+
+== Details ==
+
+indexterm:[search results, details]
+
+The record shows details such as the cover image, title, author, publication
+information, and an abstract or summary, if available.
+
+Near the top of the record, users can easily see the number of copies that
+are currently available in the system and how many current holds are on the
+title.
+
+If there are other formats and editions of the same work in the
+database, links to those alternate formats will display. The formats used
+in this section are based on the configurable catalog icon formats.
+
+
+image::media/other-formats-and-editions.png[other-formats-and-editions]
+
+The Record Details view shows how many copies are at the library or libraries
+you have selected, and whether they are available or checked out. It also
+displays the Call number and Copy Location for locating the item on the shelves.
+Clicking on Text beside the call number will allow you to send the item's call
+number by text message, if desired. Clicking the location library link will
+reveal information about the owning library, such as its address and open hours.
+
+Below the local details you can open up various tabs to display more
+information. You can select Reviews and More to see the book’s summaries and
+reviews, if available. You can select Shelf Browser to view items appearing near
+the current item on the library shelves. Often this is a good way to browse for
+similar items. You can select MARC Record to display the record in MARC format.
+If your library offers the service, clicking on Awards, Reviews, and Suggested
+Reads will reveal that additional information.
+
+[NOTE]
+==========
+Copies are sorted by (in order): org unit, call number, part label, copy number,
+and barcode.
+==========
+
+
+
+=== Placing Holds ===
+
+indexterm:[search results, placing holds]
+
+Holds can be placed from either the title details page or the search results
+page. If the item is available, it will be pulled from the shelf and held for
+you. If all copies at your local library are checked out, you will be placed on
+a waiting list and you will be notified when items become available.
+
+On the title details page, you can select the _Place Hold_ link in the upper
+right corner of the record to reserve the item. You will need your library
+account user name and password. You may choose to be notified by phone or
+email.
+
+In the example below, the phone number in your account will automatically show
+up. Once you select the _Enable phone notifications for this hold_ checkbox,
+you can supply a different phone number for this hold only. The notification
+method will be selected automatically if you have set it up in your account
+preferences, but you can still change it on this screen. You may also suspend
+the hold temporarily by checking the _Suspend_ box. Click the _Help_ link
+beside it for details.
+
+You can view and cancel a hold at any time. Before your hold is captured (that
+is, before an item has been set aside and is waiting for you to pick up), you
+can edit, suspend, or activate it. You need to log in to your patron account to
+do this. From your account you can also set a _Cancel if not filled by_ date
+for your hold. If your hold has not been filled by that date, it will be
+cancelled, on the assumption that you no longer need the item.
+
+
+image::media/catalogue-9.png[catalogue-9]
+
+=== Permalink ===
+
+The record summary page offers a link to a shorter permalink that
+can be used for sharing the record with others. All URL parameters are stripped
+from the link with the exception of the locg and copy_depth parameters. Those
+parameters are maintained so that people can share a link that displays just
+the holdings from one library/system or displays holdings from all libraries
+with a specific library's holdings floating to the top.
+
+image::media/using-opac-view-permalink.png[Permalink]
+
+
+=== SMS Call Number ===
+
+If configured by the library system administrator, you may send yourself the
+call number via SMS message by clicking on the *Text* link, which appears beside
+the call number.
+
+image::media/textcn1.png[]
+
+[WARNING]
+==========
+Carrier charges may apply when using the SMS call number feature.
+==========
+
+
+=== Going back ===
+
+indexterm:[search results, going back]
+
+When you are viewing a specific record, you can always go back to your title
+list by clicking the _Search Results_ link at the top right or bottom left of
+the page.
+
+image::media/catalogue-10.png[catalogue-10]
+
+You can start a new search at any time by entering new search terms in the
+search box at the top of the page, or by selecting the _Another Search_ or
+_Advanced Search_ links in the left-hand sidebar.
+
diff --git a/docs/modules/opac/pages/visibility_on_the_web.adoc b/docs/modules/opac/pages/visibility_on_the_web.adoc
new file mode 100644
index 0000000000..d1fcb6183f
--- /dev/null
+++ b/docs/modules/opac/pages/visibility_on_the_web.adoc
@@ -0,0 +1,117 @@
+= Library visibility on the Web =
+:toc:
+
+== Introduction ==
+
+Evergreen follows a number of best practices to
+make library data integrate with the rest of the
+Web. Evergreen's public catalog pages are
+designed so that search engines can easily extract
+meaningful information about your library and
+collections. Evergreen is also preparing for an
+eventual shift toward linked open bibliographic
+data.
+
+== Catalog data in search engines ==
+
+Each record in the catalog is displayed to search
+engines using http://schema.org[schema.org] microdata.
+
+[IMPORTANT]
+Make sure your system administrator has not added
+a restrictive robots.txt file to your server.
+These files restrict search engines, up to the
+point of not allowing them to index your
+site at all.
+
+=== Details of the schema.org mapping ===
+
+ * Each item is listed as a
+   http://schema.org/Offer[schema:Offer], which is
+   the same category that an online bookseller might
+   use to describe an item for sale. These Offers
+   are always listed with a price of $0.00.
+ * Subject headings are exposed as
+   http://schema.org/about[schema:about]
+   properties.
+ * Electronic resources are assigned a + http://schema.org/url[schema:url] + property, and any notes or link text + are assigned a + http://schema.org/description[schema:description] + property. + * Given a Library of Congress relator code for + 1xx and 7xx fields, Evergreen surfaces the URL + for that relator code along with the + http://schema.org/contributor[schema:contributor] + property to give machines a better chance + of understanding how the person or organization + actually contributed to this work. + * Linking out to related records: + ** Given an LCCN (010 field), Evergreen links to + the corresponding Library of Congress record + using http://schema.org/sameAs[schema:sameAs]. + ** Given an OCLC number (035 field, subfield `a` + beginning with `(OCoLC)`), Evergreen links to + the corresponding WorldCat record using + http://schema.org/sameAs[schema:sameAs]. + ** Given a URI (024 field, subfield 2 = `'uri'`), + Evergreen links to the corresponding OCLC + Work Entity record using + http://schema.org/exampleOfWork[schema:exampleOfWork]. + + +=== Viewing microdata === +You can learn more about how Evergreen publicizes +these data by viewing them directly. The +http://linter.structured-data.org[structured data linter] +is a helpful tool for viewing microdata. + +. Using your favorite Web browser, navigate to a + record in your public catalog. +. Copy the URL that displays in your browser's + address bar. +. Go to http://linter.structured-data.org +. Under the _Lint by URL_ tab, paste your URL + into the text box. +. Click _Submit_ + +=== Other helpful features for search engines === + * Titles of catalog pages follow a + "Page title - Library name" pattern to provide + specific titles in search engine results pages, + browser bookmarks, and browser tabs. + * Links that robots should not crawl, such as search + result links, are marked with the + https://support.google.com/webmasters/answer/96569?hl=en[@rel="nofollow"] + property. 
+ * Catalog pages for record details and for library
+   descriptions express a
+   https://support.google.com/webmasters/answer/139066?hl=en[@rel="canonical"]
+   link to reduce the number of page URL variations
+   that could otherwise be derived from different
+   search parameters.
+ * Catalog pages that do not exist return a proper
+   404 "HTTP_NOT_FOUND" HTTP status code, and record
+   detail pages for records that have been deleted
+   now return a proper 410 "HTTP_GONE" HTTP status code.
+ * Record detail and library pages include
+   http://ogp.me/[Open Graph Protocol] markup.
+ * Each library has its own page at
+   _http://localhost/eg/opac/library/LIBRARY_SHORTNAME_
+   that provides machine-readable hours and contact
+   information.
+
+== SKOS support ==
+
+Some vocabularies used (or that could be used) for
+stock record attributes and coded value maps in Evergreen
+are published on the web using SKOS. The record
+attributes system can now associate Linked Data URIs
+with specific attribute values. In particular, seed data
+supplying URIs for the RDA Content Type, Media Type, and
+Carrier Type has been added.
+
+This is an experimental, "under-the-hood" feature that
+will be built upon in subsequent releases.
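The idea of associating coded attribute values with Linked Data URIs can be sketched as a simple lookup. This is an illustrative model only, not Evergreen's actual seed data or schema; the URI shown follows the rdaregistry.info naming pattern and the specific concept number is an assumption that should be verified against the registry.

```python
# Illustrative sketch: mapping coded record-attribute values to Linked
# Data (SKOS concept) URIs. NOT Evergreen's schema; the concept number
# below is an example and should be checked against rdaregistry.info.
from typing import Optional

RDA_CONTENT_TYPE_URIS = {
    # hypothetical entry for the RDA Content Type "text"
    "text": "http://rdaregistry.info/termList/RDAContentType/1020",
}

def uri_for_content_type(value: str) -> Optional[str]:
    """Return the Linked Data URI for an RDA Content Type value, if known."""
    return RDA_CONTENT_TYPE_URIS.get(value)

print(uri_for_content_type("text"))
```

A machine reading the catalog could then follow such a URI to the published SKOS vocabulary, rather than guessing what a local coded value means.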
+ diff --git a/docs/modules/reports/_attributes.adoc b/docs/modules/reports/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/reports/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports1.jpg b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports1.jpg new file mode 100644 index 0000000000..9436acc161 Binary files /dev/null and b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports1.jpg differ diff --git a/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2.jpg b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2.jpg new file mode 100644 index 0000000000..320f5310af Binary files /dev/null and b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2.jpg differ diff --git a/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2a.jpg b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2a.jpg new file mode 100644 index 0000000000..79faac392c Binary files /dev/null and b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports2a.jpg differ diff --git a/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports3.jpg b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports3.jpg new file mode 100644 index 0000000000..aa5fa81865 Binary files /dev/null and b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports3.jpg differ diff --git a/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports4.jpg b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports4.jpg new file mode 100644 index 0000000000..89b8125481 Binary files /dev/null and b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports4.jpg 
differ diff --git a/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports5.jpg b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports5.jpg new file mode 100644 index 0000000000..567bb86c89 Binary files /dev/null and b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports5.jpg differ diff --git a/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports6.jpg b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports6.jpg new file mode 100644 index 0000000000..27dfe78859 Binary files /dev/null and b/docs/modules/reports/assets/images/media/2_7_Enhancements_to_Reports6.jpg differ diff --git a/docs/modules/reports/assets/images/media/create-template-1.png b/docs/modules/reports/assets/images/media/create-template-1.png new file mode 100644 index 0000000000..0358768eb8 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-1.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-10.png b/docs/modules/reports/assets/images/media/create-template-10.png new file mode 100644 index 0000000000..12deb5ce9b Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-10.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-11.png b/docs/modules/reports/assets/images/media/create-template-11.png new file mode 100644 index 0000000000..003b05bc8d Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-11.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-12.png b/docs/modules/reports/assets/images/media/create-template-12.png new file mode 100644 index 0000000000..fe4d999663 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-12.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-13.png b/docs/modules/reports/assets/images/media/create-template-13.png new file mode 100644 index 
0000000000..0831126d09 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-13.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-15.png b/docs/modules/reports/assets/images/media/create-template-15.png new file mode 100644 index 0000000000..19734c337a Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-15.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-16.png b/docs/modules/reports/assets/images/media/create-template-16.png new file mode 100644 index 0000000000..71665a0ffb Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-16.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-17.png b/docs/modules/reports/assets/images/media/create-template-17.png new file mode 100644 index 0000000000..0a6308483d Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-17.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-19.png b/docs/modules/reports/assets/images/media/create-template-19.png new file mode 100644 index 0000000000..a62b2825f8 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-19.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-2.png b/docs/modules/reports/assets/images/media/create-template-2.png new file mode 100644 index 0000000000..20466a6723 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-2.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-20.png b/docs/modules/reports/assets/images/media/create-template-20.png new file mode 100644 index 0000000000..d4beb2bd28 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-20.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-21.png 
b/docs/modules/reports/assets/images/media/create-template-21.png new file mode 100644 index 0000000000..e2cb2f9ade Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-21.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-22.png b/docs/modules/reports/assets/images/media/create-template-22.png new file mode 100644 index 0000000000..b7f8532bf7 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-22.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-23.png b/docs/modules/reports/assets/images/media/create-template-23.png new file mode 100644 index 0000000000..03de846b1a Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-23.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-24.png b/docs/modules/reports/assets/images/media/create-template-24.png new file mode 100644 index 0000000000..ef381f6934 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-24.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-25.png b/docs/modules/reports/assets/images/media/create-template-25.png new file mode 100644 index 0000000000..88d2a17a59 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-25.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-26.png b/docs/modules/reports/assets/images/media/create-template-26.png new file mode 100644 index 0000000000..b6816c88e2 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-26.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-27.png b/docs/modules/reports/assets/images/media/create-template-27.png new file mode 100644 index 0000000000..ac60c901a3 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-27.png differ diff --git 
a/docs/modules/reports/assets/images/media/create-template-28.png b/docs/modules/reports/assets/images/media/create-template-28.png new file mode 100644 index 0000000000..69d6cf1c26 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-28.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-29.png b/docs/modules/reports/assets/images/media/create-template-29.png new file mode 100644 index 0000000000..1dcb26094f Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-29.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-3.png b/docs/modules/reports/assets/images/media/create-template-3.png new file mode 100644 index 0000000000..d2bf614be4 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-3.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-30.png b/docs/modules/reports/assets/images/media/create-template-30.png new file mode 100644 index 0000000000..9421cb5f78 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-30.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-31.png b/docs/modules/reports/assets/images/media/create-template-31.png new file mode 100644 index 0000000000..3a07d05822 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-31.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-32.png b/docs/modules/reports/assets/images/media/create-template-32.png new file mode 100644 index 0000000000..3150321434 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-32.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-4.png b/docs/modules/reports/assets/images/media/create-template-4.png new file mode 100644 index 0000000000..b6d7201afc Binary files /dev/null and 
b/docs/modules/reports/assets/images/media/create-template-4.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-5.png b/docs/modules/reports/assets/images/media/create-template-5.png new file mode 100644 index 0000000000..d24ad3c233 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-5.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-6.png b/docs/modules/reports/assets/images/media/create-template-6.png new file mode 100644 index 0000000000..47fd843b46 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-6.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-7.png b/docs/modules/reports/assets/images/media/create-template-7.png new file mode 100644 index 0000000000..8803035b01 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-7.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-8.png b/docs/modules/reports/assets/images/media/create-template-8.png new file mode 100644 index 0000000000..8c46199336 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-8.png differ diff --git a/docs/modules/reports/assets/images/media/create-template-9.png b/docs/modules/reports/assets/images/media/create-template-9.png new file mode 100644 index 0000000000..49fc2ef426 Binary files /dev/null and b/docs/modules/reports/assets/images/media/create-template-9.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_bool.png b/docs/modules/reports/assets/images/media/datatypes_bool.png new file mode 100644 index 0000000000..c00b467ebe Binary files /dev/null and b/docs/modules/reports/assets/images/media/datatypes_bool.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_id.png b/docs/modules/reports/assets/images/media/datatypes_id.png new file mode 100644 index 0000000000..df178e0a7f Binary files 
/dev/null and b/docs/modules/reports/assets/images/media/datatypes_id.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_int.png b/docs/modules/reports/assets/images/media/datatypes_int.png new file mode 100644 index 0000000000..3182ce0a37 Binary files /dev/null and b/docs/modules/reports/assets/images/media/datatypes_int.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_interval.png b/docs/modules/reports/assets/images/media/datatypes_interval.png new file mode 100644 index 0000000000..3c907fa274 Binary files /dev/null and b/docs/modules/reports/assets/images/media/datatypes_interval.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_link.png b/docs/modules/reports/assets/images/media/datatypes_link.png new file mode 100644 index 0000000000..559d756ca5 Binary files /dev/null and b/docs/modules/reports/assets/images/media/datatypes_link.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_money.png b/docs/modules/reports/assets/images/media/datatypes_money.png new file mode 100644 index 0000000000..34d5f36cad Binary files /dev/null and b/docs/modules/reports/assets/images/media/datatypes_money.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_orgunit.png b/docs/modules/reports/assets/images/media/datatypes_orgunit.png new file mode 100644 index 0000000000..bb11f53b96 Binary files /dev/null and b/docs/modules/reports/assets/images/media/datatypes_orgunit.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_text.png b/docs/modules/reports/assets/images/media/datatypes_text.png new file mode 100644 index 0000000000..e87683d6a0 Binary files /dev/null and b/docs/modules/reports/assets/images/media/datatypes_text.png differ diff --git a/docs/modules/reports/assets/images/media/datatypes_timestamp.png b/docs/modules/reports/assets/images/media/datatypes_timestamp.png new file mode 100644 index 0000000000..e2bb18c4a7 Binary files 
/dev/null and b/docs/modules/reports/assets/images/media/datatypes_timestamp.png differ diff --git a/docs/modules/reports/assets/images/media/folder-1.png b/docs/modules/reports/assets/images/media/folder-1.png new file mode 100644 index 0000000000..0e24910efb Binary files /dev/null and b/docs/modules/reports/assets/images/media/folder-1.png differ diff --git a/docs/modules/reports/assets/images/media/generate-report-1.png b/docs/modules/reports/assets/images/media/generate-report-1.png new file mode 100644 index 0000000000..a208d89e9e Binary files /dev/null and b/docs/modules/reports/assets/images/media/generate-report-1.png differ diff --git a/docs/modules/reports/assets/images/media/generate-report-10.png b/docs/modules/reports/assets/images/media/generate-report-10.png new file mode 100644 index 0000000000..9980b92096 Binary files /dev/null and b/docs/modules/reports/assets/images/media/generate-report-10.png differ diff --git a/docs/modules/reports/assets/images/media/generate-report-14.png b/docs/modules/reports/assets/images/media/generate-report-14.png new file mode 100644 index 0000000000..e6846b560a Binary files /dev/null and b/docs/modules/reports/assets/images/media/generate-report-14.png differ diff --git a/docs/modules/reports/assets/images/media/generate-report-2.png b/docs/modules/reports/assets/images/media/generate-report-2.png new file mode 100644 index 0000000000..8ba8a9773d Binary files /dev/null and b/docs/modules/reports/assets/images/media/generate-report-2.png differ diff --git a/docs/modules/reports/assets/images/media/generate-report-3.png b/docs/modules/reports/assets/images/media/generate-report-3.png new file mode 100644 index 0000000000..e5cdfdb3ae Binary files /dev/null and b/docs/modules/reports/assets/images/media/generate-report-3.png differ diff --git a/docs/modules/reports/assets/images/media/generate-report-8.png b/docs/modules/reports/assets/images/media/generate-report-8.png new file mode 100644 index 0000000000..72a700271c 
Binary files /dev/null and b/docs/modules/reports/assets/images/media/generate-report-8.png differ diff --git a/docs/modules/reports/assets/images/media/view-output-1.png b/docs/modules/reports/assets/images/media/view-output-1.png new file mode 100644 index 0000000000..7fa0aec3a2 Binary files /dev/null and b/docs/modules/reports/assets/images/media/view-output-1.png differ diff --git a/docs/modules/reports/assets/images/media/view-output-2.png b/docs/modules/reports/assets/images/media/view-output-2.png new file mode 100644 index 0000000000..b536d07234 Binary files /dev/null and b/docs/modules/reports/assets/images/media/view-output-2.png differ diff --git a/docs/modules/reports/assets/images/media/view-output-4.png b/docs/modules/reports/assets/images/media/view-output-4.png new file mode 100644 index 0000000000..54e364c3c9 Binary files /dev/null and b/docs/modules/reports/assets/images/media/view-output-4.png differ diff --git a/docs/modules/reports/assets/images/media/view-output-5.png b/docs/modules/reports/assets/images/media/view-output-5.png new file mode 100644 index 0000000000..c4d9f61308 Binary files /dev/null and b/docs/modules/reports/assets/images/media/view-output-5.png differ diff --git a/docs/modules/reports/nav.adoc b/docs/modules/reports/nav.adoc new file mode 100644 index 0000000000..2c48e8e1f2 --- /dev/null +++ b/docs/modules/reports/nav.adoc @@ -0,0 +1,13 @@ +* xref:reports:introduction.adoc[Reports] +** xref:reports:reporter_daemon.adoc[Starting and Stopping the Reporter Daemon] +** xref:reports:reporter_folder.adoc[Folders] +** xref:reports:reporter_create_templates.adoc[Creating Templates] +** xref:reports:reporter_generating_reports.adoc[Generating Reports from Templates] +** xref:reports:reporter_view_output.adoc[Viewing Report Output] +** xref:reports:reporter_cloning_shared_templates.adoc[Cloning Shared Templates] +** xref:reports:reporter_add_data_source.adoc[Adding Data Sources to Reporter] +** 
xref:reports:reporter_running_recurring_reports.adoc[Running Recurring Reports] +** xref:reports:reporter_template_terminology.adoc[Template Terminology] +** xref:reports:reporter_template_enhancements.adoc[Template Enhancements] +** xref:reports:reporter_export_usingpgAdmin.adoc[Exporting Report Templates Using phpPgAdmin] + diff --git a/docs/modules/reports/pages/README b/docs/modules/reports/pages/README new file mode 100644 index 0000000000..e69de29bb2 diff --git a/docs/modules/reports/pages/_attributes.adoc b/docs/modules/reports/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/reports/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/reports/pages/introduction.adoc b/docs/modules/reports/pages/introduction.adoc new file mode 100644 index 0000000000..81389652ee --- /dev/null +++ b/docs/modules/reports/pages/introduction.adoc @@ -0,0 +1,4 @@ += Introduction = +:toc: + +Learn how to create and use reports in Evergreen. diff --git a/docs/modules/reports/pages/reporter_add_data_source.adoc b/docs/modules/reports/pages/reporter_add_data_source.adoc new file mode 100644 index 0000000000..8496222bae --- /dev/null +++ b/docs/modules/reports/pages/reporter_add_data_source.adoc @@ -0,0 +1,260 @@ += Adding Data Sources to Reporter = +:toc: + +indexterm:[reports, adding data sources] + +You can further customize your Evergreen reporting environment by adding +additional data sources. + +The Evergreen reporter module does not build and execute SQL queries directly, +but instead uses a data abstraction layer called *Fieldmapper* to mediate queries +on the Evergreen database. Fieldmapper is also used by other core Evergreen DAO +services, including cstore and permacrud. The configuration file _fm_IDL.xml_ +contains the mapping between _Fieldmapper_ class definitions and the database.
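Because this file is parsed by several core services, it is worth confirming that it is still well-formed XML after any hand edit. This is a minimal sketch using *xmllint*; the path shown is the standard Evergreen location described below, so adjust it for your installation:

```shell
# Sketch: verify fm_IDL.xml is still well-formed XML after editing it.
# /openils/conf/fm_IDL.xml is the standard location on an Evergreen server.
xmllint --noout /openils/conf/fm_IDL.xml \
  && echo "fm_IDL.xml parses cleanly" \
  || echo "fm_IDL.xml has a syntax error" >&2
```

A clean parse does not guarantee the class definitions are semantically correct, but it catches the unbalanced-tag errors that can take down much of Evergreen.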
+The _fm_IDL.xml_ file is located in the _/openils/conf_ directory. + +indexterm:[fm_IDL.xml] + +There are three basic steps to adding a new data source. Each step is discussed +in more detail in the sections below. + +. Create a PostgreSQL query, view, or table that will provide the data for your +data source. +. Add a new class to _fm_IDL.xml_ for your data source. +. Restart the affected services to see the new data source in Reporter. + +There are two possible types of new data sources: + +indexterm:[PostgreSQL] + +indexterm:[SQL] + +* An SQL query built directly into the class definition in _fm_IDL.xml_. You can +use this method if you are only going to access this data source through the +Evergreen reporter and/or cstore code that you write. +* A new table or view in the Evergreen PostgreSQL database on which a class +definition in _fm_IDL.xml_ is based. You can use this method if you want to be able to +access this data source directly through SQL or using another reporting tool. + +== Create a PostgreSQL query, view, or table for your data source == + +indexterm:[PostgreSQL] + +You need to decide whether you will create your data source as a query, a view, +or a table. + +. Create a query if you are planning to access this data source only through the +Evergreen reporter and/or cstore code that you write. You will use this query to +create an IDL-only view. +. Create a view if you are planning to access this data source through other +methods in addition to the Evergreen reporter, or if you may need to do +performance tuning to optimize your query. +. You may also need to use an additional table as part of your data source if +you have additional data that's not included in the base Evergreen schema, or if you +need to use a table to store the results of a query for performance reasons. + +To develop and test queries, views, and tables, you will need: + +* Access to the Evergreen PostgreSQL database at the command line. This is +normally the psql application.
See the +https://www.postgresql.org/docs/[official PostgreSQL documentation] for +more information about PostgreSQL. +* Knowledge of the Evergreen database structure for the data that you want to +access. You can find this information in the +http://docs.evergreen-ils.org/2.2/schema/[Evergreen schema documentation]. + +indexterm:[database schema] + +If the views that you are creating are purely local in usage and are not intended +for contribution to the core Evergreen code, create the views and tables in the +extend_reporter schema. This schema is intended to be used for local +customizations and will not be modified during upgrades to the Evergreen system. + +You should make sure that you have an appropriate version control process for the SQL +used to create your data sources. + +Here's an example of a view created to incorporate some locally defined user +statistical categories: + +.example view for reports +------------------------------------------------------------ +create view extend_reporter.patronstats as +select u.id, +grp.name as "ptype", +rl.stat_cat_entry as "reg_lib", +gr.stat_cat_entry as "gender", +ag.stat_cat_entry as "age_group", +EXTRACT(YEAR FROM age(u.dob)) as "age", +hl.id as "home_lib", +u.create_date, +u.expire_date, +ms.balance_owed +from actor.usr u +join permission.grp_tree grp + on (u.profile = grp.id and (grp.parent = 2 or grp.name = 'patron')) +join actor.org_unit hl on (u.home_ou = hl.id) +left join money.open_usr_summary ms + on (ms.usr = u.id) +left join actor.stat_cat_entry_usr_map rl + on (u.id = rl.target_usr and rl.stat_cat = 4) +left join actor.stat_cat_entry_usr_map bt + on (u.id = bt.target_usr and bt.stat_cat = 3) +left join actor.stat_cat_entry_usr_map gr + on (u.id = gr.target_usr and gr.stat_cat = 2) +left join actor.stat_cat_entry_usr_map ag + on (u.id = ag.target_usr and
ag.stat_cat = 1) +where u.active = 't' and u.deleted <> 't'; +------------------------------------------------------------ + +== Add a new class to fm_IDL.xml for your data source == + +Once you have your data source, the next step is to add that data source as a +new class in _fm_IDL.xml_. + +indexterm:[fm_IDL.xml] +indexterm:[fieldmapper] +indexterm:[report sources] + +You will need to add the following attributes for the class definition: + +* *id*. You should follow a consistent naming convention for your class names +that won't create conflicts in the future with any standard classes added in +future upgrades. Evergreen normally names each class with the first letter of +each word in the schema and table names. You may want to add a local prefix or +suffix to your local class names. +* *controller="open-ils.cstore"* +* *oils_obj:fieldmapper="extend_reporter::long_name_of_view"* +* *oils_persist:readonly="true"* +* *reporter:core="true"* (if you want this to show up as a "core" reporting source) +* *reporter:label*. This is the name that will appear on the data source list in +the Evergreen reporter. +* *oils_persist:source_definition*. If this is an IDL-only view, add the SQL query +here. You don't need this attribute if your class is based on a PostgreSQL view +or table. +* *oils_persist:tablename="schemaname.viewname or tablename"*. If this class is +based on a PostgreSQL view or table, add the table name here. You don't need +this attribute if your class is an IDL-only view. + +For each column in the view or query output, add a field element and set the +following attributes. The field elements should be wrapped in a `<fields>` element: + +* *reporter:label*. This is the name that appears in the Evergreen reporter. +* *name*. This should match the column name in the view or query output. +* *reporter:datatype* (which can be id, bool, money, org_unit, int, number, +interval, float, text, timestamp, or link) + +For each linking field, add a link element with the following attributes.
The +link elements should be wrapped in a `<links>` element: + +* *field* (should match field.name) +* *reltype* ("has_a", "might_have", or "has_many") +* *map* ("") +* *key* (name of the linking field in the foreign table) +* *class* (ID of the IDL class of the table that is to be linked to) + +The following example is a class definition for the example view that was created +in the previous section. + +.example class definition for reports +------------------------------------------------------------ +<class id="erpstats" controller="open-ils.cstore" + oils_obj:fieldmapper="extend_reporter::patronstats" + oils_persist:tablename="extend_reporter.patronstats" + oils_persist:readonly="true" + reporter:core="true" + reporter:label="Patron Statistics"> + <fields oils_persist:primary="id"> + <field reporter:label="Patron ID" name="id" reporter:datatype="id"/> + <field reporter:label="Patron Type" name="ptype" reporter:datatype="text"/> + <field reporter:label="Registration Library" name="reg_lib" reporter:datatype="text"/> + <field reporter:label="Gender" name="gender" reporter:datatype="text"/> + <field reporter:label="Age Group" name="age_group" reporter:datatype="text"/> + <field reporter:label="Age" name="age" reporter:datatype="int"/> + <field reporter:label="Home Library" name="home_lib" reporter:datatype="org_unit"/> + <field reporter:label="Create Date" name="create_date" reporter:datatype="timestamp"/> + <field reporter:label="Expire Date" name="expire_date" reporter:datatype="timestamp"/> + <field reporter:label="Balance Owed" name="balance_owed" reporter:datatype="money"/> + </fields> + <links> + <link field="home_lib" reltype="has_a" key="id" map="" class="aou"/> + </links> +</class> +------------------------------------------------------------ + +NOTE: _fm_IDL.xml_ is used by other core Evergreen DAO services, including cstore +and permacrud, so changes to this file can affect the entire Evergreen +application, not just the reporter. After making changes to fm_IDL.xml, it is a good +idea to ensure that it is valid XML by using a utility such as *xmllint* – a +syntax error can render much of Evergreen nonfunctional. Set up a good change +control system for any changes to fm_IDL.xml. You will need to keep a separate +copy of your local class definitions so that you can reapply the changes to +_fm_IDL.xml_ after Evergreen upgrades. + +== Restart the affected services to see the new data source in the reporter == + +The following steps are needed for Evergreen to recognize the changes to +_fm_IDL.xml_: + +. Copy the updated _fm_IDL.xml_ into place: ++ +------------- +cp fm_IDL.xml /openils/conf/. +------------- ++ +. (Optional) Make the reporter version of fm_IDL.xml match the core version. +Evergreen systems supporting only one interface language will normally find +that _/openils/var/web/reports/fm_IDL.xml_ is a symbolic link pointing to +_/openils/conf/fm_IDL.xml_, so no action will be required. However, systems +supporting multiple interfaces will have a different version of _fm_IDL.xml_ in +the _/openils/var/web/reports_ directory.
The _right_ way to update this is to +go through the Evergreen internationalization build process to create the +entity form of _fm_IDL.xml_ and the updated _fm_IDL.dtd_ files for each +supported language. However, that is outside the scope of this document. If you +can accept the reporter interface supporting only one language, then you can +simply copy your updated version of _fm_IDL.xml_ into the +_/openils/var/web/reports_ directory: ++ +------------- +cp /openils/conf/fm_IDL.xml /openils/var/web/reports/. +------------- ++ +. As the *opensrf* user, run Autogen to update the Javascript versions of +the fieldmapper definitions. ++ +------------- +/openils/bin/autogen.sh +------------- ++ +. As the *opensrf* user, restart services: ++ +------------- +osrf_control --localhost --restart-services +------------- ++ +. As the *root* user, restart the Apache web server: ++ +------------- +service apache2 restart +------------- ++ +. As the *opensrf* user, restart the Evergreen reporter. You may need to modify +this command depending on your system configuration and PID path: ++ +------------ +opensrf-perl.pl -l -action restart -service open-ils.reporter \ +-config /openils/conf/opensrf_core.xml -pid-dir /openils/var/run +------------ ++ +. Restart the Evergreen staff client, or use *Admin --> For Developers --> + Clear Cache* + diff --git a/docs/modules/reports/pages/reporter_cloning_shared_templates.adoc b/docs/modules/reports/pages/reporter_cloning_shared_templates.adoc new file mode 100644 index 0000000000..3d4b8ba09a --- /dev/null +++ b/docs/modules/reports/pages/reporter_cloning_shared_templates.adoc @@ -0,0 +1,42 @@ += Cloning Shared Templates = +:toc: + +indexterm:[reports, cloning] + +This chapter describes how to make local copies of shared templates for routine +reports or as a starting point for customization.
When creating a new template +it is a good idea to review the shared templates first: even if the exact +template you need does not exist it is often faster to modify an existing +template than to build a brand new one. A Local System Administrator account is +required to clone templates from the _Shared Folders_ section and save them to _My +Folders_. + +The steps below assume you have already created at least one _Templates_ folder. +If you haven’t done this, please see +xref:reports:reporter_folder.adoc#reporter_creating_folders[Creating Folders]. + +. Access the reports interface from _Administration_ -> _Reports_ +. Under _Shared Folders_ expand the _Templates_ folder and the subfolder of the +report you wish to clone. To expand the folders click on the grey arrow or +folder icon. Do not click on the blue underlined hyperlink. +. Click on the subfolder. +. Select the template you wish to clone. From the dropdown menu choose _Clone +selected templates_, then click _Submit_. ++ +NOTE: By default Evergreen only displays the first 10 items in any folder. To view +all content, change the Limit output setting from 10 to All. ++ +. Choose the folder where you want to save the cloned template, then click +_Select Folder_. Only template folders created with your account will be visible. +If there are no folders to choose from please see +xref:reports:reporter_folder.adoc#reporter_creating_folders[Creating Folders]. + +. The cloned template opens in the template editor. From here you may modify +the template by adding, removing, or editing fields and filters as described in +xref:reports:reporter_create_templates.adoc#reporter_creating_templates[Creating Templates]. _Template Name_ and +_Description_ can also be edited. When satisfied with your changes click _Save_. + +. Click _OK_ in the resulting confirmation windows. + +Once saved it is not possible to edit a template. To make changes, clone a +template and change the clone. 
diff --git a/docs/modules/reports/pages/reporter_create_templates.adoc b/docs/modules/reports/pages/reporter_create_templates.adoc new file mode 100644 index 0000000000..73d2417d70 --- /dev/null +++ b/docs/modules/reports/pages/reporter_create_templates.adoc @@ -0,0 +1,289 @@ +[[reporter_creating_templates]] += Creating Templates = +:toc: + +indexterm:[reports, creating templates] + +Once you have created a folder, the next step in building a report is to create +or clone a template. Templates allow you to run a report more than once without +building it anew every time, by changing definitions to suit current +requirements. For example, you can create a shared template that reports on +circulation at a given library. Then, other libraries can use your template and +simply select their own library when they run the report. + +It may take several tries to refine a report to give the output that you want. +It can be useful to plan out your report on paper before getting started with +the reporting tool. Group together related fields and try to identify the key +fields that will help you select the correct source. + +It may be useful to create complex queries in several steps. For example, first +add all fields from the table at the highest source level. Run a report and check +to see that you get results that seem reasonable. Then clone the report, add any +filters on fields at that level and run another report. Then drill down to the +next table and add any required fields. Run another report. Add any filters at +that level. Run another report. Continue until you’ve drilled down to all the +fields you need and added all the filters. This might seem time consuming and +you will end up cloning your initial report several times. However, it will help +you to check the correctness of your results, and will help to debug if you run +into problems because you will know exactly what changes caused the problem. 
+Also consider adding extra fields in the intermediate steps to help you check +your results for correctness. + +This example illustrates creating a template for circulation statistics. This is +an example of the most basic template that you can create. The steps required to +create a template are the same every time, but the tables chosen, how the data +is transformed and displayed, and the filters used will vary depending on your +needs. + +== Choosing Report Fields == + +indexterm:[reports, creating templates, choosing reports fields] + +. Click on the My Folder template folder where you want the template to be saved. ++ +image::media/create-template-1.png[create-template-1] ++ +. Click on _Create a new Template for this folder_. ++ +image::media/create-template-2.png[create-template-2] ++ +. You can now see the template creating interface. The upper half of the screen +is the _Database Source Browser_. The top left hand pane contains the database +_Sources_ drop-down list. This is the list of tables available as a starting point +for your report. Commonly used sources are _Circulation_ (for circ stats and +overdue reports), _ILS User_ (for patron reports), and _Item_ (for reports on a +library's holdings). ++ +image::media/create-template-3.png[create-template-3] ++ +The _Enable source nullability_ checkbox below the sources list is for advanced +reporting and should be left unchecked by default. ++ +. Select _Circulation_ in the _Sources_ dropdown menu. Note that the _Core +Sources_ for reporting are listed first; however, it is possible to access all +available sources at the bottom of this dropdown menu. You may only specify one +source per template. ++ +image::media/create-template-4.png[create-template-4] ++ +. Click on _Circulation_ to retrieve all the field names in the _Field Name_ pane. +Note that the _Source Specifier_ (above the middle and right panes) shows the +path that you took to get to the specific field.
++ +image::media/create-template-5.png[create-template-5] ++ +. Select _Circ ID_ in the middle _Field Name_ pane, and _Count Distinct_ from the +right _Field Transform_ pane. The _Field Transform_ pane is where you choose how +to manipulate the data from the selected fields. You are counting the number of +circulations. ++ +indexterm:[reports, field transform] ++ +image::media/create-template-6.png[create-template-6] ++ +_Field Transforms_ have either an _Aggregate_ or _Non-Aggregate_ output type. +See the section called +xref:reports:reporter_template_terminology.adoc#field_transforms[Field Transforms] for more about +_Count_, _Count Distinct_, and other transform options. ++ +. Click _Add Selected Fields_ underneath the _Field Transform_ pane to add this +field to your report output. Note that _Circ ID_ now shows up in the bottom left +hand pane under the _Displayed Fields_ tab. ++ +image::media/create-template-7.png[create-template-7] ++ +. _Circ ID_ will be the column header in the report output. You can rename +default display names to something more meaningful. To do so in this example, +select the _Circ ID_ row and click _Alter Display Header_. ++ +image::media/create-template-8.png[create-template-8] ++ +Double-clicking on the displayed field name is a shortcut to altering the +display header. ++ +. Type in the new column header name, for example _Circ count_ and click _OK_. ++ +image::media/create-template-9.png[create-template-9] ++ +. Add other data to your report by going back to the _Sources_ pane and selecting +the desired fields. In this example, we are going to add _Circulating Item --> +Shelving Location_ to further refine the circulation report. ++ +In the top left hand _Sources_ pane, expand _Circulation_. Depending on your +computer you will either click on the _+_ sign or on an arrow to expand the tree. ++ +image::media/create-template-10.png[create-template-10] ++ +Click on the _+_ or arrow to expand _Circulating Item_.
Select +_Shelving Location_. ++ +image::media/create-template-11.png[create-template-11] ++ +When you are creating a template take the shortest path to the field you need in +the left hand _Sources_ pane. Sometimes it is possible to find the same field name +further in the file structure, but the shortest path is the most efficient. ++ +In the _Field Name_ pane select _Name_. ++ +image::media/create-template-12.png[create-template-12] ++ +In the upper right _Field Transform_ pane, select _Raw Data_ and click _Add Selected +Fields_. Use _Raw Data_ when you do not wish to transform field data in any manner. ++ +image::media/create-template-13.png[create-template-13] ++ +_Name_ will appear in the bottom left pane. Select the _Name_ row and click _Alter +Display Header_. ++ +image::media/create-template-15.png[create-template-15] ++ +. Enter a new, more descriptive column header, for example, _Shelving location_. +Click _OK_. ++ +image::media/create-template-16.png[create-template-16] ++ +. Note that the order of rows (top to bottom) will correspond to the order of +columns (left to right) on the final report. Select _Shelving location_ and click +on _Move Up_ to move _Shelving location_ before _Circ count_. ++ +image::media/create-template-17.png[create-template-17] ++ +. Return to the _Sources_ pane to add more fields to your template. Under +_Sources_ click _Circulation_, then select _Check Out Date/Time_ from the middle +_Field Name_ pane. ++ +image::media/create-template-19.png[create-template-19] ++ +. Select _Year + Month_ in the right hand _Field Transform_ pane and click _Add +Selected Fields_. ++ +image::media/create-template-20.png[create-template-20] ++ +. _Check Out Date/Time_ will appear in the _Displayed Fields_ pane. In the report +it will appear as a year and month _(YYYY-MM)_ corresponding to the selected transform. ++ +image::media/create-template-21.png[create-template-21] ++ +. Select the _Check Out Date/Time_ row.
Click _Alter Display Header_ and change +the column header to _Check out month_. ++ +image::media/create-template-22.png[create-template-22] ++ +. Move _Check out month_ to the top of the list using the _Move Up_ button, so +that it will be the first column in an MS Excel spreadsheet or in a chart. +Report output will sort by the first column. + +image::media/create-template-23.png[create-template-23] + +[NOTE] +====== +Note the _Change Transform_ button in the bottom left hand pane. It has the same +function as the upper right _Field Transform_ pane for fields that have already +been added. + +image::media/create-template-24.png[create-template-24] +====== + + +== Applying Filters == + +indexterm:[reports, applying filters] + +Evergreen reports access the entire database, so to limit report output to a +single library or library system you need to apply filters. + +After following the steps in the previous section you will see three fields in +the bottom left hand _Template Configuration_ pane. There are three tabs in this +pane: _Displayed Fields_ (covered in the previous section), _Base Filters_ and +_Aggregate Filters_. A filter allows you to return only the results that meet +the criteria you set. + +indexterm:[reports, applying filters, base filter] + +indexterm:[reports, applying filters, aggregate filters] + +_Base Filters_ apply to non-aggregate output types, while _Aggregate Filters_ are +used for aggregate types. In most reports you will be using the _Base Filters_ tab. +For more information on aggregate and non-aggregate types see +xref:reports:reporter_template_terminology.adoc#field_transforms[Field Transforms]. + +There are many available operators when using filters. Some examples are _Equals_, +_In list_, _Is NULL_, _Between_, _Greater than_ or _equal to_, and so on. _In list_ +is the most flexible operator, and in this case will allow you flexibility when +running a report from this template.
For example, it would be possible to run a +report on a list of timestamps (in this case trimmed to year and month +only), run a report on a single month, or run a report comparing two months. It +is also possible to set up recurring reports to run at the end of each month. + +In this example we are going to use a Base Filter to limit the report to one library’s +circulations for a specified time frame. The time frame in the template will be +configured so that you can change it each time you run the report. + +=== Using Base Filters === + +indexterm:[reports, applying filters, base filter] + +. Select the _Base Filters_ tab in the bottom _Template Configuration_ pane. + +. For this circulation statistics example, select _Circulation --> Check Out +Date/Time --> Year + Month_ and click on _Add Selected Fields_. You are going to +filter on the time period. ++ +image::media/create-template-25.png[create-template-25] ++ +. Select _Check Out Date/Time_. Click on _Change Operator_ and select _In list_ +from the dropdown menu. ++ +image::media/create-template-26.png[create-template-26] ++ +. To filter on the location of the circulation select +_Circulation --> Circulating library --> Raw Data_ and click on _Add Selected Fields_. ++ +image::media/create-template-27.png[create-template-27] ++ +. Select _Circulating Library_, click on _Change Operator_, and select _Equals_. +Note that this is a template, so the value for _Equals_ will be filled out when +you run the report. ++ +image::media/create-template-28.png[create-template-28] ++ +For multi-branch libraries, you would select _Circulating Library_ with _In list_ +as the operator, so you could specify the branch(es) when you run the report. This +leaves the template configurable to current requirements. In comparison, sometimes +you will want to hardcode true/false values into a template.
For example, deleted +bibliographic records remain in the database, so perhaps you want to hardcode +_deleted=false_, so that deleted records don’t show up in the results. You might +want to use _deleted=true_ for a template for a report on deleted items in the +last month. ++ +. Once you have configured your template, you must name and save it. Name this +template _Circulations by month for one library_. You can also add a description. +In this example, the title is descriptive enough, so a description is not necessary. +Click _Save_. ++ +image::media/create-template-29.png[create-template-29] ++ +. Click _OK_. ++ +image::media/create-template-30.png[create-template-30] ++ +. You will get a confirmation dialogue box that the template was successfully +saved. Click _OK_. ++ +image::media/create-template-31.png[create-template-31] ++ +After saving it is not possible to edit a template. To make changes you will +need to clone it and edit the clone. + +[NOTE] +========== +The bottom right hand pane is also a source specifier. By selecting one of these +rows you will limit the fields that are visible to the sources you have specified. +This may be helpful when reviewing templates with many fields. Use *Ctrl+Click* to +select or deselect items. + +image::media/create-template-32.png[create-template-32] +========== + + + diff --git a/docs/modules/reports/pages/reporter_daemon.adoc b/docs/modules/reports/pages/reporter_daemon.adoc new file mode 100644 index 0000000000..4066851821 --- /dev/null +++ b/docs/modules/reports/pages/reporter_daemon.adoc @@ -0,0 +1,62 @@ += Starting and Stopping the Reporter Daemon = +:toc: + +indexterm:[reports, starting server application] + +indexterm:[reporter, starting daemon] + +Before you can view reports, the Evergreen administrator must start +the reporter daemon from the command line of the Evergreen server. + +The reporter daemon periodically checks for requests for new reports or +scheduled reports and gets them running.
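Before starting or stopping the daemon, it can help to see whether one is already running. This is a small sketch built from the process name and default lockfile path described later in this chapter:

```shell
# Sketch: check for a running reporter daemon. clark-kent.pl shows up
# as "Clark Kent" in the process table, and its default lockfile is
# /tmp/reporter-LOCK (both defaults are described in this chapter).
ps wax | grep "Clark Kent" | grep -v grep
ls -l /tmp/reporter-LOCK 2>/dev/null || echo "no lockfile at /tmp/reporter-LOCK"
```

If the lockfile exists but no process is listed, the daemon likely died without cleaning up, and the lockfile should be removed before starting a new one.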
+ +== Starting the Reporter Daemon == + +indexterm:[reporter, starting] + +To start the reporter daemon, run the following command as the opensrf user: + +---- +clark-kent.pl --daemon +---- + +You can also specify other options: + +* *sleep=interval*: number of seconds to sleep between checks for new reports to +run; defaults to 10 +* *lockfile=filename*: where to place the lockfile for the process; defaults to +/tmp/reporter-LOCK +* *concurrency=integer*: number of reporter daemon processes to run; defaults to +1 +* *bootstrap=filename*: OpenSRF bootstrap configuration file; defaults to +/openils/conf/opensrf_core.xml + + +[NOTE] +============= +The open-ils.reporter process must be running and enabled on the gateway before +the reporter daemon can be started. + +Remember that if the server is restarted, the reporter daemon will need to be +restarted before you can view reports unless you have configured your server to +start the daemon automatically at startup time. +============= + +== Stopping the Reporter Daemon == + +indexterm:[reports, stopping server application] + +indexterm:[reporter, stopping daemon] + +To stop the reporter daemon, you have to kill the process and remove the +lockfile. Assuming you're running just a single process and that the +lockfile is in the default location, perform the following commands as the +opensrf user: + +---- +kill `ps wax | grep "Clark Kent" | grep -v grep | cut -b1-6` + +rm /tmp/reporter-LOCK +---- + diff --git a/docs/modules/reports/pages/reporter_export_usingpgAdmin.adoc b/docs/modules/reports/pages/reporter_export_usingpgAdmin.adoc new file mode 100644 index 0000000000..9fb5370362 --- /dev/null +++ b/docs/modules/reports/pages/reporter_export_usingpgAdmin.adoc @@ -0,0 +1,54 @@ += Exporting Report Templates Using phpPgAdmin = +:toc: + +indexterm:[reports, exporting templates] + +Once the data is exported, 
Database Administrators/Systems Administrators can +easily import this data into the templates folder to make it available in the +client. + +== Dump the Entire Reports Template Table == + +The data exported in this method can create issues importing into a different +system if you do not have a matching folder and owner. This method exports all +report templates created in your system. The most important fields for importing +into the new system are _name_, _description_, and _data_. Data defines the actual +structure of the report. The _owner_ and _folder_ fields will be unique to the system +they were exported from and will have to be altered to ensure they match the +appropriate owner and folder information for the new system. + +. Go to the *Reporter* schema. Report templates are located in the *Template* table. +. Click on the link to the *Template* table +. Click the *export* button at the top right of the phpPgAdmin screen +. Make sure the following are selected +.. _Data Only_ (checked) +.. _Format_: Select _CSV_ or _Tabbed_ to get the data in a text format +.. _Download_ checked +. Click the _export_ button at the bottom +. A text file will download to your local system + +== Dump Data with an SQL Statement == + + +The following statement could be used to grab the data in the folder and dump it +with the admin account as the owner and the first folder in your system. + +------------- +SELECT 1 as owner, name, description, data, 1 as folder FROM reporter.template +------------- + +or use the following to capture your folder names for export + +-------------- +SELECT 1 as owner, t.name, t.description, t.data, f.name as folder + FROM reporter.template t + JOIN reporter.template_folder f ON t.folder=f.id +-------------- + +. Run the above query +. Click the *download* link at the bottom of the page +. Select the file format (_CSV_ or _Tabbed_) +. Check _download_ +. A text file with the report template data will be downloaded. 
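On the destination system, the exported rows must be matched to an existing owner and folder before insertion. The following is a minimal PostgreSQL sketch of one way to do this, not a supported Evergreen procedure: the staging table, the file name `templates.csv`, and the owner ID `1` are all placeholder assumptions that must be adjusted to match real rows in the new system's database.

--------------
-- Hypothetical staging table matching the columns exported above.
CREATE TEMP TABLE template_import (
    name        TEXT,
    description TEXT,
    data        TEXT,
    folder_name TEXT
);

-- In psql, load the exported file (adjust path and format to your export):
-- \copy template_import FROM 'templates.csv' WITH (FORMAT csv)

-- Insert into reporter.template, mapping each row to an existing owner
-- (here a placeholder user ID of 1) and to a folder matched by name.
INSERT INTO reporter.template (owner, name, description, data, folder)
SELECT 1, i.name, i.description, i.data, f.id
  FROM template_import i
  JOIN reporter.template_folder f ON f.name = i.folder_name;
--------------

Matching folders by name, as above, only works if folders with the same names have already been created on the new system; otherwise the join will silently drop unmatched rows.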
+ + diff --git a/docs/modules/reports/pages/reporter_folder.adoc b/docs/modules/reports/pages/reporter_folder.adoc new file mode 100644 index 0000000000..239e85e69b --- /dev/null +++ b/docs/modules/reports/pages/reporter_folder.adoc @@ -0,0 +1,74 @@ +[[reporter_folders]] += Folders = +:toc: + +indexterm:[reports, folders] + +There are three main components to reports: _Templates_, _Reports_, and _Output_. +Each of these components must be stored in a folder. Folders can be private +(accessible to your login only) or shared with other staff at your library, at +other libraries in your system, or across your consortium. It is also possible to +selectively share only certain folders and/or subfolders. + +There are two parts to the folders pane. The _My Folders_ section contains folders +created with your Evergreen account. Folders that other users have shared with +you appear in the _Shared Folders_ section under the username of the sharing +account. + +image::media/folder-1.png[folder-1] + +[[reporter_creating_folders]] +== Creating Folders == + + +indexterm:[reports, folders, creating] + +Whether you are creating a report from scratch or working from a shared template, +you must first create at least one folder. + +The steps for creating folders are similar for each reporting function. It is +easier to create folders for templates, reports, and output all at once at the +beginning, though it is possible to do it before each step. This example +demonstrates creating a folder for a template. + +. Click on _Templates_ in the _My Folders_ section. +. Name the folder (this example uses _Circulation_). Select _Share_ or _Do not +share_ from the dropdown menu. +. If you want to share your folder, select who you want to share this folder +with from the dropdown menu. +. Click _Create Sub Folder_. +. Click _OK_. +. Next, create a folder for the report definition to be saved to. Click on +_Reports_. +. Repeat steps 2-5 to create a Reports folder also called _Circulation_. +. 
Finally, you need to create a folder for the report’s output to be saved in. +Click on _Output_. +. Repeat steps 2-5 to create an Output folder named _Circulation_. + + +TIP: Using a parallel naming scheme for folders in Templates, Reports, +and Output helps keep your reports organized and easier to find. + +The folders you just created will now be visible by clicking the arrows in _My +Folders_. The name of the group the folder is shared with appears in brackets +after the folder name. For example, _Circulation (BNCLF)_ is shared with the +North Coast Library Federation. If it is not a shared folder there will be +nothing after the folder name. You may create as many folders and sub-folders +as you like. + +== Managing Folders == + +indexterm:[reports, folders, managing] + +Once a folder has been created you can change the name, delete it, create a new +subfolder, or change the sharing settings. This example demonstrates changing a +folder name; the other choices follow similar steps. + +. Click on the folder that you wish to rename. +. Click _Manage Folder_. +. Select _Change folder name_ from the dropdown menu and click _Go_. +. Enter the new name and click _Submit_. +. Click _OK_. +. You will get a confirmation box that the _Action Succeeded_. Click _OK_. + + + diff --git a/docs/modules/reports/pages/reporter_generating_reports.adoc b/docs/modules/reports/pages/reporter_generating_reports.adoc new file mode 100644 index 0000000000..12859236f2 --- /dev/null +++ b/docs/modules/reports/pages/reporter_generating_reports.adoc @@ -0,0 +1,109 @@ +[[generating_reports]] += Generating Reports from Templates = +:toc: + +indexterm:[reports, generating] + +Now you are ready to run the report from the template you have created. + +. In the _My Folders_ section, click the arrow next to _Templates_ to expand this +folder and select _circulation_. ++ +image::media/generate-report-1.png[generate-report-1] ++ +. Select the box beside _Circulations by month for one library_. 
Select _Create a +new report from selected template_ from the dropdown menu. Click _Submit_. ++ +image::media/generate-report-2.png[generate-report-2] ++ +. Complete the first part of the report settings. Only _Report Name_ and _Choose a +folder_... are required fields. ++ +image::media/generate-report-3.png[generate-report-3] ++ +1) _Template Name_, _Template Creator_, and _Template Description_ are for +informational purposes only. They are hard coded when the template is created. +At the report definition stage it is not possible to change them. ++ +2) _Report Name_ is required. Reports stored in the same folder must have unique +names. ++ +3) _Report Description_ is optional but may help distinguish among similar +reports. ++ +4) _Report Columns_ lists the columns that will appear in the output. This is +derived from the template and cannot be changed during report definition. ++ +5) _Pivot Label Column_ and _Pivot Data Column_ are optional. Pivot tables are a +different way to view data. If you currently use pivot tables in MS Excel, it is +better to select an Excel output and continue using pivot tables in Excel. ++ +6) You must choose a report folder to store this report definition. Only report +folders under My Folders are available. Click on the desired folder to select it. ++ +. Select values for the _Circulation > Check Out Date/Time_. Use the calendar +widget or manually enter the desired dates, then click _Add_ to include the date +in the list. You may add multiple dates. ++ +image::media/generate-report-8.png[generate-report-8] ++ +The Transform for this field is Year + Month, so even if you choose a specific +date (2009-10-20) it will appear as the corresponding month only (2009-10). ++ +It is possible to select *relative dates*. If you select a relative date of 1 +month ago, you can schedule reports to run automatically each month. 
If you want to run +monthly reports that also show comparative data from one year ago, select +relative dates of 1 month ago and 13 months ago. ++ +. Select a value for the _Circulating Library_. +. Complete the bottom portion of the report definition interface, then click +_Save_. ++ +image::media/generate-report-10.png[generate-report-10] ++ +1) Select one or more output formats. In this example the report output will be +available as an Excel spreadsheet, an HTML table (for display in the staff +client or browser), and as a bar chart. ++ +2) If you want the report to be recurring, check the box and select the +_Recurrence Interval_ as described in +xref:reports:reporter_running_recurring_reports.adoc#recurring_reports[Recurring Reports]. +In this example, as this is a report that will only be run once, the _Recurring +Report_ box is not checked. ++ +3) Select _Run as soon as possible_ for immediate output. It is also possible to +set up reports that run automatically at future intervals. ++ +4) Optionally, enter an email address where a completion notice can be +sent. The email will contain a link to password-protected report output (staff +login required). If you have an email address in your Local System Administrator +account it will automatically appear in the email notification box. However, +you can enter a different email address or multiple addresses separated by commas. ++ +. Select a folder for the report's output. +. You will get a confirmation dialogue box that the Action Succeeded. Click _OK_. ++ +image::media/generate-report-14.png[generate-report-14] ++ +Once saved, reports remain in the chosen folder until you delete them. + +== Viewing and Editing Report Parameters == + +New options to view or edit report parameters are available from the reports folder. + +To view the parameters of a report, select the report that you want to view from the *Reports* folder, and click *View*. 
This will enable you to view the report, including links to external documentation and field hints. However, you cannot make any changes to the report. + +image::media/2_7_Enhancements_to_Reports4.jpg[Reports4] + + +To edit the parameters of a report, select the report that you want to edit from the *Reports* folder, and click *Edit*. After making changes, you can *Save [the] Report* or *Save as New*. If you *Save the Report*, any subsequent report outputs that are generated from this report will reflect the changes that you have made. + +In addition, whenever there is a pending (scheduled, but not yet started) report output, the interface will warn you that the pending output will be modified. At that point, you can either continue or choose the alternate *Save as New* option, leaving the report output untouched. + + +image::media/2_7_Enhancements_to_Reports6.jpg[Reports6] + + +If, after making changes, you select *Save as New*, then you have created a new report by cloning and amending a previously existing report. Note that if you create a new report, you will be prompted to rename the new report. Evergreen does not allow two reports with the same name to exist. To view or edit your new report, select the reports folder to which you saved it. + +image::media/2_7_Enhancements_to_Reports5.jpg[Reports5] diff --git a/docs/modules/reports/pages/reporter_running_recurring_reports.adoc b/docs/modules/reports/pages/reporter_running_recurring_reports.adoc new file mode 100644 index 0000000000..eec0f39bea --- /dev/null +++ b/docs/modules/reports/pages/reporter_running_recurring_reports.adoc @@ -0,0 +1,42 @@ +[[recurring_reports]] += Running Recurring Reports = +:toc: + +indexterm:[reports, recurring] + +Recurring reports are a useful way to save time by scheduling reports that you +run on a regular basis, such as monthly circulation and monthly patron +registration statistics. 
When you have set up a report to run on a monthly basis, +you’ll get an email informing you that the report has successfully run. You can +click on a link in the email that will take you directly to the report output. +You can also access the output through the reporter interface as described in +xref:reports:reporter_view_output.adoc#viewing_report_output[Viewing Report Output]. + +To set up a monthly recurring report, follow the procedure in +xref:reports:reporter_generating_reports.adoc#generating_reports[Generating Reports from Templates] but make the changes described below. + +. Select the Recurring Report check-box and set the recurrence interval to 1 month. +. Do not select Run ASAP. Instead schedule the report to run early on the first +day of the next month. Enter the date in _YYYY-MM-DD_ format. +. Ensure there is an email address to receive completion emails. You will +receive an email completion notice each month when the output is ready. +. Select a folder for the report’s output. +. Click Save Report. +. You will get a confirmation dialogue box that the Action Succeeded. Click OK. + +You will get an email on the 1st of each month with a link to the report output. +Clicking this link will open the output in a web browser. It is still +possible to log in to the staff client and access the output in the Output folder. + +*How do you stop or make changes to an existing recurring report?* Sometimes you may +wish to stop or make changes to a recurring report, e.g. the recurrence interval, +generation date, email address to receive the completion email, output format/folder, +or even filter values (such as the number of days overdue). You will need to +delete the current report from the report folder, then use the above procedure +to set up a new recurring report with the desired changes. Please note that +deleting a report also deletes all output associated with it. 
+ +TIP: Once you have been on Evergreen for a year, you could set up your recurring +monthly reports to show comparative data from one year ago. To do this select +relative dates of 1 month ago and 13 months ago. + diff --git a/docs/modules/reports/pages/reporter_template_enhancements.adoc b/docs/modules/reports/pages/reporter_template_enhancements.adoc new file mode 100644 index 0000000000..31b948c809 --- /dev/null +++ b/docs/modules/reports/pages/reporter_template_enhancements.adoc @@ -0,0 +1,30 @@ += Template Enhancements = +:toc: + +== Documentation URL == + +You can add a link to local documentation that can help staff create a report template. To add documentation to a report template, click *Admin* -> *Local Administration* -> *Reports*, and create a new report template. A new field, *Documentation URL*, appears in the *Template Configuration* panel. Enter a URL that points to relevant documentation. + + +image::media/2_7_Enhancements_to_Reports1.jpg[Reports1] + + +The link to this documentation will also appear in your list of report templates. + + +image::media/2_7_Enhancements_to_Reports2a.jpg[Reports2a] + +== Field Hints == + +Descriptive information about fields or filters in a report template can be added to the *Field Hints* portion of the *Template Configuration* panel. For example, a circulation report template might include the field, *Circ ID*. You can add content to the *Field hints* to further define this field for staff and provide a reminder about the type of information that they should select for this field. + + +To view a field hint, click the *Column Picker*, and select *Field Hint*. The column will be added to the display. + +image::media/2_7_Enhancements_to_Reports2.jpg[Reports2] + + +To add or edit a field hint, select a filter or field, and click *Change Field Hint*. Enter text, and click *Ok*. 
+ + +image::media/2_7_Enhancements_to_Reports3.jpg[Reports3] diff --git a/docs/modules/reports/pages/reporter_template_terminology.adoc b/docs/modules/reports/pages/reporter_template_terminology.adoc new file mode 100644 index 0000000000..81185d9628 --- /dev/null +++ b/docs/modules/reports/pages/reporter_template_terminology.adoc @@ -0,0 +1,124 @@ += Template Terminology = +:toc: + +== Data Types == + +indexterm:[reports, data types] + +The information in Evergreen's database can be classified in nine data types, formats that describe the type of data and/or its use. These were represented by text-only labels in prior versions of Evergreen. Evergreen 3.0 has replaced the text labels with icons. When building templates in _Reports_, you will find these icons in the Field Name Pane of the template creation interface. + +=== timestamp === +image::media/datatypes_timestamp.png[] + +An exact date and time (year, month, day, hour, minutes, and seconds). Remember to select the appropriate date/time transform. Raw Data includes second and timezone information, which is usually more than is required for a report. + +=== link === + +image::media/datatypes_link.png[] + +A link to another database table. Link outputs a number that is a meaningful reference for the database but not of much use to a human user. You will usually want to drill further down the tree in the Sources pane and select fields from the linked table. However, in some instances you might want to use a link field. For example, to count the number of patrons who borrowed items you could do a count on the Patron link data. + +=== text === +image::media/datatypes_text.png[] + +A field of text. You will usually want to use the Raw Data transform. + +=== bool === +image::media/datatypes_bool.png[] + +True or False. Commonly used to filter out deleted item or patron records. 
+ +=== org_unit === +image::media/datatypes_orgunit.png[] + +Organizational Unit - a number representing a library, library system, or federation. When you want to filter on a library, make sure that the field is an org_unit or id data type. + +=== id === + +image::media/datatypes_id.png[] + +A unique number assigned by the database to identify each record. These numbers are meaningful references for the database but not of much use to a human user. Use them in displayed fields when counting records, or in filters. + +=== money === + +image::media/datatypes_money.png[] + +A monetary amount. + +=== int === + +image::media/datatypes_int.png[] + +Integer (a number). + +=== interval === + +image::media/datatypes_interval.png[] + +A period of time. + +[[field_transforms]] +== Field Transforms == + +indexterm:[reports, field transforms] + +A _Field Transform_ tells the reporter how to process a field for output. +Different data types have different transform options. + +indexterm:[reports, field transforms, raw data] + +*Raw Data*. To display a field exactly as it appears in the database use the +_Raw Data_ transform, available for all data types. + +indexterm:[reports, field transforms, count] + +indexterm:[reports, field transforms, count distinct] + +*Count and Count Distinct*. These transforms apply to the _id_ data type and +are used to count database records (e.g. for circulation statistics). Use _Count_ +to tally the total number of records. Use _Count Distinct_ to count the number +of unique records, removing duplicates. + +To demonstrate the difference between _Count_ and _Count Distinct_, consider an +example where you want to know the number of active patrons in a given month, +where ``active'' means they borrowed at least one item. Each circulation is linked +to a _Patron ID_, a number identifying the patron who borrowed the item. 
If we use +the _Count Distinct_ transform for Patron IDs we will know the number of unique +patrons who circulated at least one book (2 patrons in the table below). If +instead, we use _Count_, we will know how many books were circulated, since every +circulation is linked to a _patron ID_ and duplicate values are also counted. To +identify the number of active patrons in this example the _Count Distinct_ +transform should be used. + +[options="header,footer"] +|==================================== +|Title |Patron ID |Patron Name +|Harry Potter and the Chamber of Secrets |001 |John Doe +|Northern Lights |001 |John Doe +|Harry Potter and the Philosopher’s Stone |222 |Jane Doe +|==================================== + +indexterm:[reports, field transforms, output type] + +*Output Type*. Note that each transform has either an _Aggregate_ or +_Non-Aggregate_ output type. + +indexterm:[reports, field transforms, output type, non-aggregate] + +indexterm:[reports, field transforms, output type, aggregate] + +Selecting a _Non-Aggregate_ output type will return one row of output in your +report for each row in the database. Selecting an Aggregate output type will +group together several rows of the database and return just one row of output +with, say, the average value or the total count for that group. Other common +aggregate types include minimum, maximum, and sum. + +When used as filters, non-aggregate and aggregate types correspond to _Base_ and +_Aggregate_ filters respectively. To see the difference between a base filter and +an aggregate filter, imagine that you are creating a report to count the number +of circulations in January. This would require a base filter to specify the +month of interest because the month is a non-aggregate output type. Now imagine +that you wish to list all items with more than 25 holds. This would require an +aggregate filter on the number of holds per item because you must use an +aggregate output type to count the holds. 
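The distinction between the two transforms can also be seen directly in SQL. The following is a self-contained PostgreSQL sketch built on the three example rows above, not an actual Evergreen reporter query:

--------------
-- Three circulations by two distinct patrons (IDs 001, 001, 222):
SELECT COUNT(patron_id)          AS total_circulations,
       COUNT(DISTINCT patron_id) AS active_patrons
  FROM (VALUES ('001'), ('001'), ('222')) AS circ(patron_id);
--------------

Here _Count_ (`COUNT(patron_id)`) returns 3, one per circulation row, while _Count Distinct_ (`COUNT(DISTINCT patron_id)`) returns 2, one per unique patron — exactly the behavior described above.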
+ + diff --git a/docs/modules/reports/pages/reporter_view_output.adoc b/docs/modules/reports/pages/reporter_view_output.adoc new file mode 100644 index 0000000000..dcba21c09c --- /dev/null +++ b/docs/modules/reports/pages/reporter_view_output.adoc @@ -0,0 +1,41 @@ +[[viewing_report_output]] += Viewing Report Output = +:toc: + +indexterm:[reports, output] + +indexterm:[reports, output, tabular] + +indexterm:[reports, output, Excel] + +indexterm:[reports, output, spreadsheet] + +When a report runs, Evergreen sends an email with a link to the output to the +address defined in the report. Output is also stored in the specified Output +folder and will remain there until manually deleted. + +. To view report output in the staff client, open the reports interface from +_Administration --> Local Administration --> Reports_ +. Click on Output to expand the folder. Select _Circulation_ (where you just +saved the circulation report output). ++ +image::media/view-output-1.png[view-output-1] ++ +. _View report output_ is the default selection in the dropdown menu. Select +_Recurring Monthly Circ by Location_ by clicking the checkbox and click _Submit_. ++ +image::media/view-output-2.png[view-output-2] ++ +. A new tab will open for the report output. Select either _Tabular Output_ or +_Excel Output_. If _Bar Charts_ was selected during report definition, the chart +will also appear. +. Tabular output looks like this: ++ +image::media/view-output-4.png[view-output-4] ++ +. If you want to manipulate, filter, or graph this data, Excel output would be +more useful. Excel output will generate a ".xlsx" file. 
Excel output looks like this in Excel: ++ +image::media/view-output-5.png[view-output-5] + + diff --git a/docs/modules/serials/_attributes.adoc b/docs/modules/serials/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/serials/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC2.jpg b/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC2.jpg new file mode 100644 index 0000000000..4c775be36b Binary files /dev/null and b/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC2.jpg differ diff --git a/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC5.jpg b/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC5.jpg new file mode 100644 index 0000000000..f1dd239985 Binary files /dev/null and b/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC5.jpg differ diff --git a/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC7.jpg b/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC7.jpg new file mode 100644 index 0000000000..574aaf0f30 Binary files /dev/null and b/docs/modules/serials/assets/images/media/Group_Serials_Issues_in_the_OPAC7.jpg differ diff --git a/docs/modules/serials/assets/images/media/serials_ct1.PNG b/docs/modules/serials/assets/images/media/serials_ct1.PNG new file mode 100644 index 0000000000..5f78c5a162 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_ct1.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_extra1.PNG b/docs/modules/serials/assets/images/media/serials_extra1.PNG new file mode 100644 index 0000000000..0bdbfe74ac Binary files /dev/null and 
b/docs/modules/serials/assets/images/media/serials_extra1.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_extra2.PNG b/docs/modules/serials/assets/images/media/serials_extra2.PNG new file mode 100644 index 0000000000..af795b91c9 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_extra2.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_mfhd1.PNG b/docs/modules/serials/assets/images/media/serials_mfhd1.PNG new file mode 100644 index 0000000000..8b0f1c5185 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_mfhd1.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_mfhd3.PNG b/docs/modules/serials/assets/images/media/serials_mfhd3.PNG new file mode 100644 index 0000000000..3b652d48b4 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_mfhd3.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_mfhd6.PNG b/docs/modules/serials/assets/images/media/serials_mfhd6.PNG new file mode 100644 index 0000000000..222b1e6537 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_mfhd6.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_routing1.PNG b/docs/modules/serials/assets/images/media/serials_routing1.PNG new file mode 100644 index 0000000000..12aba412f7 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_routing1.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub0.PNG b/docs/modules/serials/assets/images/media/serials_sub0.PNG new file mode 100644 index 0000000000..5efad47c43 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub0.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub1.PNG b/docs/modules/serials/assets/images/media/serials_sub1.PNG new file mode 100644 index 0000000000..34435de02c Binary files /dev/null and 
b/docs/modules/serials/assets/images/media/serials_sub1.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub10.PNG b/docs/modules/serials/assets/images/media/serials_sub10.PNG new file mode 100644 index 0000000000..ca2f1c3010 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub10.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub11.PNG b/docs/modules/serials/assets/images/media/serials_sub11.PNG new file mode 100644 index 0000000000..a190a81c69 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub11.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub2.PNG b/docs/modules/serials/assets/images/media/serials_sub2.PNG new file mode 100644 index 0000000000..e2c808cff5 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub2.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub3.PNG b/docs/modules/serials/assets/images/media/serials_sub3.PNG new file mode 100644 index 0000000000..89ef1be219 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub3.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub4.PNG b/docs/modules/serials/assets/images/media/serials_sub4.PNG new file mode 100644 index 0000000000..e749b25ba8 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub4.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub5.PNG b/docs/modules/serials/assets/images/media/serials_sub5.PNG new file mode 100644 index 0000000000..33ffd0429b Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub5.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub6.PNG b/docs/modules/serials/assets/images/media/serials_sub6.PNG new file mode 100644 index 0000000000..44ebb6ee90 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub6.PNG differ 
diff --git a/docs/modules/serials/assets/images/media/serials_sub7.PNG b/docs/modules/serials/assets/images/media/serials_sub7.PNG new file mode 100644 index 0000000000..48e7e5ee76 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub7.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub8.PNG b/docs/modules/serials/assets/images/media/serials_sub8.PNG new file mode 100644 index 0000000000..be1812e900 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub8.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_sub9.PNG b/docs/modules/serials/assets/images/media/serials_sub9.PNG new file mode 100644 index 0000000000..f34c61783f Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_sub9.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_wizard1.PNG b/docs/modules/serials/assets/images/media/serials_wizard1.PNG new file mode 100644 index 0000000000..9b6345dabe Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_wizard1.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_wizard2.PNG b/docs/modules/serials/assets/images/media/serials_wizard2.PNG new file mode 100644 index 0000000000..96c430908b Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_wizard2.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_wizard3.PNG b/docs/modules/serials/assets/images/media/serials_wizard3.PNG new file mode 100644 index 0000000000..ccd7ba8396 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_wizard3.PNG differ diff --git a/docs/modules/serials/assets/images/media/serials_wizard4.PNG b/docs/modules/serials/assets/images/media/serials_wizard4.PNG new file mode 100644 index 0000000000..50f0c98002 Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_wizard4.PNG differ diff --git 
a/docs/modules/serials/assets/images/media/serials_wizard5.PNG b/docs/modules/serials/assets/images/media/serials_wizard5.PNG
new file mode 100644
index 0000000000..6b94925a7f
Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_wizard5.PNG differ
diff --git a/docs/modules/serials/assets/images/media/serials_wizard6.PNG b/docs/modules/serials/assets/images/media/serials_wizard6.PNG
new file mode 100644
index 0000000000..7184b6c27b
Binary files /dev/null and b/docs/modules/serials/assets/images/media/serials_wizard6.PNG differ
diff --git a/docs/modules/serials/nav.adoc b/docs/modules/serials/nav.adoc
new file mode 100644
index 0000000000..53d37ed18b
--- /dev/null
+++ b/docs/modules/serials/nav.adoc
@@ -0,0 +1,10 @@
+* xref:serials:A-intro.adoc[Serials]
+** xref:serials:B-serials_admin.adoc[Serials Administration]
+** xref:serials:C-serials_workflow.adoc[Serials Module]
+** xref:serials:D-Receiving.adoc[Receiving]
+** xref:serials:E-routing_lists.adoc[Routing Lists]
+** xref:serials:F-Special_issue.adoc[Special Issues]
+** xref:serials:G-binding.adoc[Binding Issues]
+** xref:serials:H-holdings_statements.adoc[Holdings]
+** xref:serials:Group_Serials_Issues_in_the_OPAC_2.2.adoc[Group Serials Issues in the OPAC]
+
diff --git a/docs/modules/serials/pages/A-intro.adoc b/docs/modules/serials/pages/A-intro.adoc
new file mode 100644
index 0000000000..88bb993eb1
--- /dev/null
+++ b/docs/modules/serials/pages/A-intro.adoc
@@ -0,0 +1,6 @@
+= Serials =
+:toc:
+
+== MFHD Records ==
+
+MARC Format for Holdings Data (MFHD) records display in the catalog in addition to the holdings statements generated by Evergreen from subscriptions created in the Serials Module. The MFHDs are editable as MARC, but the holdings statements generated from the control view are system generated. Multiple MFHDs can be created, and each is tied to an Organizational Unit.
diff --git a/docs/modules/serials/pages/B-serials_admin.adoc b/docs/modules/serials/pages/B-serials_admin.adoc new file mode 100644 index 0000000000..d645371cfb --- /dev/null +++ b/docs/modules/serials/pages/B-serials_admin.adoc @@ -0,0 +1,145 @@ += Serials Administration = +:toc: + +The serials module can be administered under a new menu option: *Administration->Serials Administration*. The new Serials Administration menu currently allows staff to configure _Serial Copy Templates_ and _Pattern Templates_. + + +== Serial Copy Templates == +Serials copy templates enable you to specify item attributes that should be applied by default to copies of serials. Serials copy templates are associated with distributions in a subscription and are applied when serials copies are received. Serial copy templates can also be used as a binding template to apply specific item attributes to copies that are being bound together. + + +=== Creating a Serial Copy Template === + +To create a serial copy template, go to *Administration->Serials Administration->Serial Copy Templates*: + +. Click *Create Template* in the upper-right hand corner. A dialog box will appear. +. Within the dialog box assign the template a _Template Name_ and set any item attributes that you want in the template: +.. *Circulate?*: indicate if the items can circulate. +.. *Circulation Library*: Select the circulation library from the drop down menu. +.. *Shelving Location*: Select the shelving location for the item from the drop down menu. This menu is populated from the locations created in Admin->Local Administration->Copy Locations Editor. +.. *Circulation Modifier*: Select the circulation modifier for the item from the drop down menu. This menu is populated from the modifiers created in Admin->Server Administration->Circulation Modifiers. +.. *Loan Duration*: Select a loan duration from the drop down menu. 
This menu is populated from the loan durations created in Admin->Server Administration->Circulation Duration Rules. This field is required. +.. *Circulate as Type*: Select a Type of record from the drop down menu if you want to control circulation based on the Type fixed field in the MARC bibliographic record. Most libraries choose to control circulation based on Circulation Modifier instead of Circulate as Type in Evergreen. +.. *Holdable?*: Yes or No-- indicate if holds can be placed on the items. +.. *Age-based Hold Protection*: Select a rule from the drop down menu. Age-based hold protection allows you to control the extent to which an item can circulate after it has been received. For example, you may want to protect new copies of a serial so that only patrons who check out the item at your branch can use it. +.. *Fine Level*: Select a fine level from the drop down menu. This menu is populated from the fine levels created in Admin->Server Administration->Circulation Recurring Fine Rules. This field is required. +.. *Floating*: Select a Floating policy from the drop down menu if the items belong to a floating collection. +.. *Status*: Select a copy status from the Status drop down menu. This menu is populated from the statuses created in Admin → Server Administration → Copy Statuses. +.. *Reference?*: Yes or No-- indicate if the item is a reference item. +.. *OPAC Visible?*: Yes or No-- indicate if the item should be visible in the OPAC. +.. *Price*: Enter the price of the item. +.. *Deposit?*: Yes or No-- indicate if patrons must place a deposit on the copy before they can use it. +.. *Deposit Amount*: Enter a Deposit Amount if patrons must place a deposit on the copy before they can use it. +.. *Quality*: Good or Damaged-- indicate the physical condition of the item. +. Click *Save*. +. The new serial copy template will now appear in the list of templates. 
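Conceptually, a serial copy template is a bundle of default item attributes that is merged into each copy when it is received. The sketch below illustrates that idea; it is illustrative Python, not Evergreen's code, and the field names are hypothetical:

```python
# Sketch only: a serial copy template modeled as a dictionary of
# default item attributes (field names are hypothetical, not
# Evergreen's schema).
def apply_template(template: dict, receive_values: dict) -> dict:
    """Start from the template defaults, then layer on the values
    captured at receive time (barcode, call number, etc.)."""
    copy = dict(template)
    copy.update(receive_values)
    return copy

monthly_template = {
    "circulate": True,
    "shelving_location": "Periodicals",
    "circ_modifier": "serial",
    "loan_duration": "normal",   # required attribute
    "fine_level": "medium",      # required attribute
    "opac_visible": True,
}

received_copy = apply_template(
    monthly_template,
    {"barcode": "31234000123456", "call_number": "PER BON APPETIT"},
)
```

Binding reuses the same mechanism: applying a binding template simply swaps in a different set of defaults for the bound copy.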
+ +image::media/serials_ct1.PNG[] + + +=== Modifying a Serial Copy Template === + +To modify a Serial Copy Template: + +. Select the template to modify by checking the box for the template or clicking anywhere on the template row. Go to *Actions->Edit Template* or _right-click_ on the template row and select *Edit Template*. +. The dialog box will appear. Make any changes to the item attributes and click *Save*. + + +=== Deleting a Serial Copy Template === + +To delete a Serial Copy Template: + +. Select the template to modify by checking the box for the template or clicking anywhere on the template row. +. Go to *Actions->Delete Template* or _right-click_ on the template row and select *Delete Template*. + +NOTE: Serials copy templates that are being used by subscriptions cannot be deleted. + + +== Prediction Pattern Templates == + +Prediction pattern templates allow you to create templates for prediction patterns that can be shared with other staff users in your library branch, system, or throughout the consortium. Prediction patterns are used to predict issues on serials subscriptions. Templates can be created in the Administration module, as described below, and can also be created and shared directly in a subscription. + + +=== Creating a Prediction Pattern Template === +To create a template, go to *Administration->Serials Administration->Prediction Pattern Templates*: + +. Click *New Record* in the upper-right hand corner. A dialog box called _Prediction Pattern Template_ will appear. +. Assign a _Name_ to the template, such as "Monthly", to create a monthly publication pattern. +. Next to Pattern Code click *Pattern Wizard*. The Prediction Pattern Code Wizard will appear. This wizard has five tabs that will step you through creating a prediction pattern for your publication. + +.. Enumeration Labels +... 
_If the publication does not use enumeration and instead only uses dates_, select the radio button adjacent to _Use Calendar Dates Only_, click *Next* in the upper right-hand corner, and go to the _Chronology Display_ step in this document. +... _If the publication uses enumerations (commonly used)_, select the radio button adjacent to _Use enumerations_. The enumerations conform to $a-$h of the 853, 854, and 855 MARC tags. +... Enter the first level of enumeration in the field labeled _Level 1_. A common first level enumeration is volume, or "v.". If there are additional levels of enumeration, click *Add Level*. +... A second field labeled _Level 2_ will appear. Enter the second level of enumeration in the field. A common second level enumeration is number, or "no.". +.... Select if the second level of enumeration is a set _Number_, _Varies_, or is _Undetermined_. +.... If _Number_ is selected (commonly used): +..... Enter the number of bibliographic units per next higher level (e.g. 12 no. per v.). This conforms to $u in the 853, 854, and 855 MARC tags. +..... Select the radio button for the enumeration scheme: _Restarts at unit completion_ or _Increments continuously_. This conforms to $v in the 853, 854, and 855 MARC tags. +.... You can add up to six levels of enumeration. +... Check the box adjacent to _Add alternative enumeration_ if the publication uses an alternative enumeration. +... Check the box adjacent to _First level enumeration changes during subscription year_ to configure calendar changes if needed. A common calendar change is for the first level of enumeration to increment every January. +.... Select when the change occurs from the drop down menu: _Start of the month_, _Specific date_, or _Start of season_. +.... From the corresponding drop down menu select the specific point in time at which the first level of enumeration should change. +.... Click *Add more* to add additional calendar changes if needed. +... 
When you have completed the enumerations, click *Next* in the upper right-hand corner. + + image::media/serials_wizard1.PNG[] + + .. Chronology Display +... To use chronological captions for the subscription, check the box adjacent to _Use Chronology Captions?_ +... Choose a chronological unit for the first level. If you want to display the term for the unit selected, such as "Year" and "Month", next to the chronology caption in the catalog, then select the checkbox for _Display level descriptor?_ (not commonly used). +... To add additional levels of chronology for display, click *Add level*. +.... Note: Each level that you add must be a smaller chronological unit than the previous level (e.g. Level 1 = Year, Level 2 = Month). +... Check the box adjacent to _Use Alternative Chronology Captions?_ if the publication uses an alternative chronology. +... After you have completed the chronology captions, click *Next* in the upper-right hand corner. + + image::media/serials_wizard2.PNG[] + + .. MFHD Indicators +... *Compression Display Options*: Select the appropriate option for compressing or expanding your captions in the catalog from the compressibility and expandability drop down menu. The entries in the drop down menu correspond to the indicator codes and the subfield $w in the 853 tag. Compressibility and expandability correspond to the first indicator in the 853 tag. +... *Caption Evaluation*: Choose the appropriate caption evaluation from the drop down menu. Caption Evaluation corresponds to the second indicator in the 853 tag. +... Click *Next* in the upper right hand corner. + + image::media/serials_wizard3.PNG[] + + .. Frequency and Regularity +... Indicate the frequency of the publication by selecting one of the following radio buttons: +.... *Pre-selected* and choose the frequency from the drop down menu. +.... *Use number of issues per year* and enter the total number of issues in the field. +... 
If the publication has combined, skipped, or special issues, that should be accounted for in the publication pattern, check the box adjacent to _Use specific regularity information?_. +.... From the first drop down menu, select the appropriate publication information: _Combined_, _Omitted_, or _Published_ issues. +.... From the subsequent drop down menus, select the appropriate frequency and issue information. +.... Add additional regularity rows as needed. +.... For a Combined issue, enter the relevant combined issue code. E.g., for a monthly combined issue, enter 02/03 to specify that February and March are combined. +... After you have completed frequency and regularity information, click *Next* in the upper-right hand corner. + + +image::media/serials_wizard4.PNG[] + + +.. Review +... Review the Pattern Summary to verify that the pattern is correct. You can also click on the expand arrow icon to view the _Raw Pattern Code_. +... If you want to share this pattern, assign it a name and select if it will be shared with your library, the system, or across the consortium. +... Click *Save*. + + +image::media/serials_wizard5.PNG[] + + +. Back in the Prediction Pattern Template dialog box, select the Owning Library, which will default to the workstation library. +. If you want to share the template, set the Share Depth to indicate how far out into your consortium the template will be shared. + + +image::media/serials_wizard6.PNG[] + + +. The Prediction Pattern will now appear in the list of templates and can be used to create predictions for subscriptions. + +NOTE: Prediction Patterns can be edited after creation as long as all predicted issues have the status of "Expected". Once an issue is moved into a different status, the Prediction Pattern cannot be changed. 
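The wizard's enumeration and frequency choices above can be made concrete with a small sketch. This is illustrative Python, not Evergreen's prediction code: a monthly pattern with two enumeration levels ("v." and "no."), 12 numbers per volume, and numbering that restarts at unit completion.

```python
from datetime import date

def predict_issues(start, count, per_volume=12, volume=1, number=1):
    """Sketch of issue prediction for a monthly pattern: "no." advances
    each month, and "v." increments when per_volume numbers complete
    ("Restarts at unit completion")."""
    year, month = start.year, start.month
    predictions = []
    for _ in range(count):
        predictions.append(
            {"label": f"v.{volume} no.{number}", "published": date(year, month, 1)}
        )
        month = month % 12 + 1          # advance one month
        if month == 1:                  # rolled past December
            year += 1
        number += 1
        if number > per_volume:         # volume complete: restart numbering
            number = 1
            volume += 1
    return predictions

# Starting at v.1 no.11 in November shows the volume rollover in January.
issues = predict_issues(date(2020, 11, 1), count=4, number=11)
```

Evergreen derives the equivalent information from the 85x pattern fields; this sketch only shows the shape of the calculation.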
diff --git a/docs/modules/serials/pages/C-serials_workflow.adoc b/docs/modules/serials/pages/C-serials_workflow.adoc new file mode 100644 index 0000000000..17da0d33de --- /dev/null +++ b/docs/modules/serials/pages/C-serials_workflow.adoc @@ -0,0 +1,136 @@ += Serials Module = +:toc: + +The Serials Module can be used to create subscriptions, distributions, streams, and prediction patterns, as well as to generate predictions and receive issues as they come in to the library. + + +To access the Serials Module, go to a serials record in the catalog, and click on *Serials->Manage Subscriptions*. This will open the serials interface for that particular record. In this interface you can: + +. Create and manage subscriptions +. Create and manage predictions +. Create and manage issues +. Create and manage MFHDs + + +image::media/serials_sub0.PNG[] + + +== Create a Subscription == + +. From a bibliographic record, go to *Serials->Manage Subscriptions* and view the _Manage Subscriptions_ tab. +. Within the _Manage Subscriptions_ tab, create a new subscription by clicking *New Subscription*. The subscription editor will appear: +.. Select the _Owning Library_ for the subscription. The owning library indicates the organizational unit(s) whose staff can use this subscription. The rule of parental inheritance applies to this list. For example, if a system is made the owner of a subscription, then users, with appropriate permissions, at the branches within the system could also use this subscription. This field is required. +.. Enter the date that the subscription begins in the _Start Date_ field. This field is required. +.. An _End Date_ for the subscription may also be entered, but it is not required. +.. Optionally, enter an _Expected Offset_. This is the difference between the nominal publishing date of an issue and the date that you expect to receive your copy. 
For example, if an issue is published the first day of each month, but you receive the copy two days prior to the publication date, then enter "-2 days" into this field. +.. Next, create a Distribution for the subscription by selecting the Library for the distribution. Distributions identify the branches that will receive copies of a serial. +... Note: If the Owning Library of the subscription was set at the branch level, the Library will be the same as the Owning Library. If the Owning Library of the subscription was set at the system level, the Library will be set to the holdings library. +.. Enter a Label for the distribution. It may be useful to identify the branch to which you are distributing these issues in this field. This field is not publicly visible and only appears when an item is received. There are no limits on the number of characters that can be entered in this field. +.. Select the preferred _OPAC Display for holdings_: Chronological or Enumeration. +.. Select the _Receiving Template_ that will be applied to items as they are received. The receiving templates are configured in Administration->Serials Administration->Serial Copy Templates. +.. Next, create a Stream by assigning a label to the stream in the _Send to_ field. The stream indicates the number of copies that should be sent to the distribution library. You can click *Add copy stream* if the library will receive multiple copies of the serial. +. After the subscription, distribution, and copy information is configured, click *Save* and go to the _Manage Predictions_ tab to create the prediction pattern that will be used to generate predictions for this title. + +NOTE: After creating a subscription, you can use the Actions menu to take a variety of actions with the subscription, such as adding Subscription or Distribution Notes, linking it to an MFHD record, or creating routing lists. 
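The _Expected Offset_ described above is simple date arithmetic. Here is a sketch of the "-2 days" example (illustrative Python, not Evergreen's implementation):

```python
from datetime import date, timedelta

def expected_receipt_date(publication_date, offset_days):
    """An Expected Offset of -2 days means the copy normally arrives
    two days before the nominal publication date."""
    return publication_date + timedelta(days=offset_days)

# Issue nominally published June 1; the copy usually arrives two days early.
arrival = expected_receipt_date(date(2020, 6, 1), -2)
```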
+ + image::media/serials_sub1.PNG[] + + == Create and Manage Predictions == + From the _Manage Predictions_ tab you can create a new prediction pattern from scratch, use an existing pattern template, or use an existing pattern template as the basis for a new prediction pattern. + === Predict Issues Using a New Prediction Pattern === . Within the _Manage Predictions_ tab, _Select [a] subscription_ to work on from the drop down menu. . To create a new prediction pattern, click *Add New*. .. The box next to *Active* will be checked by default. .. Select the _Type of pattern_ from the drop down menu and click *Create Pattern*. The Pattern Wizard will appear. .. Follow the steps in the section _Creating a Pattern Template_ in this documentation to create a new pattern using the wizard. + + image::media/serials_sub2.PNG[] + + . After creating the pattern in the wizard, click *Create*. The new prediction pattern will now appear under Existing Prediction Patterns. . To create predictions, click *Predict New Issues*. + NOTE: You can also predict new issues from the _Manage Issues_ tab. + + image::media/serials_sub3.PNG[] + + . A dialog box called _Predict New Issues: Initial Values_ will appear. .. Select the _Publication date_ for the subscription. This will be the publication date of the first issue you expect to receive. .. The _Type_ will correspond to the type of prediction pattern selected. .. Enter any _Enumeration labels_ for the first expected issue. .. Enter any _Chronology labels_ for the first expected issue. .. Enter the _Prediction count_. This is the number of issues that you want to predict. . Click *Save*. . Evergreen will generate the predictions and bring you to the _Manage Issues_ tab to review the predicted issues. + + image::media/serials_sub4.PNG[] + + === Predict Issues Using a Prediction Pattern Template === . Within the _Manage Predictions_ tab, *Select [a] subscription* to work on from the drop down menu. . 
_Select a template_ from the drop down menu that appears under the Add New button and click *Create from Template*. The pattern information will appear below the drop down menu. + + image::media/serials_sub5.PNG[] + + . If you want to use the pattern "as is," click *Create*. +.. If you want to review or modify the pattern, click *Edit Pattern*. The Pattern Wizard will appear. +.. The Pattern Wizard will be pre-populated with the pattern template selected. Follow the steps in the section _Creating a Pattern Template_ in this documentation to modify the template or click *Next* on each tab to review the template. +.. After modifying or reviewing the pattern in the wizard, click *Create*. The prediction pattern will now appear under Existing Prediction Patterns. +. To create predictions, click *Predict New Issues*. +.. Note: you can also predict new issues from the _Manage Issues_ tab. +. A dialog box called _Predict New Issues: Initial Values_ will appear. +.. Select the _Publication date_ for the subscription. This will be the publication date of the first issue you expect to receive. +.. The _Type_ will correspond to the type of prediction pattern selected. +.. Enter any _Enumeration labels_ for the first expected issue. +.. Enter any _Chronology labels_ for the first expected issue. +.. Enter the _Prediction count_. This is the number of issues that you want to predict. +. Click *Save*. +. Evergreen will generate the predictions and bring you to the _Manage Issues_ tab to review the predicted issues. + + +=== Predict Issues Using a Prediction Pattern from a Bibliographic and/or MFHD Record === +Evergreen can also generate a prediction pattern from existing MFHD records attached to a serials record and from MFHD patterns embedded directly in the bibliographic record. + +. Within the _Manage Predictions_ tab, *Select [a] subscription* to work on from the drop down menu. +. Click *Import from Bibliographic and/or MFHD Records*. + + image::media/serials_sub6.PNG[] + + . 
A dialog box will appear that presents the available MFHD records and the prediction pattern that will be imported. +. Check the box adjacent to the MFHD record that you would like to import and click *Import*. The new prediction pattern will now appear under _Existing Prediction Patterns_. + + image::media/serials_sub7.PNG[] + + +. If you want to review or modify the pattern, click *Edit Pattern*. The Pattern Wizard will appear. +.. The Pattern Wizard will be pre-populated with the pattern from the MFHD selected. Follow the steps in the section _Creating a Pattern Template_ in this documentation to modify the template or click *Next* on each tab to review the template. +. To create predictions, click *Predict New Issues*. +.. Note: you can also predict new issues from the _Manage Issues_ tab. +. A dialog box called _Predict New Issues: Initial Values_ will appear. +.. Select the _Publication date_ for the subscription. This will be the publication date of the first issue you expect to receive. +.. The _Type_ will correspond to the type of prediction pattern selected. +.. Enter any _Enumeration labels_ for the first expected issue. +.. Enter any _Chronology labels_ for the first expected issue. +.. Enter the _Prediction count_. This is the number of issues that you want to predict. +. Click *Save*. +. Evergreen will generate the predictions and bring you to the _Manage Issues_ tab to review the predicted issues. + + +=== Manage Issues === +After generating predictions in the _Manage Predictions_ tab, you will see a list of the predicted issues in the _Manage Issues_ tab. A variety of actions can be taken in this tab, including receiving issues, predicting new issues, and adding special issues. 
+ + +image::media/serials_sub8.PNG[] diff --git a/docs/modules/serials/pages/D-Receiving.adoc b/docs/modules/serials/pages/D-Receiving.adoc new file mode 100644 index 0000000000..8391d1371c --- /dev/null +++ b/docs/modules/serials/pages/D-Receiving.adoc @@ -0,0 +1,81 @@ += Receiving = +:toc: +Issues can be received through the _Manage Issues_ tab or through the _Quick Receive_ option located in the bibliographic record display. While receiving, staff can select if issues should be barcoded during receipt. + + +== Quick Receive == +. From a serials record in the catalog, go to *Serials->Quick Receive*. +. A dialog box will appear. Select the _Library_ and _Subscription_ for which you are receiving issues from the drop down menu and click *OK/Continue*. +. A _Receive items_ dialog box will appear with the next expected issue. +.. To receive the item(s) and barcode them: +... The _Copy Location_ and _Circulation Modifier_ will be pre-populated from the Receive Template associated with the Distribution. Changes can be made to the pre-populated information. +.... Note: Copy location, call number, and circulation modifier can be applied to multiple copies in batch using the batch modify. +... *Call Number*: Enter a call number. Any item with a barcode must also have a call number. +... *Barcode*: Scan in the barcode that will be affixed to the issue. +... The box adjacent to _Receive the issue_ will be checked by default. +... Check the box adjacent to _Routing List_ to print an existing routing list. +... Click *Save* to receive the issue. The Status of the issue will update to "Received" and a Date Received will be recorded. The barcoded copy will now appear in the holdings area of the catalog and the Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issue. +.. To receive the item(s) without barcoding them: +... Uncheck the box adjacent to _Barcode Items_ and click *Save*. 
The Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issue. + + +image::media/serials_sub9.PNG[] + + +== Receiving from the Manage Issues tab == +The Manage Issues tab can be used to receive the next expected issue and to receive multiple expected issues. This tab can be accessed by retrieving the serial record, going to *Serials->Manage Subscriptions*, and selecting the _Manage Issues_ tab. + + +=== Receive Next Issue and Barcode === + +. Within the _Manage Issues_ tab, *Select [a] subscription* to work on from the drop down menu. The list of predicted issues for the subscription will appear. +. Check the box adjacent to _Barcode on receive_. +. Click *Receive Next*. +. A _Receive items_ dialog box will appear with the next expected issue and item(s). +. The _Copy Location_ and _Circulation Modifier_ will be pre-populated from the Receive Template associated with the Distribution. Changes can be made to the pre-populated information. +. *Call Number*: Enter a call number. Any item with a barcode must also have a call number. +. *Barcode*: Scan in the barcode that will be affixed to the item(s). +. The box to _Receive the item(s)_ will be checked by default. +. Check the box adjacent to _Routing List_ to print an existing routing list. +. Click *Save* to receive the item(s). The Status of the issue will update to "Received" and a Date Received will be recorded. The barcoded item(s) will now appear in the holdings area of the catalog and the Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issue. + + +=== Receive Next Issue (no barcode) === + +. In the _Manage Issues_ tab, make sure the box adjacent to _Barcode on receive_ is unchecked and click *Receive Next*. +. A _Receive items_ dialog box will appear with the message "Will receive # item(s) without barcoding." +. Click *OK/Continue* to receive the issue. 
The Status of the issue will update to "Received" and a Date Received will be recorded. The Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issue. + + image::media/serials_sub10.PNG[] + + == Batch Receiving == Multiple issues can be received at the same time using the _Manage Issues_ tab. + + === Batch Receive and Barcode === + . Within the _Manage Issues_ tab, *Select [a] subscription* to work on from the drop down menu. The list of predicted issues for the subscription will appear. +. Check the box adjacent to _Barcode on receive_. +. Check the boxes adjacent to the expected issues you want to receive. +. Go to *Actions->Receive selected* or _right-click_ on the rows and select *Receive selected* from the drop down menu. +. A _Receive items_ dialog box will appear with the selected issues and items. +. The _Copy Location_ and _Circulation Modifier_ will be pre-populated from the Receive Template associated with the Distribution. Changes can be made to the pre-populated information. +. *Call Number*: Enter a call number. Any item with a barcode must also have a call number. +. *Barcode*: Scan in the barcodes that will be affixed to the items. +. The box to _Receive_ the items will be checked by default. +. Check the box adjacent to _Routing List_ to print an existing routing list. +. Click *Save* to receive the items. The Status of the items will update to "Received" and a Date Received will be recorded. The barcoded items will now appear in the holdings area of the catalog and the Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issues. + + image::media/serials_sub11.PNG[] + + === Receive multiple issues (no barcode) === + . Within the _Manage Issues_ tab, *Select [a] subscription* to work on from the drop down menu. The list of predicted issues for the subscription will appear. +. 
Make sure the box next to _Barcode on receive_ is unchecked and check the boxes adjacent to the expected issues you want to receive. +. Go to *Actions->Receive selected* or _right-click_ on the rows and select *Receive selected* from the drop down menu. +. A _Receive items_ dialog box will appear with the message "Will receive # item(s) without barcoding." +. Click *OK/Continue* to receive the issues. The Status of the issue will update to "Received" and a Date Received will be recorded. The Holdings Summary in the Issues Held tab in the catalog will reflect the newly received issues. + diff --git a/docs/modules/serials/pages/E-routing_lists.adoc b/docs/modules/serials/pages/E-routing_lists.adoc new file mode 100644 index 0000000000..31d18a2666 --- /dev/null +++ b/docs/modules/serials/pages/E-routing_lists.adoc @@ -0,0 +1,19 @@ += Routing Lists = +:toc: + +Routing lists enable you to designate specific users and/or departments that serial items need to be routed to upon receiving. + +*Create a Routing List* + +. To create a routing list for a subscription, go to the _Manage Subscriptions_ tab for a serials record, select the subscription from the list and go to *Actions->Additional Routing*, or _right-click_ and select *Additional Routing*. A dialog box will appear where you can create the routing list. +.. Scan or type in the barcode of the user the items should be routed to in the _Reader (barcode)_ field and click *Add Route*. Continue adding barcodes until the list is complete. +.. To route items to a location, click the radio button next to _Department_, type in the routing location, and click *Add Route*. +.. A _Note_ may be added along with each addition to the list. +.. The names and departments on the list will appear at the top of the dialog box and can be reordered by clicking the arrows or removed by clicking the x next to each name or department. +. When the list is complete, click *Update*. + + +image::media/serials_routing1.PNG[] + + +Routing lists can be printed as items are received (see the documentation on Receiving for more information). 
They can also be printed directly from the _Manage Issues_ tab in a subscription by selecting the item(s) and going to *Actions->Print routing lists* or _right-clicking_ on the item(s) and selecting *Print routing lists* from the menu. diff --git a/docs/modules/serials/pages/F-Special_issue.adoc b/docs/modules/serials/pages/F-Special_issue.adoc new file mode 100644 index 0000000000..93059f3dda --- /dev/null +++ b/docs/modules/serials/pages/F-Special_issue.adoc @@ -0,0 +1,34 @@ += Special Issues = +:toc: + +== Adding Extra Copies == +If the library receives an extra copy of an expected issue, the extra copy can be added to the list of predicted issues so it can be received through the serials module. + +*To add an extra copy of an expected issue*: + +. In the _Manage Issues_ tab, select the issuance that precedes the issuance that you received an extra copy of and go to *Actions->Add following issue* or _right-click_ on the issuance and select *Add following issue* from the menu. +. A dialog box will appear. Verify that the _Publication date_, _Type_, and _Chronology_ labels are correct. The _Enumeration_ labels will be filled in automatically when the issue is created. +. Click *Save* to create the extra copy of the following issue. +. The extra copy will appear in the list of issues and can be received using your typical workflow. + + +image::media/serials_extra1.PNG[] + + +== Adding Special Issues == +If the library receives an unexpected issue of a subscription, such as Summer Issue or Holiday Issue, it can be added to the list of predicted issues as a Special Issue so it can be received through the serials module. + +*To add a special issue*: + +. In the _Manage Issues_ tab, click *Add Special Issue*. A dialog box will appear. +. Enter the _Publication date_ of the special issue. +. Select the _Type_ (typically Basic). +. Add an _Issuance Label_ to identify the special issue, such as "Holiday Issue 2017". +. Click *Save*. +. 
The special issue will appear in the list of issues and can be received using your typical workflow. + + +image::media/serials_extra2.PNG[] + + +NOTE: A special issue may also be added as an ad hoc issue by following the instructions for Adding Extra Copies. Enter the Publication date and Type and check the box adjacent to Ad hoc issue? The form will update to allow you to enter an Issuance Label. diff --git a/docs/modules/serials/pages/G-binding.adoc b/docs/modules/serials/pages/G-binding.adoc new file mode 100644 index 0000000000..553f5227b0 --- /dev/null +++ b/docs/modules/serials/pages/G-binding.adoc @@ -0,0 +1,19 @@ += Binding Issues = +:toc: + +*Apply a binding template:* + +To bind issues, first a binding template needs to be applied to the associated Distribution. + +. Go to the _Manage Subscriptions_ tab and from the grid, select the Distribution(s) with issues you’d like to bind. +. _Right-click_ on the Distribution(s) or go to *Actions* and select *Apply Binding Template*. +. In the dialog box that appears, select the Serial Copy Template you’d like to use from the dropdown and click *Update*. + + +*To bind received issues together:* + +. Go to the _Manage Issues_ tab and select the issues you want to bind together. +. _Right-click_ on the issues or go to *Actions* and select *Bind*. +. The _Bind Items_ interface will appear and all items will be represented on the screen. The first item's fields will be editable. _Modify the Call Number_ if needed. Replace the *Barcode* and click *Save*. + +NOTE: The barcode must be replaced with a new barcode. The binding will fail if you attempt to reuse an existing barcode from one of the items being bound. Evergreen views it as a duplicate barcode. 
diff --git a/docs/modules/serials/pages/Group_Serials_Issues_in_the_OPAC_2.2.adoc b/docs/modules/serials/pages/Group_Serials_Issues_in_the_OPAC_2.2.adoc new file mode 100644 index 0000000000..69f42c9d53 --- /dev/null +++ b/docs/modules/serials/pages/Group_Serials_Issues_in_the_OPAC_2.2.adoc @@ -0,0 +1,49 @@ += Group Serials Issues in the Template Toolkit OPAC = +:toc: + +In previous versions of Evergreen, issues of serials were displayed in a list ordered by publication date. The list could be lengthy if the library had extensive holdings of a serial. +Using the Template Toolkit OPAC that is available in version 2.2, you can group issues of serials in the OPAC by chronology or enumeration. For example, you might group issues by date published or by volume. Users can expand these hyperlinked groups to view holdings of specific issues. The result is a clean, easy-to-navigate interface for viewing holdings of serials with a large quantity of issues. + +NOTE: This feature is only available in the Template Toolkit OPAC. + +== Administration == + +Enable the following organizational unit settings to use this feature: + +. Click *Administration* -> *Local Administration* -> *Library Settings Editor*. +. Search or scroll to find *Serials: Default display grouping for serials distributions presented in the OPAC*. +. Click *Edit*. +. Enter *enum* to display issues by enumeration, or enter *chron* to display issues in chronological order. This value will become your default setting for displaying issues in the OPAC. +. Click *Update Setting*. +. Search or scroll to find *OPAC: Use fully compressed serials holdings*. +. Select the value *True* to view a compressed holdings statement. +. Click *Update Setting*. + +== Displaying Issues in the OPAC == + +Your library system has a subscription to the periodical _Bon Appetit_. The serials librarian has determined that the issues at the Forest Falls branch should display in the OPAC by month and year.
The issues at the McKinley branch should display by volume and number. The serials librarian will create two distributions for the serial that will include these groupings. + +. Retrieve the bibliographic record for the serial, and click *Actions for this Record* -> *Alternate Serial Control*. +. Create a *New Subscription* or click on the hyperlinked ID of an existing subscription. +. Click *New Distribution*. +. Create a label to identify the distribution. +. From the drop down menu, select the holding library that will own physical copies of the issues. +. Select a display grouping. Select *chronology* from the drop down menu. +. Select a template from the drop down menu to receive copies. +. Click *Save*. ++ +image::media/Group_Serials_Issues_in_the_OPAC2.jpg[Group_Serials_Issues_in_the_OPAC2] ++ +. Click *New Distribution* and repeat the process to send issues to the McKinley Branch. Choose *enumeration* in the *Display Grouping* field to display issues by volume and number. +. Complete the creation of your subscription. +. Retrieve the record from the catalog. +. Scroll down to and click the *Issues Held* link. The issues label for each branch appears. +. Click the hyperlinked issues label.
+ +The issues owned by the Forest Falls branch are grouped by chronology: + +image::media/Group_Serials_Issues_in_the_OPAC5.jpg[Group_Serials_Issues_in_the_OPAC5] + +The issues owned by the McKinley branch are grouped by enumeration: + +image::media/Group_Serials_Issues_in_the_OPAC7.jpg[Group_Serials_Issues_in_the_OPAC7] diff --git a/docs/modules/serials/pages/H-holdings_statements.adoc b/docs/modules/serials/pages/H-holdings_statements.adoc new file mode 100644 index 0000000000..1656063dbc --- /dev/null +++ b/docs/modules/serials/pages/H-holdings_statements.adoc @@ -0,0 +1,46 @@ += Holdings = +:toc: + +== System Generated Holdings Statement == +As issues are received, Evergreen creates a holdings statement in the OPAC based on what is set up in the Caption and Patterns of the subscription. The system-generated holdings statement can only be edited by changing the caption and pattern information; it cannot be edited as free text. + +== MARC Format for Holdings Display (MFHD) == +Evergreen users can create, edit, and delete their own MFHD records. + +=== Create an MFHD record === + +*To create an MFHD record:* + +. From a serials record in the catalog, go to *Serials->Manage MFHDs*. This will bring you to the _Manage MFHD_ tab within the serials module. +. Click *Create MFHD*. + + +image::media/serials_mfhd1.PNG[] + + +. A _Create new MFHD_ dialog box will appear. _Select the library_ for which you are creating the MFHD record and click *Create*. +. The MFHD record will appear in the list. Go to *Actions->Edit MFHD* or _right-click_ on the row and select *Edit MFHD* from the drop down menu. + + +image::media/serials_mfhd3.PNG[] + + +. The MARC Editor will appear. _Modify the MFHD record_ as needed and click *Save*. +. The Textual Holdings statement will appear in the _Issues Held_ tab in the catalog. + + +image::media/serials_mfhd6.PNG[] + + +=== Edit an MFHD record === + +.
Open a serial record, go to *Serials* -> *MFHD Record* -> *Manage MFHDs* and select the appropriate MFHD. +. Go to *Actions* or right-click on the MFHD and select *Edit MFHD*. +. The MARC Editor will appear. _Modify the MFHD record_ as needed and click *Save*. + + +=== Delete an MFHD Record === + +. Open a serial record, go to *Serials* -> *MFHD Record* -> *Manage MFHDs* and select the appropriate MFHD. +. Go to *Actions* or right-click on the MFHD and select *Delete Selected MFHDs*. +. Click *OK/Continue* to delete the record. diff --git a/docs/modules/serials/pages/_attributes.adoc b/docs/modules/serials/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/serials/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/shared/_attributes.adoc b/docs/modules/shared/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/shared/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/shared/assets/images/media/ccbysa.png b/docs/modules/shared/assets/images/media/ccbysa.png new file mode 100644 index 0000000000..f0a944e0b8 Binary files /dev/null and b/docs/modules/shared/assets/images/media/ccbysa.png differ diff --git a/docs/modules/shared/pages/_attributes.adoc b/docs/modules/shared/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/shared/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: ..
+include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/shared/pages/about_evergreen.adoc b/docs/modules/shared/pages/about_evergreen.adoc new file mode 100644 index 0000000000..582319ac7a --- /dev/null +++ b/docs/modules/shared/pages/about_evergreen.adoc @@ -0,0 +1,25 @@ += About Evergreen = + +Evergreen is open source library automation software designed to meet the +needs of the very smallest to the very largest libraries and consortia. Through +its staff interface, it facilitates the management, cataloging, and circulation +of library materials, and through its online public access interface it helps +patrons find those materials. + +The Evergreen software is freely licensed under the GNU General Public License, +meaning that it is free to download, use, view, modify, and share. It has an +active development and user community, as well as several companies offering +migration, support, hosting, and development services. + +The community's development requirements state that Evergreen must be: + +* Stable, even under extreme load. +* Robust, and capable of handling a high volume of transactions and simultaneous users. +* Flexible, to accommodate the varied needs of libraries. +* Secure, to protect our patrons’ privacy and data. +* User-friendly, to facilitate patron and staff use of the system. + +Evergreen, which first launched in 2006, now powers over 544 libraries of every +type – public, academic, special, school, and even tribal and home libraries – +in over a dozen countries worldwide. + diff --git a/docs/modules/shared/pages/about_this_documentation.adoc b/docs/modules/shared/pages/about_this_documentation.adoc new file mode 100644 index 0000000000..43fd403c85 --- /dev/null +++ b/docs/modules/shared/pages/about_this_documentation.adoc @@ -0,0 +1,13 @@ += About This Documentation = + +This guide was produced by the Evergreen Documentation Interest Group (DIG), +consisting of numerous volunteers from many different organizations.
The DIG +has drawn together, edited, and supplemented pre-existing documentation +contributed by libraries and consortia running Evergreen that were kind enough +to release their documentation into the Creative Commons. Please see the +xref:shared:attributions.adoc#attributions[Attributions] section for a full list of authors and +contributing organizations. Just like the software it describes, this guide is +a work in progress, continually revised to meet the needs of its users, so if +you find errors or omissions, please let us know by contacting the DIG +facilitators at docs@evergreen-ils.org. + diff --git a/docs/modules/shared/pages/attributions.adoc b/docs/modules/shared/pages/attributions.adoc new file mode 100644 index 0000000000..770d6a8d0d --- /dev/null +++ b/docs/modules/shared/pages/attributions.adoc @@ -0,0 +1,57 @@ +[[attributions]] +[#appendix] += Attributions = + +Copyright © 2009-2018 Evergreen DIG + +Copyright © 2007-2018 Equinox + +Copyright © 2007-2018 Dan Scott + +Copyright © 2009-2018 BC Libraries Cooperative (SITKA) + +Copyright © 2008-2018 King County Library System + +Copyright © 2009-2018 Pioneer Library System + +Copyright © 2009-2018 PALS + +Copyright © 2009-2018 Georgia Public Library Service + +Copyright © 2008-2018 Project Conifer + +Copyright © 2009-2018 Bibliomation + +Copyright © 2008-2018 Evergreen Indiana + +Copyright © 2008-2018 SC LENDS + +Copyright © 2012-2018 CW MARS + +Copyright © 2014-2020 MOBIUS + + +*DIG Contributors* + +* Hilary Caws-Elwitt, Susquehanna County Library +* Karen Collier, Kent County Public Library +* George Duimovich, NRCan Library +* Lynn Floyd, Anderson County Library +* Sally Fortin, Equinox Software +* Wolf Halton, Lyrasis +* Jennifer Pringle, SITKA +* June Rayner, eiNetwork +* Steve Sheppard +* Ben Shum, Bibliomation +* Roni Shwaish, eiNetwork +* Robert Soulliere, Mohawk College +* Remington Steed, Calvin College +* Jeanette Lundgren, CW MARS +* Tim Spindler, CW MARS +* Jane Sandberg, Linn-Benton
Community College +* Lindsay Stratton, Pioneer Library System +* Yamil Suarez, Berklee College of Music +* Jenny Turner, PALS +* Debbie Luchenbill, MOBIUS +* Blake Graham-Henderson, MOBIUS +* Ted Peterson, MOBIUS diff --git a/docs/modules/shared/pages/end_matter.adoc b/docs/modules/shared/pages/end_matter.adoc new file mode 100644 index 0000000000..8600828ae0 --- /dev/null +++ b/docs/modules/shared/pages/end_matter.adoc @@ -0,0 +1,14 @@ +[[licensing]] +[#appendix] += Licensing = + +image::media/ccbysa.png["CC-BY-SA",link="http://creativecommons.org/licenses/by-sa/3.0/"] + +This work is licensed under a +link:http://creativecommons.org/licenses/by-sa/3.0/[Creative +Commons Attribution-ShareAlike 3.0 Unported License]. + + +[#index] +== Index == + diff --git a/docs/modules/shared/pages/index.adoc b/docs/modules/shared/pages/index.adoc new file mode 100644 index 0000000000..fa9fe8c110 --- /dev/null +++ b/docs/modules/shared/pages/index.adoc @@ -0,0 +1,3 @@ +[#index] += Index = + diff --git a/docs/modules/shared/pages/licensing.adoc b/docs/modules/shared/pages/licensing.adoc new file mode 100644 index 0000000000..ce4967538b --- /dev/null +++ b/docs/modules/shared/pages/licensing.adoc @@ -0,0 +1,11 @@ +[[licensing]] +[#appendix] += Licensing = + +image::media/ccbysa.png["CC-BY-SA",link="http://creativecommons.org/licenses/by-sa/3.0/"] + +This work is licensed under a +link:http://creativecommons.org/licenses/by-sa/3.0/[Creative +Commons Attribution-ShareAlike 3.0 Unported License]. + + diff --git a/docs/modules/shared/pages/workstation_settings.adoc b/docs/modules/shared/pages/workstation_settings.adoc new file mode 100644 index 0000000000..56ec62ca4c --- /dev/null +++ b/docs/modules/shared/pages/workstation_settings.adoc @@ -0,0 +1,30 @@ += Configuring Evergreen for your workstation = + +== Setting search defaults == + +* Go to Administration -> Workstation. +* Use the dropdown menu to select an appropriate +_Default Search Library_. 
The default search library +setting determines what library is searched from the +advanced search screen and portal page by default. +You can override this setting when you are actually +searching by selecting a different library. +One recommendation is to set the search library to the +highest point you would normally want to search. +* Use the dropdown menu to select an appropriate +_Preferred Library_. The preferred library is used to +show copies and electronic resource URIs regardless +of the library searched. One recommendation is to set +this to your home library so that local copies show up +first in search results. +* Use the dropdown menu to select an appropriate +_Advanced Search Default Pane_. Advanced search has +secondary panes for Numeric and MARC Expert searching. +Here you can change which one is loaded by default when +opening a new catalog window. + +== Turning off sounds == + +* Go to Administration -> Workstation. +* Click the checkbox labeled _Disable Sounds?_ + diff --git a/docs/modules/sys_admin/_attributes.adoc b/docs/modules/sys_admin/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/sys_admin/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/sys_admin/nav.adoc b/docs/modules/sys_admin/nav.adoc new file mode 100644 index 0000000000..ddc3c2c568 --- /dev/null +++ b/docs/modules/sys_admin/nav.adoc @@ -0,0 +1,24 @@ +* xref:sys_admin:introduction.adoc[System Administration From the Staff Client] +** xref:admin:acquisitions_admin.adoc[Acquisitions Administration] +** xref:admin:age_hold_protection.adoc[Age hold protection] +** xref:admin:authorities.adoc[Authorities] +** xref:admin:Best_Hold_Selection_Sort_Order.adoc[Best-Hold Selection Sort Order] +** xref:admin:booking-admin.adoc[Booking Module Administration] +** 
xref:admin:cn_prefixes_and_suffixes.adoc[Call Number Prefixes and Suffixes] +** xref:admin:desk_payments.adoc[Cash Reports] +** xref:admin:circulation_limit_groups.adoc[Circulation Limit Sets] +** xref:admin:copy_statuses.adoc[Item Status] +** xref:admin:floating_groups.adoc[Floating Groups] +** xref:admin:MARC_Import_Remove_Fields.adoc[MARC Import Remove Fields] +** xref:admin:copy_tags_admin.adoc[Item Tags (Digital Bookplates)] +** xref:admin:MARC_RAD_MVF_CRA.adoc[MARC Record Attributes] +*** xref:admin:multilingual_search.adoc[Multilingual Search in Evergreen] +*** xref:admin:infrastructure_auth_browse.adoc[Infrastructure Changes to Authority Browse] +*** xref:admin:virtual_index_defs.adoc[Virtual Index Definitions] +** xref:admin:Org_Unit_Proximity_Adjustments.adoc[Org Unit Proximity Adjustments] +** xref:admin:physical_char_wizard_db.adoc[Administering the Physical Characteristics Wizard] +** xref:admin:copy_locations.adoc[Administering shelving locations] +** xref:admin:permissions.adoc[User and Group Permissions] +** xref:admin:SMS_messaging.adoc[SMS Text Messaging] +** xref:admin:user_activity_type.adoc[User Activity Types] +** xref:admin:restrict_Z39.50_sources_by_perm_group.adoc[Z39.50 Servers] diff --git a/docs/modules/sys_admin/pages/_attributes.adoc b/docs/modules/sys_admin/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/sys_admin/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/sys_admin/pages/introduction.adoc b/docs/modules/sys_admin/pages/introduction.adoc new file mode 100644 index 0000000000..1863a675cc --- /dev/null +++ b/docs/modules/sys_admin/pages/introduction.adoc @@ -0,0 +1,3 @@ += Introduction = +This part deals with the options in the Server Administration menu found in the +staff client. 
diff --git a/docs/modules/using_staff_client/_attributes.adoc b/docs/modules/using_staff_client/_attributes.adoc new file mode 100644 index 0000000000..dec438a296 --- /dev/null +++ b/docs/modules/using_staff_client/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/modules/using_staff_client/nav.adoc b/docs/modules/using_staff_client/nav.adoc new file mode 100644 index 0000000000..b47b91c63a --- /dev/null +++ b/docs/modules/using_staff_client/nav.adoc @@ -0,0 +1,7 @@ +* xref:using_staff_client:introduction.adoc[Using the Browser Staff Client] +** xref:admin:web_client-login.adoc[Logging into Evergreen] +** xref:admin:web-client-browser-best-practices.adoc[Best Practices for Using the Browser] +** xref:admin:staff_client-column_picker.adoc[Column Picker] +** xref:admin:staff_client-recent_searches.adoc[Recent Staff Searches] +** xref:admin:workstation_admin.adoc[Workstation Administration] +*** xref:admin:receipt_template_editor.adoc[Receipt Template Editor] diff --git a/docs/modules/using_staff_client/pages/_attributes.adoc b/docs/modules/using_staff_client/pages/_attributes.adoc new file mode 100644 index 0000000000..fb982443d7 --- /dev/null +++ b/docs/modules/using_staff_client/pages/_attributes.adoc @@ -0,0 +1,2 @@ +:moduledir: .. +include::{moduledir}/_attributes.adoc[] diff --git a/docs/modules/using_staff_client/pages/introduction.adoc b/docs/modules/using_staff_client/pages/introduction.adoc new file mode 100644 index 0000000000..c6e1d5a298 --- /dev/null +++ b/docs/modules/using_staff_client/pages/introduction.adoc @@ -0,0 +1,8 @@ += Introduction = + +This part of the documentation deals with general Browser Client usage including +logging in, navigation and shortcuts. + +For information about the XUL client, consult the +http://docs.evergreen-ils.org/2.11/[Evergreen 2.11 documentation]. 
+ diff --git a/docs/setup_lunr.yml b/docs/setup_lunr.yml new file mode 100644 index 0000000000..583fe3678d --- /dev/null +++ b/docs/setup_lunr.yml @@ -0,0 +1,24 @@ +--- +- hosts: 'localhost' + connection: local + remote_user: user + become_method: sudo + tasks: + - name: Insert const generateIndex + lineinfile: + path: node_modules/@antora/site-generator-default/lib/generate-site.js + insertafter: 'use strict' + line: "const generateIndex = require('antora-lunr')" + + - name: Insert const index + lineinfile: + path: node_modules/@antora/site-generator-default/lib/generate-site.js + insertafter: 'const siteFiles = mapSit' + line: " const index = generateIndex(playbook, pages)" + + - name: Insert line siteFiles.push(generateIndex.createIndexFile(index)) + lineinfile: + path: node_modules/@antora/site-generator-default/lib/generate-site.js + insertafter: 'const index = generateIn' + line: " siteFiles.push(generateIndex.createIndexFile(index))" +... \ No newline at end of file diff --git a/docs/site.yml b/docs/site.yml new file mode 100644 index 0000000000..fb26eda3ed --- /dev/null +++ b/docs/site.yml @@ -0,0 +1,17 @@ +site: + title: Evergreen Documentation + start_page: docs:shared:about_this_documentation + url: http://localhost/prod +content: + sources: + - url: ../ +# - url: git@git.evergreen-ils.org:working/Evergreen.git + branches: LP1848524_antora_ize_docs + start_path: docs-antora +ui: + bundle: + url: ./../../eg-antora/build/ui-bundle.zip + supplemental_files: ./ui/ui-lunr + +output: + dir: /var/www/html/prod diff --git a/docs/topics/acquisitions/antora.yml b/docs/topics/acquisitions/antora.yml new file mode 100644 index 0000000000..2c13efd3f4 --- /dev/null +++ b/docs/topics/acquisitions/antora.yml @@ -0,0 +1,5 @@ +name: acq +title: Evergreen Acquisitions Manual +version: 'latest' +nav: +- modules/ROOT/nav.adoc diff --git a/docs/topics/acquisitions/modules/ROOT/_attributes.adoc b/docs/topics/acquisitions/modules/ROOT/_attributes.adoc new file mode 100644 index 
0000000000..dec438a296 --- /dev/null +++ b/docs/topics/acquisitions/modules/ROOT/_attributes.adoc @@ -0,0 +1,4 @@ +:attachmentsdir: {moduledir}/assets/attachments +:examplesdir: {moduledir}/examples +:imagesdir: {moduledir}/assets/images +:partialsdir: {moduledir}/pages/_partials diff --git a/docs/topics/acquisitions/modules/ROOT/nav.adoc b/docs/topics/acquisitions/modules/ROOT/nav.adoc new file mode 100644 index 0000000000..df52d41beb --- /dev/null +++ b/docs/topics/acquisitions/modules/ROOT/nav.adoc @@ -0,0 +1,18 @@ +* xref:ROOT:index.adoc[Introduction] +** xref:docs:shared:about_evergreen.adoc[About Evergreen] +* xref:docs:admin:web_client-login.adoc[Logging into Evergreen] +* xref:docs:admin:web-client-browser-best-practices.adoc[Best Practices for Using the Browser] +* xref:docs:shared:workstation_settings.adoc[Configuring Evergreen for Your Workstation] +* xref:docs:acquisitions:introduction.adoc[Acquisitions] +* xref:docs:acquisitions:selection_lists_po.adoc[Selection Lists and Purchase Orders] +* xref:docs:acquisitions:vandelay_acquisitions_integration.adoc[Load MARC Order Records] +* xref:docs:acquisitions:invoices.adoc[Invoices] +* xref:docs:acquisitions:receive_items_from_invoice.adoc[] +* xref:docs:acquisitions:purchase_requests_management.adoc[Managing Patron Purchase Requests] +* xref:docs:acquisitions:purchase_requests_patron_view.adoc[] +* xref:docs:opac:using_the_public_access_catalog.adoc[Using the Public Access Catalog] +* xref:docs:shared:attributions.adoc[Attributions] +* xref:docs:shared:attributions.adoc[Appendix A. Attributions] +* xref:docs:shared:licensing.adoc[Appendix B. 
Licensing] +* xref:docs:appendix:glossary.adoc[Glossary] +* xref:docs:shared:index.adoc[Index] diff --git a/docs/topics/acquisitions/modules/ROOT/pages/index.adoc b/docs/topics/acquisitions/modules/ROOT/pages/index.adoc new file mode 100644 index 0000000000..86bf5a31c7 --- /dev/null +++ b/docs/topics/acquisitions/modules/ROOT/pages/index.adoc @@ -0,0 +1,5 @@ += Evergreen Acquisitions Manual = + +This guide to Evergreen is intended to meet the needs of library workers who use +Evergreen's Acquisitions module. It is organized into Parts, Chapters, and +Sections addressing key aspects of the software. diff --git a/docs/ui/.editorconfig b/docs/ui/.editorconfig new file mode 100644 index 0000000000..c6c8b36219 --- /dev/null +++ b/docs/ui/.editorconfig @@ -0,0 +1,9 @@ +root = true + +[*] +indent_style = space +indent_size = 2 +end_of_line = lf +charset = utf-8 +trim_trailing_whitespace = true +insert_final_newline = true diff --git a/docs/ui/.eslintrc b/docs/ui/.eslintrc new file mode 100644 index 0000000000..f8fb261492 --- /dev/null +++ b/docs/ui/.eslintrc @@ -0,0 +1,9 @@ +{ + "extends": "standard", + "rules": { + "arrow-parens": ["error", "always"], + "comma-dangle": ["error", "always-multiline"], + "max-len": [1, 120, 2], + "spaced-comment": "off" + } +} diff --git a/docs/ui/.gitignore b/docs/ui/.gitignore new file mode 100644 index 0000000000..57834a1291 --- /dev/null +++ b/docs/ui/.gitignore @@ -0,0 +1,3 @@ +/build/ +/node_modules/ +/public/ diff --git a/docs/ui/.gitlab-ci.yml b/docs/ui/.gitlab-ci.yml new file mode 100644 index 0000000000..b183e33c59 --- /dev/null +++ b/docs/ui/.gitlab-ci.yml @@ -0,0 +1,55 @@ +image: node:10.14.2-stretch +stages: [setup, verify, deploy] +install: + stage: setup + cache: + paths: + - .cache/npm + script: + - &npm_install + npm install --quiet --no-progress --cache=.cache/npm +lint: + stage: verify + cache: &pull_cache + policy: pull + paths: + - .cache/npm + script: + - *npm_install + - node_modules/.bin/gulp lint +bundle-stable: + 
stage: deploy + only: + - master@antora/antora-ui-default + cache: *pull_cache + script: + - *npm_install + - node_modules/.bin/gulp bundle + artifacts: + paths: + - build/ui-bundle.zip +bundle-dev: + stage: deploy + except: + - master + cache: *pull_cache + script: + - *npm_install + - node_modules/.bin/gulp bundle + artifacts: + expire_in: 1 day # unless marked as keep from job page + paths: + - build/ui-bundle.zip +pages: + stage: deploy + only: + - master@antora/antora-ui-default + cache: *pull_cache + script: + - *npm_install + - node_modules/.bin/gulp preview:build + # FIXME figure out a way to avoid copying these files to preview site + - rm -rf public/_/{helpers,layouts,partials} + artifacts: + paths: + - public diff --git a/docs/ui/.gulp.json b/docs/ui/.gulp.json new file mode 100644 index 0000000000..2da9b16c1e --- /dev/null +++ b/docs/ui/.gulp.json @@ -0,0 +1,4 @@ +{ + "description": "Build tasks for the Antora default UI project", + "flags.tasksDepth": 1 +} diff --git a/docs/ui/.nvmrc b/docs/ui/.nvmrc new file mode 100644 index 0000000000..f599e28b8a --- /dev/null +++ b/docs/ui/.nvmrc @@ -0,0 +1 @@ +10 diff --git a/docs/ui/.stylelintrc b/docs/ui/.stylelintrc new file mode 100644 index 0000000000..344318f3c5 --- /dev/null +++ b/docs/ui/.stylelintrc @@ -0,0 +1,7 @@ +{ + "extends": "stylelint-config-standard", + "rules": { + "comment-empty-line-before": null, + "no-descending-specificity": null, + } +} diff --git a/docs/ui/ui-lunr/css/search.css b/docs/ui/ui-lunr/css/search.css new file mode 100644 index 0000000000..d9af4ac3ba --- /dev/null +++ b/docs/ui/ui-lunr/css/search.css @@ -0,0 +1,115 @@ +.navbar-brand .navbar-item + .navbar-item { + flex-grow: 1; + justify-content: flex-end; +} + +@media screen and (min-width: 1024px) { + .navbar-brand { + flex-grow: 1; + } + + .navbar-menu { + flex-grow: 0; + } +} + +#search-input { + color: #333; + font-family: inherit; + font-size: 0.95rem; + width: 150px; + border: 1px solid #dbdbdb; + border-radius: 0.1em; + 
line-height: 1.5; + padding: 0 0.25em; +} + +@media screen and (min-width: 769px) { + #search-input { + width: 200px; + } +} + +.search-result-dropdown-menu { + position: absolute; + z-index: 100; + display: block; + right: 0; + left: inherit; + top: 100%; + border-radius: 4px; + margin: 6px 0 0; + padding: 0; + text-align: left; + height: auto; + background: transparent; + border: none; + max-width: 600px; + min-width: 500px; + box-shadow: 0 1px 0 0 rgba(0, 0, 0, 0.2), 0 2px 3px 0 rgba(0, 0, 0, 0.1); +} + +@media screen and (max-width: 768px) { + .navbar-brand .navbar-item + .navbar-item { + padding-left: 0; + padding-right: 0; + } + + .search-result-dropdown-menu { + min-width: calc(100vw - 3.75rem); + } +} + +.search-result-dataset { + position: relative; + border: 1px solid #d9d9d9; + background: #fff; + border-radius: 4px; + overflow: auto; + padding: 0 8px 8px; + max-height: calc(100vh - 5.25rem); + color: #333; +} + +.search-result-highlight { + color: #174d8c; + background: rgba(143, 187, 237, 0.1); + padding: .1em .05em; +} + +.search-result-item { + display: flex; + font-size: 1rem; + margin-bottom: 0.5rem; + margin-top: 0.5rem; +} + +.search-result-document-title { + width: 33%; + border-right: 1px solid #ddd; + color: #a4a7ae; + font-size: 0.8rem; + padding: 0.25rem 0.5rem 0.25rem 0; + text-align: right; + position: relative; + word-wrap: break-word; +} + +.search-result-document-hit { + flex: 1; + font-size: 0.75em; + color: #02060c; + font-weight: 700; +} + +.search-result-document-hit > a { + color: inherit; + display: block; + padding: 0.5rem 0 0.5rem 1rem; + margin-bottom: 0.25rem; +} + +.search-result-document-hit > a:hover { + background-color: rgba(69, 142, 225, 0.05); +} + diff --git a/docs/ui/ui-lunr/js/vendor/lunr.js b/docs/ui/ui-lunr/js/vendor/lunr.js new file mode 100644 index 0000000000..c3537658a6 --- /dev/null +++ b/docs/ui/ui-lunr/js/vendor/lunr.js @@ -0,0 +1,3475 @@ +/** + * lunr - http://lunrjs.com - A bit like Solr, but much smaller 
and not as bright - 2.3.8 + * Copyright (C) 2019 Oliver Nightingale + * @license MIT + */ + +;(function(){ + +/** + * A convenience function for configuring and constructing + * a new lunr Index. + * + * A lunr.Builder instance is created and the pipeline setup + * with a trimmer, stop word filter and stemmer. + * + * This builder object is yielded to the configuration function + * that is passed as a parameter, allowing the list of fields + * and other builder parameters to be customised. + * + * All documents _must_ be added within the passed config function. + * + * @example + * var idx = lunr(function () { + * this.field('title') + * this.field('body') + * this.ref('id') + * + * documents.forEach(function (doc) { + * this.add(doc) + * }, this) + * }) + * + * @see {@link lunr.Builder} + * @see {@link lunr.Pipeline} + * @see {@link lunr.trimmer} + * @see {@link lunr.stopWordFilter} + * @see {@link lunr.stemmer} + * @namespace {function} lunr + */ +var lunr = function (config) { + var builder = new lunr.Builder + + builder.pipeline.add( + lunr.trimmer, + lunr.stopWordFilter, + lunr.stemmer + ) + + builder.searchPipeline.add( + lunr.stemmer + ) + + config.call(builder, builder) + return builder.build() +} + +lunr.version = "2.3.8" +/*! + * lunr.utils + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * A namespace containing utils for the rest of the lunr library + * @namespace lunr.utils + */ +lunr.utils = {} + +/** + * Print a warning message to the console. + * + * @param {String} message The message to be printed. + * @memberOf lunr.utils + * @function + */ +lunr.utils.warn = (function (global) { + /* eslint-disable no-console */ + return function (message) { + if (global.console && console.warn) { + console.warn(message) + } + } + /* eslint-enable no-console */ +})(this) + +/** + * Convert an object to a string. 
+ * + * In the case of `null` and `undefined` the function returns + * the empty string, in all other cases the result of calling + * `toString` on the passed object is returned. + * + * @param {Any} obj The object to convert to a string. + * @return {String} string representation of the passed object. + * @memberOf lunr.utils + */ +lunr.utils.asString = function (obj) { + if (obj === void 0 || obj === null) { + return "" + } else { + return obj.toString() + } +} + +/** + * Clones an object. + * + * Will create a copy of an existing object such that any mutations + * on the copy cannot affect the original. + * + * Only shallow objects are supported, passing a nested object to this + * function will cause a TypeError. + * + * Objects with primitives, and arrays of primitives are supported. + * + * @param {Object} obj The object to clone. + * @return {Object} a clone of the passed object. + * @throws {TypeError} when a nested object is passed. + * @memberOf Utils + */ +lunr.utils.clone = function (obj) { + if (obj === null || obj === undefined) { + return obj + } + + var clone = Object.create(null), + keys = Object.keys(obj) + + for (var i = 0; i < keys.length; i++) { + var key = keys[i], + val = obj[key] + + if (Array.isArray(val)) { + clone[key] = val.slice() + continue + } + + if (typeof val === 'string' || + typeof val === 'number' || + typeof val === 'boolean') { + clone[key] = val + continue + } + + throw new TypeError("clone is not deep and does not support nested objects") + } + + return clone +} +lunr.FieldRef = function (docRef, fieldName, stringValue) { + this.docRef = docRef + this.fieldName = fieldName + this._stringValue = stringValue +} + +lunr.FieldRef.joiner = "/" + +lunr.FieldRef.fromString = function (s) { + var n = s.indexOf(lunr.FieldRef.joiner) + + if (n === -1) { + throw "malformed field ref string" + } + + var fieldRef = s.slice(0, n), + docRef = s.slice(n + 1) + + return new lunr.FieldRef (docRef, fieldRef, s) +} + 
+lunr.FieldRef.prototype.toString = function () { + if (this._stringValue == undefined) { + this._stringValue = this.fieldName + lunr.FieldRef.joiner + this.docRef + } + + return this._stringValue +} +/*! + * lunr.Set + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * A lunr set. + * + * @constructor + */ +lunr.Set = function (elements) { + this.elements = Object.create(null) + + if (elements) { + this.length = elements.length + + for (var i = 0; i < this.length; i++) { + this.elements[elements[i]] = true + } + } else { + this.length = 0 + } +} + +/** + * A complete set that contains all elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.complete = { + intersect: function (other) { + return other + }, + + union: function (other) { + return other + }, + + contains: function () { + return true + } +} + +/** + * An empty set that contains no elements. + * + * @static + * @readonly + * @type {lunr.Set} + */ +lunr.Set.empty = { + intersect: function () { + return this + }, + + union: function (other) { + return other + }, + + contains: function () { + return false + } +} + +/** + * Returns true if this set contains the specified object. + * + * @param {object} object - Object whose presence in this set is to be tested. + * @returns {boolean} - True if this set contains the specified object. + */ +lunr.Set.prototype.contains = function (object) { + return !!this.elements[object] +} + +/** + * Returns a new set containing only the elements that are present in both + * this set and the specified set. + * + * @param {lunr.Set} other - set to intersect with this set. + * @returns {lunr.Set} a new set that is the intersection of this and the specified set. 
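The `lunr.FieldRef` code above encodes a field/document pair as a single string, splitting on the *first* joiner only so that document refs may themselves contain `/`. A standalone sketch of that encoding (the helper names `fieldRefToString`/`fieldRefFromString` are illustrative, not part of lunr's API):

```javascript
// Sketch of lunr.FieldRef's string encoding: "fieldName/docRef",
// split on the FIRST joiner only so docRefs may contain '/'.
var JOINER = '/'

function fieldRefToString(fieldName, docRef) {
  return fieldName + JOINER + docRef
}

function fieldRefFromString(s) {
  var n = s.indexOf(JOINER)
  if (n === -1) throw new Error('malformed field ref string')
  // Everything before the first '/' is the field name; the rest is the doc ref.
  return { fieldName: s.slice(0, n), docRef: s.slice(n + 1) }
}
```

Splitting on the first joiner is what makes refs like `title/doc/42` round-trip safely.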
+ */ + +lunr.Set.prototype.intersect = function (other) { + var a, b, elements, intersection = [] + + if (other === lunr.Set.complete) { + return this + } + + if (other === lunr.Set.empty) { + return other + } + + if (this.length < other.length) { + a = this + b = other + } else { + a = other + b = this + } + + elements = Object.keys(a.elements) + + for (var i = 0; i < elements.length; i++) { + var element = elements[i] + if (element in b.elements) { + intersection.push(element) + } + } + + return new lunr.Set (intersection) +} + +/** + * Returns a new set combining the elements of this and the specified set. + * + * @param {lunr.Set} other - set to union with this set. + * @return {lunr.Set} a new set that is the union of this and the specified set. + */ + +lunr.Set.prototype.union = function (other) { + if (other === lunr.Set.complete) { + return lunr.Set.complete + } + + if (other === lunr.Set.empty) { + return this + } + + return new lunr.Set(Object.keys(this.elements).concat(Object.keys(other.elements))) +} +/** + * A function to calculate the inverse document frequency for + * a posting. This is shared between the builder and the index + * + * @private + * @param {object} posting - The posting for a given term + * @param {number} documentCount - The total number of documents. + */ +lunr.idf = function (posting, documentCount) { + var documentsWithTerm = 0 + + for (var fieldName in posting) { + if (fieldName == '_index') continue // Ignore the term index, its not a field + documentsWithTerm += Object.keys(posting[fieldName]).length + } + + var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5) + + return Math.log(1 + Math.abs(x)) +} + +/** + * A token wraps a string representation of a token + * as it is passed through the text processing pipeline. + * + * @constructor + * @param {string} [str=''] - The string token being wrapped. + * @param {object} [metadata={}] - Metadata associated with this token. 
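The `lunr.idf` helper above computes a BM25-style inverse document frequency, `log(1 + |(N - n + 0.5) / (n + 0.5)|)`. A minimal sketch of just the formula, taking plain counts instead of lunr's posting object (the signature here is illustrative):

```javascript
// Sketch of the BM25-flavoured IDF used by lunr.idf, with the
// document counts passed directly rather than derived from a posting.
function idf(documentsWithTerm, documentCount) {
  var x = (documentCount - documentsWithTerm + 0.5) / (documentsWithTerm + 0.5)
  // Math.abs keeps the log argument positive even when the term
  // appears in more than half of the documents (x would go negative).
  return Math.log(1 + Math.abs(x))
}
```

Rare terms score higher: with 1000 documents, a term in 1 document gets a much larger weight than one in 900.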
+ */ +lunr.Token = function (str, metadata) { + this.str = str || "" + this.metadata = metadata || {} +} + +/** + * Returns the token string that is being wrapped by this object. + * + * @returns {string} + */ +lunr.Token.prototype.toString = function () { + return this.str +} + +/** + * A token update function is used when updating or optionally + * when cloning a token. + * + * @callback lunr.Token~updateFunction + * @param {string} str - The string representation of the token. + * @param {Object} metadata - All metadata associated with this token. + */ + +/** + * Applies the given function to the wrapped string token. + * + * @example + * token.update(function (str, metadata) { + * return str.toUpperCase() + * }) + * + * @param {lunr.Token~updateFunction} fn - A function to apply to the token string. + * @returns {lunr.Token} + */ +lunr.Token.prototype.update = function (fn) { + this.str = fn(this.str, this.metadata) + return this +} + +/** + * Creates a clone of this token. Optionally a function can be + * applied to the cloned token. + * + * @param {lunr.Token~updateFunction} [fn] - An optional function to apply to the cloned token. + * @returns {lunr.Token} + */ +lunr.Token.prototype.clone = function (fn) { + fn = fn || function (s) { return s } + return new lunr.Token (fn(this.str, this.metadata), this.metadata) +} +/*! + * lunr.tokenizer + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * A function for splitting a string into tokens ready to be inserted into + * the search index. Uses `lunr.tokenizer.separator` to split strings, change + * the value of this property to change how strings are split into tokens. + * + * This tokenizer will convert its parameter to a string by calling `toString` and + * then will split this string on the character in `lunr.tokenizer.separator`. + * Arrays will have their elements converted to strings and wrapped in a lunr.Token. 
+ * + * Optional metadata can be passed to the tokenizer, this metadata will be cloned and + * added as metadata to every token that is created from the object to be tokenized. + * + * @static + * @param {?(string|object|object[])} obj - The object to convert into tokens + * @param {?object} metadata - Optional metadata to associate with every token + * @returns {lunr.Token[]} + * @see {@link lunr.Pipeline} + */ +lunr.tokenizer = function (obj, metadata) { + if (obj == null || obj == undefined) { + return [] + } + + if (Array.isArray(obj)) { + return obj.map(function (t) { + return new lunr.Token( + lunr.utils.asString(t).toLowerCase(), + lunr.utils.clone(metadata) + ) + }) + } + + var str = obj.toString().toLowerCase(), + len = str.length, + tokens = [] + + for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= len; sliceEnd++) { + var char = str.charAt(sliceEnd), + sliceLength = sliceEnd - sliceStart + + if ((char.match(lunr.tokenizer.separator) || sliceEnd == len)) { + + if (sliceLength > 0) { + var tokenMetadata = lunr.utils.clone(metadata) || {} + tokenMetadata["position"] = [sliceStart, sliceLength] + tokenMetadata["index"] = tokens.length + + tokens.push( + new lunr.Token ( + str.slice(sliceStart, sliceEnd), + tokenMetadata + ) + ) + } + + sliceStart = sliceEnd + 1 + } + + } + + return tokens +} + +/** + * The separator used to split a string into tokens. Override this property to change the behaviour of + * `lunr.tokenizer` behaviour when tokenizing strings. By default this splits on whitespace and hyphens. + * + * @static + * @see lunr.tokenizer + */ +lunr.tokenizer.separator = /[\s\-]+/ +/*! + * lunr.Pipeline + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * lunr.Pipelines maintain an ordered list of functions to be applied to all + * tokens in documents entering the search index and queries being ran against + * the index. 
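The tokenizer above lowercases its input, splits on `lunr.tokenizer.separator` (whitespace and hyphens by default), and stamps each token with `position` (`[start, length]`) and `index` metadata. A self-contained sketch of that splitting behaviour, using plain objects instead of `lunr.Token`:

```javascript
// Sketch of lunr.tokenizer's splitting: lowercase, split on
// whitespace/hyphens, record each token's [start, length] and ordinal.
var SEPARATOR = /[\s\-]+/ // mirrors lunr.tokenizer.separator

function tokenize(str) {
  var s = str.toString().toLowerCase()
  var tokens = []

  for (var sliceEnd = 0, sliceStart = 0; sliceEnd <= s.length; sliceEnd++) {
    var ch = s.charAt(sliceEnd)
    var sliceLength = sliceEnd - sliceStart

    // A separator character (or the end of the string) closes a token.
    if (ch.match(SEPARATOR) || sliceEnd === s.length) {
      if (sliceLength > 0) {
        tokens.push({
          str: s.slice(sliceStart, sliceEnd),
          metadata: { position: [sliceStart, sliceLength], index: tokens.length }
        })
      }
      sliceStart = sliceEnd + 1
    }
  }

  return tokens
}
```

So `'Foo-bar baz'` yields three tokens, with hyphens treated the same as spaces.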
+ * + * An instance of lunr.Index created with the lunr shortcut will contain a + * pipeline with a stop word filter and an English language stemmer. Extra + * functions can be added before or after either of these functions or these + * default functions can be removed. + * + * When run the pipeline will call each function in turn, passing a token, the + * index of that token in the original list of all tokens and finally a list of + * all the original tokens. + * + * The output of functions in the pipeline will be passed to the next function + * in the pipeline. To exclude a token from entering the index the function + * should return undefined, the rest of the pipeline will not be called with + * this token. + * + * For serialisation of pipelines to work, all functions used in an instance of + * a pipeline should be registered with lunr.Pipeline. Registered functions can + * then be loaded. If trying to load a serialised pipeline that uses functions + * that are not registered an error will be thrown. + * + * If not planning on serialising the pipeline then registering pipeline functions + * is not necessary. + * + * @constructor + */ +lunr.Pipeline = function () { + this._stack = [] +} + +lunr.Pipeline.registeredFunctions = Object.create(null) + +/** + * A pipeline function maps lunr.Token to lunr.Token. A lunr.Token contains the token + * string as well as all known metadata. A pipeline function can mutate the token string + * or mutate (or add) metadata for a given token. + * + * A pipeline function can indicate that the passed token should be discarded by returning + * null, undefined or an empty string. This token will not be passed to any downstream pipeline + * functions and will not be added to the index. + * + * Multiple tokens can be returned by returning an array of tokens. Each token will be passed + * to any downstream pipeline functions and all will returned tokens will be added to the index. 
+ * + * Any number of pipeline functions may be chained together using a lunr.Pipeline. + * + * @interface lunr.PipelineFunction + * @param {lunr.Token} token - A token from the document being processed. + * @param {number} i - The index of this token in the complete list of tokens for this document/field. + * @param {lunr.Token[]} tokens - All tokens for this document/field. + * @returns {(?lunr.Token|lunr.Token[])} + */ + +/** + * Register a function with the pipeline. + * + * Functions that are used in the pipeline should be registered if the pipeline + * needs to be serialised, or a serialised pipeline needs to be loaded. + * + * Registering a function does not add it to a pipeline, functions must still be + * added to instances of the pipeline for them to be used when running a pipeline. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @param {String} label - The label to register this function with + */ +lunr.Pipeline.registerFunction = function (fn, label) { + if (label in this.registeredFunctions) { + lunr.utils.warn('Overwriting existing registered function: ' + label) + } + + fn.label = label + lunr.Pipeline.registeredFunctions[fn.label] = fn +} + +/** + * Warns if the function is not registered as a Pipeline function. + * + * @param {lunr.PipelineFunction} fn - The function to check for. + * @private + */ +lunr.Pipeline.warnIfFunctionNotRegistered = function (fn) { + var isRegistered = fn.label && (fn.label in this.registeredFunctions) + + if (!isRegistered) { + lunr.utils.warn('Function is not registered with pipeline. This may cause problems when serialising the index.\n', fn) + } +} + +/** + * Loads a previously serialised pipeline. + * + * All functions to be loaded must already be registered with lunr.Pipeline. + * If any function from the serialised data has not been registered then an + * error will be thrown. + * + * @param {Object} serialised - The serialised pipeline to load. 
+ * @returns {lunr.Pipeline} + */ +lunr.Pipeline.load = function (serialised) { + var pipeline = new lunr.Pipeline + + serialised.forEach(function (fnName) { + var fn = lunr.Pipeline.registeredFunctions[fnName] + + if (fn) { + pipeline.add(fn) + } else { + throw new Error('Cannot load unregistered function: ' + fnName) + } + }) + + return pipeline +} + +/** + * Adds new functions to the end of the pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction[]} functions - Any number of functions to add to the pipeline. + */ +lunr.Pipeline.prototype.add = function () { + var fns = Array.prototype.slice.call(arguments) + + fns.forEach(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + this._stack.push(fn) + }, this) +} + +/** + * Adds a single function after a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. + */ +lunr.Pipeline.prototype.after = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + pos = pos + 1 + this._stack.splice(pos, 0, newFn) +} + +/** + * Adds a single function before a function that already exists in the + * pipeline. + * + * Logs a warning if the function has not been registered. + * + * @param {lunr.PipelineFunction} existingFn - A function that already exists in the pipeline. + * @param {lunr.PipelineFunction} newFn - The new function to add to the pipeline. 
+ */ +lunr.Pipeline.prototype.before = function (existingFn, newFn) { + lunr.Pipeline.warnIfFunctionNotRegistered(newFn) + + var pos = this._stack.indexOf(existingFn) + if (pos == -1) { + throw new Error('Cannot find existingFn') + } + + this._stack.splice(pos, 0, newFn) +} + +/** + * Removes a function from the pipeline. + * + * @param {lunr.PipelineFunction} fn The function to remove from the pipeline. + */ +lunr.Pipeline.prototype.remove = function (fn) { + var pos = this._stack.indexOf(fn) + if (pos == -1) { + return + } + + this._stack.splice(pos, 1) +} + +/** + * Runs the current list of functions that make up the pipeline against the + * passed tokens. + * + * @param {Array} tokens The tokens to run through the pipeline. + * @returns {Array} + */ +lunr.Pipeline.prototype.run = function (tokens) { + var stackLength = this._stack.length + + for (var i = 0; i < stackLength; i++) { + var fn = this._stack[i] + var memo = [] + + for (var j = 0; j < tokens.length; j++) { + var result = fn(tokens[j], j, tokens) + + if (result === null || result === void 0 || result === '') continue + + if (Array.isArray(result)) { + for (var k = 0; k < result.length; k++) { + memo.push(result[k]) + } + } else { + memo.push(result) + } + } + + tokens = memo + } + + return tokens +} + +/** + * Convenience method for passing a string through a pipeline and getting + * strings out. This method takes care of wrapping the passed string in a + * token and mapping the resulting tokens back to strings. + * + * @param {string} str - The string to pass through the pipeline. + * @param {?object} metadata - Optional metadata to associate with the token + * passed to the pipeline. + * @returns {string[]} + */ +lunr.Pipeline.prototype.runString = function (str, metadata) { + var token = new lunr.Token (str, metadata) + + return this.run([token]).map(function (t) { + return t.toString() + }) +} + +/** + * Resets the pipeline by removing any existing processors. 
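`lunr.Pipeline.prototype.run` above defines the pipeline contract: each function sees every token in turn; returning `null`, `undefined`, or `''` drops the token, and returning an array splices all of its elements into the stream. A standalone sketch of that loop over plain functions (no registration or serialisation concerns):

```javascript
// Sketch of lunr.Pipeline.prototype.run's contract: apply each function
// to every token, drop null/undefined/'' results, flatten array results.
function runPipeline(fns, tokens) {
  for (var i = 0; i < fns.length; i++) {
    var fn = fns[i]
    var memo = []

    for (var j = 0; j < tokens.length; j++) {
      var result = fn(tokens[j], j, tokens)

      // A falsy sentinel removes the token from the stream entirely.
      if (result === null || result === void 0 || result === '') continue

      if (Array.isArray(result)) {
        for (var k = 0; k < result.length; k++) memo.push(result[k])
      } else {
        memo.push(result)
      }
    }

    tokens = memo // output of one stage feeds the next
  }

  return tokens
}
```

A filter stage simply returns `undefined` for tokens it wants gone; a splitter stage returns an array and each element continues downstream.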
+ * + */ +lunr.Pipeline.prototype.reset = function () { + this._stack = [] +} + +/** + * Returns a representation of the pipeline ready for serialisation. + * + * Logs a warning if the function has not been registered. + * + * @returns {Array} + */ +lunr.Pipeline.prototype.toJSON = function () { + return this._stack.map(function (fn) { + lunr.Pipeline.warnIfFunctionNotRegistered(fn) + + return fn.label + }) +} +/*! + * lunr.Vector + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * A vector is used to construct the vector space of documents and queries. These + * vectors support operations to determine the similarity between two documents or + * a document and a query. + * + * Normally no parameters are required for initializing a vector, but in the case of + * loading a previously dumped vector the raw elements can be provided to the constructor. + * + * For performance reasons vectors are implemented with a flat array, where an elements + * index is immediately followed by its value. E.g. [index, value, index, value]. This + * allows the underlying array to be as sparse as possible and still offer decent + * performance when being used for vector calculations. + * + * @constructor + * @param {Number[]} [elements] - The flat list of element index and element value pairs. + */ +lunr.Vector = function (elements) { + this._magnitude = 0 + this.elements = elements || [] +} + + +/** + * Calculates the position within the vector to insert a given index. + * + * This is used internally by insert and upsert. If there are duplicate indexes then + * the position is returned as if the value for that index were to be updated, but it + * is the callers responsibility to check whether there is a duplicate at that index + * + * @param {Number} insertIdx - The index at which the element should be inserted. 
+ * @returns {Number} + */ +lunr.Vector.prototype.positionForIndex = function (index) { + // For an empty vector the tuple can be inserted at the beginning + if (this.elements.length == 0) { + return 0 + } + + var start = 0, + end = this.elements.length / 2, + sliceLength = end - start, + pivotPoint = Math.floor(sliceLength / 2), + pivotIndex = this.elements[pivotPoint * 2] + + while (sliceLength > 1) { + if (pivotIndex < index) { + start = pivotPoint + } + + if (pivotIndex > index) { + end = pivotPoint + } + + if (pivotIndex == index) { + break + } + + sliceLength = end - start + pivotPoint = start + Math.floor(sliceLength / 2) + pivotIndex = this.elements[pivotPoint * 2] + } + + if (pivotIndex == index) { + return pivotPoint * 2 + } + + if (pivotIndex > index) { + return pivotPoint * 2 + } + + if (pivotIndex < index) { + return (pivotPoint + 1) * 2 + } +} + +/** + * Inserts an element at an index within the vector. + * + * Does not allow duplicates, will throw an error if there is already an entry + * for this index. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. + */ +lunr.Vector.prototype.insert = function (insertIdx, val) { + this.upsert(insertIdx, val, function () { + throw "duplicate index" + }) +} + +/** + * Inserts or updates an existing index within the vector. + * + * @param {Number} insertIdx - The index at which the element should be inserted. + * @param {Number} val - The value to be inserted into the vector. 
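`positionForIndex` above binary-searches the flat `[index, value, index, value, ...]` array for the slot where a given index belongs. A self-contained sketch of the same search over a plain array (the final three comparisons collapsed into an equivalent `>=` test):

```javascript
// Sketch of lunr.Vector.prototype.positionForIndex: binary search over
// flat [index, value, ...] pairs for the slot where `index` belongs.
function positionForIndex(elements, index) {
  if (elements.length === 0) return 0 // empty vector: insert at the front

  var start = 0,
      end = elements.length / 2,   // number of (index, value) pairs
      sliceLength = end - start,
      pivotPoint = Math.floor(sliceLength / 2),
      pivotIndex = elements[pivotPoint * 2]

  while (sliceLength > 1) {
    if (pivotIndex < index) start = pivotPoint
    if (pivotIndex > index) end = pivotPoint
    if (pivotIndex === index) break

    sliceLength = end - start
    pivotPoint = start + Math.floor(sliceLength / 2)
    pivotIndex = elements[pivotPoint * 2]
  }

  // Equal or greater: this pair's slot; smaller: the slot just after it.
  if (pivotIndex >= index) return pivotPoint * 2
  return (pivotPoint + 1) * 2
}
```

The returned position is a raw offset into the flat array, which is why callers multiply pair positions by two.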
+ * @param {function} fn - A function that is called for updates, the existing value and the + * requested value are passed as arguments + */ +lunr.Vector.prototype.upsert = function (insertIdx, val, fn) { + this._magnitude = 0 + var position = this.positionForIndex(insertIdx) + + if (this.elements[position] == insertIdx) { + this.elements[position + 1] = fn(this.elements[position + 1], val) + } else { + this.elements.splice(position, 0, insertIdx, val) + } +} + +/** + * Calculates the magnitude of this vector. + * + * @returns {Number} + */ +lunr.Vector.prototype.magnitude = function () { + if (this._magnitude) return this._magnitude + + var sumOfSquares = 0, + elementsLength = this.elements.length + + for (var i = 1; i < elementsLength; i += 2) { + var val = this.elements[i] + sumOfSquares += val * val + } + + return this._magnitude = Math.sqrt(sumOfSquares) +} + +/** + * Calculates the dot product of this vector and another vector. + * + * @param {lunr.Vector} otherVector - The vector to compute the dot product with. + * @returns {Number} + */ +lunr.Vector.prototype.dot = function (otherVector) { + var dotProduct = 0, + a = this.elements, b = otherVector.elements, + aLen = a.length, bLen = b.length, + aVal = 0, bVal = 0, + i = 0, j = 0 + + while (i < aLen && j < bLen) { + aVal = a[i], bVal = b[j] + if (aVal < bVal) { + i += 2 + } else if (aVal > bVal) { + j += 2 + } else if (aVal == bVal) { + dotProduct += a[i + 1] * b[j + 1] + i += 2 + j += 2 + } + } + + return dotProduct +} + +/** + * Calculates the similarity between this vector and another vector. + * + * @param {lunr.Vector} otherVector - The other vector to calculate the + * similarity with. + * @returns {Number} + */ +lunr.Vector.prototype.similarity = function (otherVector) { + return this.dot(otherVector) / this.magnitude() || 0 +} + +/** + * Converts the vector to an array of the elements within the vector. 
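The flat `[index, value, index, value, ...]` layout above lets `dot` walk two sorted sparse vectors with a pair of pointers, multiplying values only where the indices coincide. A standalone sketch of that walk plus the magnitude calculation:

```javascript
// Sketch of lunr.Vector's sparse dot product: two pointers advance
// through sorted [index, value, ...] pairs, multiplying on index matches.
function dot(a, b) {
  var product = 0, i = 0, j = 0

  while (i < a.length && j < b.length) {
    if (a[i] < b[j]) {
      i += 2                          // advance the vector with the smaller index
    } else if (a[i] > b[j]) {
      j += 2
    } else {
      product += a[i + 1] * b[j + 1]  // indices match: multiply the values
      i += 2
      j += 2
    }
  }

  return product
}

// Magnitude: square root of the sum of squared values (odd offsets).
function magnitude(a) {
  var sum = 0
  for (var i = 1; i < a.length; i += 2) sum += a[i] * a[i]
  return Math.sqrt(sum)
}
```

Note that lunr's `similarity` divides the dot product by this vector's magnitude only, not by both magnitudes as full cosine similarity would.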
+ * + * @returns {Number[]} + */ +lunr.Vector.prototype.toArray = function () { + var output = new Array (this.elements.length / 2) + + for (var i = 1, j = 0; i < this.elements.length; i += 2, j++) { + output[j] = this.elements[i] + } + + return output +} + +/** + * A JSON serializable representation of the vector. + * + * @returns {Number[]} + */ +lunr.Vector.prototype.toJSON = function () { + return this.elements +} +/* eslint-disable */ +/*! + * lunr.stemmer + * Copyright (C) 2019 Oliver Nightingale + * Includes code from - http://tartarus.org/~martin/PorterStemmer/js.txt + */ + +/** + * lunr.stemmer is an english language stemmer, this is a JavaScript + * implementation of the PorterStemmer taken from http://tartarus.org/~martin + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token - The string to stem + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + * @function + */ +lunr.stemmer = (function(){ + var step2list = { + "ational" : "ate", + "tional" : "tion", + "enci" : "ence", + "anci" : "ance", + "izer" : "ize", + "bli" : "ble", + "alli" : "al", + "entli" : "ent", + "eli" : "e", + "ousli" : "ous", + "ization" : "ize", + "ation" : "ate", + "ator" : "ate", + "alism" : "al", + "iveness" : "ive", + "fulness" : "ful", + "ousness" : "ous", + "aliti" : "al", + "iviti" : "ive", + "biliti" : "ble", + "logi" : "log" + }, + + step3list = { + "icate" : "ic", + "ative" : "", + "alize" : "al", + "iciti" : "ic", + "ical" : "ic", + "ful" : "", + "ness" : "" + }, + + c = "[^aeiou]", // consonant + v = "[aeiouy]", // vowel + C = c + "[^aeiouy]*", // consonant sequence + V = v + "[aeiou]*", // vowel sequence + + mgr0 = "^(" + C + ")?" + V + C, // [C]VC... is m>0 + meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$", // [C]VC[V] is m=1 + mgr1 = "^(" + C + ")?" + V + C + V + C, // [C]VCVC... is m>1 + s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + var re_mgr0 = new RegExp(mgr0); + var re_mgr1 = new RegExp(mgr1); + var re_meq1 = new RegExp(meq1); + var re_s_v = new RegExp(s_v); + + var re_1a = /^(.+?)(ss|i)es$/; + var re2_1a = /^(.+?)([^s])s$/; + var re_1b = /^(.+?)eed$/; + var re2_1b = /^(.+?)(ed|ing)$/; + var re_1b_2 = /.$/; + var re2_1b_2 = /(at|bl|iz)$/; + var re3_1b_2 = new RegExp("([^aeiouylsz])\\1$"); + var re4_1b_2 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var re_1c = /^(.+?[^aeiou])y$/; + var re_2 = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + + var re_3 = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + + var re_4 = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + var re2_4 = /^(.+?)(s|t)(ion)$/; + + var re_5 = /^(.+?)e$/; + var re_5_1 = /ll$/; + var re3_5 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + + var porterStemmer = function porterStemmer(w) { + var stem, + suffix, + firstch, + re, + re2, + re3, + re4; + + if (w.length < 3) { return w; } + + firstch = w.substr(0,1); + if (firstch == "y") { + w = firstch.toUpperCase() + w.substr(1); + } + + // Step 1a + re = re_1a + re2 = re2_1a; + + if (re.test(w)) { w = w.replace(re,"$1$2"); } + else if (re2.test(w)) { w = w.replace(re2,"$1$2"); } + + // Step 1b + re = re_1b; + re2 = re2_1b; + if (re.test(w)) { + var fp = re.exec(w); + re = re_mgr0; + if (re.test(fp[1])) { + re = re_1b_2; + w = w.replace(re,""); + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = re_s_v; + if (re2.test(stem)) { + w = stem; + re2 = re2_1b_2; + re3 = re3_1b_2; + re4 = re4_1b_2; + if (re2.test(w)) { w = w + "e"; } + else if (re3.test(w)) { re = re_1b_2; w = w.replace(re,""); } + else if (re4.test(w)) { w = w + "e"; } + } + } + + // Step 1c - replace suffix y or Y by i if preceded by a non-vowel which is not the first letter of the word (so cry -> cri, by -> by, say -> say) + re = 
re_1c; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + w = stem + "i"; + } + + // Step 2 + re = re_2; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step2list[suffix]; + } + } + + // Step 3 + re = re_3; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = re_mgr0; + if (re.test(stem)) { + w = stem + step3list[suffix]; + } + } + + // Step 4 + re = re_4; + re2 = re2_4; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + if (re.test(stem)) { + w = stem; + } + } else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = re_mgr1; + if (re2.test(stem)) { + w = stem; + } + } + + // Step 5 + re = re_5; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = re_mgr1; + re2 = re_meq1; + re3 = re3_5; + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) { + w = stem; + } + } + + re = re_5_1; + re2 = re_mgr1; + if (re.test(w) && re2.test(w)) { + re = re_1b_2; + w = w.replace(re,""); + } + + // and turn initial Y back to y + + if (firstch == "y") { + w = firstch.toLowerCase() + w.substr(1); + } + + return w; + }; + + return function (token) { + return token.update(porterStemmer); + } +})(); + +lunr.Pipeline.registerFunction(lunr.stemmer, 'stemmer') +/*! + * lunr.stopWordFilter + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * lunr.generateStopWordFilter builds a stopWordFilter function from the provided + * list of stop words. + * + * The built in lunr.stopWordFilter is built using this generator and can be used + * to generate custom stopWordFilters for applications or non English languages. 
+ * + * @function + * @param {Array} token The token to pass through the filter + * @returns {lunr.PipelineFunction} + * @see lunr.Pipeline + * @see lunr.stopWordFilter + */ +lunr.generateStopWordFilter = function (stopWords) { + var words = stopWords.reduce(function (memo, stopWord) { + memo[stopWord] = stopWord + return memo + }, {}) + + return function (token) { + if (token && words[token.toString()] !== token.toString()) return token + } +} + +/** + * lunr.stopWordFilter is an English language stop word list filter, any words + * contained in the list will not be passed through the filter. + * + * This is intended to be used in the Pipeline. If the token does not pass the + * filter then undefined will be returned. + * + * @function + * @implements {lunr.PipelineFunction} + * @params {lunr.Token} token - A token to check for being a stop word. + * @returns {lunr.Token} + * @see {@link lunr.Pipeline} + */ +lunr.stopWordFilter = lunr.generateStopWordFilter([ + 'a', + 'able', + 'about', + 'across', + 'after', + 'all', + 'almost', + 'also', + 'am', + 'among', + 'an', + 'and', + 'any', + 'are', + 'as', + 'at', + 'be', + 'because', + 'been', + 'but', + 'by', + 'can', + 'cannot', + 'could', + 'dear', + 'did', + 'do', + 'does', + 'either', + 'else', + 'ever', + 'every', + 'for', + 'from', + 'get', + 'got', + 'had', + 'has', + 'have', + 'he', + 'her', + 'hers', + 'him', + 'his', + 'how', + 'however', + 'i', + 'if', + 'in', + 'into', + 'is', + 'it', + 'its', + 'just', + 'least', + 'let', + 'like', + 'likely', + 'may', + 'me', + 'might', + 'most', + 'must', + 'my', + 'neither', + 'no', + 'nor', + 'not', + 'of', + 'off', + 'often', + 'on', + 'only', + 'or', + 'other', + 'our', + 'own', + 'rather', + 'said', + 'say', + 'says', + 'she', + 'should', + 'since', + 'so', + 'some', + 'than', + 'that', + 'the', + 'their', + 'them', + 'then', + 'there', + 'these', + 'they', + 'this', + 'tis', + 'to', + 'too', + 'twas', + 'us', + 'wants', + 'was', + 'we', + 'were', + 'what', + 
'when', + 'where', + 'which', + 'while', + 'who', + 'whom', + 'why', + 'will', + 'with', + 'would', + 'yet', + 'you', + 'your' +]) + +lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'stopWordFilter') +/*! + * lunr.trimmer + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * lunr.trimmer is a pipeline function for trimming non word + * characters from the beginning and end of tokens before they + * enter the index. + * + * This implementation may not work correctly for non latin + * characters and should either be removed or adapted for use + * with languages with non-latin characters. + * + * @static + * @implements {lunr.PipelineFunction} + * @param {lunr.Token} token The token to pass through the filter + * @returns {lunr.Token} + * @see lunr.Pipeline + */ +lunr.trimmer = function (token) { + return token.update(function (s) { + return s.replace(/^\W+/, '').replace(/\W+$/, '') + }) +} + +lunr.Pipeline.registerFunction(lunr.trimmer, 'trimmer') +/*! + * lunr.TokenSet + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * A token set is used to store the unique list of all tokens + * within an index. Token sets are also used to represent an + * incoming query to the index, this query token set and index + * token set are then intersected to find which tokens to look + * up in the inverted index. + * + * A token set can hold multiple tokens, as in the case of the + * index token set, or it can hold a single token as in the + * case of a simple query token set. + * + * Additionally token sets are used to perform wildcard matching. + * Leading, contained and trailing wildcards are supported, and + * from this edit distance matching can also be provided. + * + * Token sets are implemented as a minimal finite state automata, + * where both common prefixes and suffixes are shared between tokens. + * This helps to reduce the space used for storing the token set. 
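`lunr.generateStopWordFilter` above turns a word list into a pipeline function that silently drops stop words (returning `undefined` removes a token from the pipeline, per the contract described earlier). A minimal sketch of the generator over plain strings (`makeStopWordFilter` is an illustrative name):

```javascript
// Sketch of lunr.generateStopWordFilter: build a lookup table from the
// word list, then return a pipeline-style function that passes a token
// through only when it is not a stop word.
function makeStopWordFilter(stopWords) {
  var words = stopWords.reduce(function (memo, w) {
    memo[w] = w
    return memo
  }, {})

  return function (token) {
    // Returning undefined drops the token; anything else passes through.
    if (token && words[String(token)] !== String(token)) return token
  }
}
```

This is how custom or non-English stop word lists plug into the same pipeline machinery as the built-in English filter.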
+ * + * @constructor + */ +lunr.TokenSet = function () { + this.final = false + this.edges = {} + this.id = lunr.TokenSet._nextId + lunr.TokenSet._nextId += 1 +} + +/** + * Keeps track of the next, auto increment, identifier to assign + * to a new tokenSet. + * + * TokenSets require a unique identifier to be correctly minimised. + * + * @private + */ +lunr.TokenSet._nextId = 1 + +/** + * Creates a TokenSet instance from the given sorted array of words. + * + * @param {String[]} arr - A sorted array of strings to create the set from. + * @returns {lunr.TokenSet} + * @throws Will throw an error if the input array is not sorted. + */ +lunr.TokenSet.fromArray = function (arr) { + var builder = new lunr.TokenSet.Builder + + for (var i = 0, len = arr.length; i < len; i++) { + builder.insert(arr[i]) + } + + builder.finish() + return builder.root +} + +/** + * Creates a token set from a query clause. + * + * @private + * @param {Object} clause - A single clause from lunr.Query. + * @param {string} clause.term - The query clause term. + * @param {number} [clause.editDistance] - The optional edit distance for the term. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromClause = function (clause) { + if ('editDistance' in clause) { + return lunr.TokenSet.fromFuzzyString(clause.term, clause.editDistance) + } else { + return lunr.TokenSet.fromString(clause.term) + } +} + +/** + * Creates a token set representing a single string with a specified + * edit distance. + * + * Insertions, deletions, substitutions and transpositions are each + * treated as an edit distance of 1. + * + * Increasing the allowed edit distance will have a dramatic impact + * on the performance of both creating and intersecting these TokenSets. + * It is advised to keep the edit distance less than 3. + * + * @param {string} str - The string to create the token set from. + * @param {number} editDistance - The allowed edit distance to match. 
+ * @returns {lunr.Vector} + */ +lunr.TokenSet.fromFuzzyString = function (str, editDistance) { + var root = new lunr.TokenSet + + var stack = [{ + node: root, + editsRemaining: editDistance, + str: str + }] + + while (stack.length) { + var frame = stack.pop() + + // no edit + if (frame.str.length > 0) { + var char = frame.str.charAt(0), + noEditNode + + if (char in frame.node.edges) { + noEditNode = frame.node.edges[char] + } else { + noEditNode = new lunr.TokenSet + frame.node.edges[char] = noEditNode + } + + if (frame.str.length == 1) { + noEditNode.final = true + } + + stack.push({ + node: noEditNode, + editsRemaining: frame.editsRemaining, + str: frame.str.slice(1) + }) + } + + if (frame.editsRemaining == 0) { + continue + } + + // insertion + if ("*" in frame.node.edges) { + var insertionNode = frame.node.edges["*"] + } else { + var insertionNode = new lunr.TokenSet + frame.node.edges["*"] = insertionNode + } + + if (frame.str.length == 0) { + insertionNode.final = true + } + + stack.push({ + node: insertionNode, + editsRemaining: frame.editsRemaining - 1, + str: frame.str + }) + + // deletion + // can only do a deletion if we have enough edits remaining + // and if there are characters left to delete in the string + if (frame.str.length > 1) { + stack.push({ + node: frame.node, + editsRemaining: frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // deletion + // just removing the last character from the str + if (frame.str.length == 1) { + frame.node.final = true + } + + // substitution + // can only do a substitution if we have enough edits remaining + // and if there are characters left to substitute + if (frame.str.length >= 1) { + if ("*" in frame.node.edges) { + var substitutionNode = frame.node.edges["*"] + } else { + var substitutionNode = new lunr.TokenSet + frame.node.edges["*"] = substitutionNode + } + + if (frame.str.length == 1) { + substitutionNode.final = true + } + + stack.push({ + node: substitutionNode, + editsRemaining: 
frame.editsRemaining - 1, + str: frame.str.slice(1) + }) + } + + // transposition + // can only do a transposition if there are edits remaining + // and there are enough characters to transpose + if (frame.str.length > 1) { + var charA = frame.str.charAt(0), + charB = frame.str.charAt(1), + transposeNode + + if (charB in frame.node.edges) { + transposeNode = frame.node.edges[charB] + } else { + transposeNode = new lunr.TokenSet + frame.node.edges[charB] = transposeNode + } + + if (frame.str.length == 1) { + transposeNode.final = true + } + + stack.push({ + node: transposeNode, + editsRemaining: frame.editsRemaining - 1, + str: charA + frame.str.slice(2) + }) + } + } + + return root +} + +/** + * Creates a TokenSet from a string. + * + * The string may contain one or more wildcard characters (*) + * that will allow wildcard matching when intersecting with + * another TokenSet. + * + * @param {string} str - The string to create a TokenSet from. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.fromString = function (str) { + var node = new lunr.TokenSet, + root = node + + /* + * Iterates through all characters within the passed string + * appending a node for each character. + * + * When a wildcard character is found then a self + * referencing edge is introduced to continually match + * any number of any characters. + */ + for (var i = 0, len = str.length; i < len; i++) { + var char = str[i], + final = (i == len - 1) + + if (char == "*") { + node.edges[char] = node + node.final = final + + } else { + var next = new lunr.TokenSet + next.final = final + + node.edges[char] = next + node = next + } + } + + return root +} + +/** + * Converts this TokenSet into an array of strings + * contained within the TokenSet. + * + * This is not intended to be used on a TokenSet that + * contains wildcards, in these cases the results are + * undefined and are likely to cause an infinite loop. 
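The self-referencing wildcard edge that `fromString` builds can be illustrated outside of lunr with a small standalone sketch. This is not lunr's internal API — `fromString` and `accepts` below are hypothetical, simplified re-implementations showing how a `*` node's self-loop matches any number of characters:

```javascript
// Minimal sketch of a wildcard token set: each node has `edges`
// (char -> node) and a `final` flag. A "*" edge points back at its
// own node, so it can keep consuming characters.
function fromString (str) {
  var root = { edges: {}, final: false }
  var node = root
  for (var i = 0; i < str.length; i++) {
    var char = str[i]
    var final = i === str.length - 1
    if (char === '*') {
      node.edges[char] = node // self-referencing edge
      node.final = final
    } else {
      var next = { edges: {}, final: final }
      node.edges[char] = next
      node = next
    }
  }
  return root
}

// Walks the graph: a character may be consumed by a direct edge or
// by a wildcard edge. Each recursive call consumes one character,
// so the walk always terminates.
function accepts (node, word) {
  if (word.length === 0) return node.final
  var char = word[0]
  if (char in node.edges && accepts(node.edges[char], word.slice(1))) return true
  if ('*' in node.edges && accepts(node.edges['*'], word.slice(1))) return true
  return false
}
```

Note that a naive enumeration of such a graph would follow the self-loop forever, which is exactly why `toArray` below warns against use on wildcard sets.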
+ * + * @returns {string[]} + */ +lunr.TokenSet.prototype.toArray = function () { + var words = [] + + var stack = [{ + prefix: "", + node: this + }] + + while (stack.length) { + var frame = stack.pop(), + edges = Object.keys(frame.node.edges), + len = edges.length + + if (frame.node.final) { + /* In Safari, at this point the prefix is sometimes corrupted, see: + * https://github.com/olivernn/lunr.js/issues/279 Calling any + * String.prototype method forces Safari to "cast" this string to what + * it's supposed to be, fixing the bug. */ + frame.prefix.charAt(0) + words.push(frame.prefix) + } + + for (var i = 0; i < len; i++) { + var edge = edges[i] + + stack.push({ + prefix: frame.prefix.concat(edge), + node: frame.node.edges[edge] + }) + } + } + + return words +} + +/** + * Generates a string representation of a TokenSet. + * + * This is intended to allow TokenSets to be used as keys + * in objects, largely to aid the construction and minimisation + * of a TokenSet. As such it is not designed to be a human + * friendly representation of the TokenSet. + * + * @returns {string} + */ +lunr.TokenSet.prototype.toString = function () { + // NOTE: Using Object.keys here as this.edges is very likely + // to enter 'hash-mode' with many keys being added + // + // avoiding a for-in loop here as it leads to the function + // being de-optimised (at least in V8). From some simple + // benchmarks the performance is comparable, but allowing + // V8 to optimize may mean easy performance wins in the future. + + if (this._str) { + return this._str + } + + var str = this.final ? '1' : '0', + labels = Object.keys(this.edges).sort(), + len = labels.length + + for (var i = 0; i < len; i++) { + var label = labels[i], + node = this.edges[label] + + str = str + label + node.id + } + + return str +} + +/** + * Returns a new TokenSet that is the intersection of + * this TokenSet and the passed TokenSet. 
+ * + * This intersection will take into account any wildcards + * contained within the TokenSet. + * + * @param {lunr.TokenSet} b - An other TokenSet to intersect with. + * @returns {lunr.TokenSet} + */ +lunr.TokenSet.prototype.intersect = function (b) { + var output = new lunr.TokenSet, + frame = undefined + + var stack = [{ + qNode: b, + output: output, + node: this + }] + + while (stack.length) { + frame = stack.pop() + + // NOTE: As with the #toString method, we are using + // Object.keys and a for loop instead of a for-in loop + // as both of these objects enter 'hash' mode, causing + // the function to be de-optimised in V8 + var qEdges = Object.keys(frame.qNode.edges), + qLen = qEdges.length, + nEdges = Object.keys(frame.node.edges), + nLen = nEdges.length + + for (var q = 0; q < qLen; q++) { + var qEdge = qEdges[q] + + for (var n = 0; n < nLen; n++) { + var nEdge = nEdges[n] + + if (nEdge == qEdge || qEdge == '*') { + var node = frame.node.edges[nEdge], + qNode = frame.qNode.edges[qEdge], + final = node.final && qNode.final, + next = undefined + + if (nEdge in frame.output.edges) { + // an edge already exists for this character + // no need to create a new node, just set the finality + // bit unless this node is already final + next = frame.output.edges[nEdge] + next.final = next.final || final + + } else { + // no edge exists yet, must create one + // set the finality bit and insert it + // into the output + next = new lunr.TokenSet + next.final = final + frame.output.edges[nEdge] = next + } + + stack.push({ + qNode: qNode, + output: next, + node: node + }) + } + } + } + } + + return output +} +lunr.TokenSet.Builder = function () { + this.previousWord = "" + this.root = new lunr.TokenSet + this.uncheckedNodes = [] + this.minimizedNodes = {} +} + +lunr.TokenSet.Builder.prototype.insert = function (word) { + var node, + commonPrefix = 0 + + if (word < this.previousWord) { + throw new Error ("Out of order word insertion") + } + + for (var i = 0; i < 
word.length && i < this.previousWord.length; i++) { + if (word[i] != this.previousWord[i]) break + commonPrefix++ + } + + this.minimize(commonPrefix) + + if (this.uncheckedNodes.length == 0) { + node = this.root + } else { + node = this.uncheckedNodes[this.uncheckedNodes.length - 1].child + } + + for (var i = commonPrefix; i < word.length; i++) { + var nextNode = new lunr.TokenSet, + char = word[i] + + node.edges[char] = nextNode + + this.uncheckedNodes.push({ + parent: node, + char: char, + child: nextNode + }) + + node = nextNode + } + + node.final = true + this.previousWord = word +} + +lunr.TokenSet.Builder.prototype.finish = function () { + this.minimize(0) +} + +lunr.TokenSet.Builder.prototype.minimize = function (downTo) { + for (var i = this.uncheckedNodes.length - 1; i >= downTo; i--) { + var node = this.uncheckedNodes[i], + childKey = node.child.toString() + + if (childKey in this.minimizedNodes) { + node.parent.edges[node.char] = this.minimizedNodes[childKey] + } else { + // Cache the key for this node since + // we know it can't change anymore + node.child._str = childKey + + this.minimizedNodes[childKey] = node.child + } + + this.uncheckedNodes.pop() + } +} +/*! + * lunr.Index + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * An index contains the built index of all documents and provides a query interface + * to the index. + * + * Usually instances of lunr.Index will not be created using this constructor, instead + * lunr.Builder should be used to construct new indexes, or lunr.Index.load should be + * used to load previously built and serialized indexes. + * + * @constructor + * @param {Object} attrs - The attributes of the built search index. + * @param {Object} attrs.invertedIndex - An index of term/field to document reference. + * @param {Object} attrs.fieldVectors - Field vectors + * @param {lunr.TokenSet} attrs.tokenSet - A set of all corpus tokens. + * @param {string[]} attrs.fields - The names of indexed document fields.
+ * @param {lunr.Pipeline} attrs.pipeline - The pipeline to use for search terms. + */ +lunr.Index = function (attrs) { + this.invertedIndex = attrs.invertedIndex + this.fieldVectors = attrs.fieldVectors + this.tokenSet = attrs.tokenSet + this.fields = attrs.fields + this.pipeline = attrs.pipeline +} + +/** + * A result contains details of a document matching a search query. + * @typedef {Object} lunr.Index~Result + * @property {string} ref - The reference of the document this result represents. + * @property {number} score - A number between 0 and 1 representing how similar this document is to the query. + * @property {lunr.MatchData} matchData - Contains metadata about this match including which term(s) caused the match. + */ + +/** + * Although lunr provides the ability to create queries using lunr.Query, it also provides a simple + * query language which itself is parsed into an instance of lunr.Query. + * + * For programmatically building queries it is advised to directly use lunr.Query, the query language + * is best used for human entered text rather than program generated text. + * + * At its simplest queries can just be a single term, e.g. `hello`, multiple terms are also supported + * and will be combined with OR, e.g `hello world` will match documents that contain either 'hello' + * or 'world', though those that contain both will rank higher in the results. + * + * Wildcards can be included in terms to match one or more unspecified characters, these wildcards can + * be inserted anywhere within the term, and more than one wildcard can exist in a single term. Adding + * wildcards will increase the number of documents that will be found but can also have a negative + * impact on query performance, especially with wildcards at the beginning of a term. + * + * Terms can be restricted to specific fields, e.g. `title:hello`, only documents with the term + * hello in the title field will match this query. 
Using a field not present in the index will lead + * to an error being thrown. + * + * Modifiers can also be added to terms, lunr supports edit distance and boost modifiers on terms. A term + * boost will make documents matching that term score higher, e.g. `foo^5`. Edit distance is also supported + * to provide fuzzy matching, e.g. 'hello~2' will match documents with hello with an edit distance of 2. + * Avoid large values for edit distance to improve query performance. + * + * Each term also supports a presence modifier. By default a term's presence in a document is optional, however + * this can be changed to either required or prohibited. For a term's presence to be required in a document the + * term should be prefixed with a '+', e.g. `+foo bar` is a search for documents that must contain 'foo' and + * optionally contain 'bar'. Conversely a leading '-' sets the term's presence to prohibited, i.e. it must not + * appear in a document, e.g. `-foo bar` is a search for documents that do not contain 'foo' but may contain 'bar'. + * + * To escape special characters the backslash character '\' can be used, this allows searches to include + * characters that would normally be considered modifiers, e.g. `foo\~2` will search for a term "foo~2" instead + * of attempting to apply a boost of 2 to the search term "foo". + * + * @typedef {string} lunr.Index~QueryString + * @example Simple single term query + * hello + * @example Multiple term query + * hello world + * @example term scoped to a field + * title:hello + * @example term with a boost of 10 + * hello^10 + * @example term with an edit distance of 2 + * hello~2 + * @example terms with presence modifiers + * -foo +bar baz + */ + +/** + * Performs a search against the index using lunr query syntax. + * + * Results will be returned sorted by their score, the most relevant results + * will be returned first.
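The query-string features described above — field scoping, boosts, edit distance, and presence modifiers — can be illustrated with a simplified clause parser. This is not lunr's actual `lunr.QueryParser` (which is a stateful lexer/parser with escaping and field validation); it is a hedged sketch that only splits on whitespace and picks apart the modifiers:

```javascript
// Simplified sketch of lunr-style clause parsing: handles the
// presence prefixes (+/-), a field: scope, a ^boost suffix and a
// ~editDistance suffix. Escaping and error handling are omitted.
function parseClause (str) {
  var clause = { presence: 'OPTIONAL', boost: 1, editDistance: 0 }

  if (str[0] === '+') { clause.presence = 'REQUIRED'; str = str.slice(1) }
  else if (str[0] === '-') { clause.presence = 'PROHIBITED'; str = str.slice(1) }

  var fieldMatch = str.match(/^(\w+):/)
  if (fieldMatch) {
    clause.field = fieldMatch[1]
    str = str.slice(fieldMatch[0].length)
  }

  var boostMatch = str.match(/\^(\d+)/)
  if (boostMatch) {
    clause.boost = parseInt(boostMatch[1], 10)
    str = str.replace(boostMatch[0], '')
  }

  var editMatch = str.match(/~(\d+)/)
  if (editMatch) {
    clause.editDistance = parseInt(editMatch[1], 10)
    str = str.replace(editMatch[0], '')
  }

  clause.term = str
  return clause
}

function parseQuery (queryString) {
  return queryString.split(/\s+/).map(parseClause)
}
```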
For details on how the score is calculated, please see + * the {@link https://lunrjs.com/guides/searching.html#scoring|guide}. + * + * For more programmatic querying use lunr.Index#query. + * + * @param {lunr.Index~QueryString} queryString - A string containing a lunr query. + * @throws {lunr.QueryParseError} If the passed query string cannot be parsed. + * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.search = function (queryString) { + return this.query(function (query) { + var parser = new lunr.QueryParser(queryString, query) + parser.parse() + }) +} + +/** + * A query builder callback provides a query object to be used to express + * the query to perform on the index. + * + * @callback lunr.Index~queryBuilder + * @param {lunr.Query} query - The query object to build up. + * @this lunr.Query + */ + +/** + * Performs a query against the index using the yielded lunr.Query object. + * + * If performing programmatic queries against the index, this method is preferred + * over lunr.Index#search so as to avoid the additional query parsing overhead. + * + * A query object is yielded to the supplied function which should be used to + * express the query to be run against the index. + * + * Note that although this function takes a callback parameter it is _not_ an + * asynchronous operation, the callback is just yielded a query object to be + * customized. + * + * @param {lunr.Index~queryBuilder} fn - A function that is used to build the query. 
+ * @returns {lunr.Index~Result[]} + */ +lunr.Index.prototype.query = function (fn) { + // for each query clause + // * process terms + // * expand terms from token set + // * find matching documents and metadata + // * get document vectors + // * score documents + + var query = new lunr.Query(this.fields), + matchingFields = Object.create(null), + queryVectors = Object.create(null), + termFieldCache = Object.create(null), + requiredMatches = Object.create(null), + prohibitedMatches = Object.create(null) + + /* + * To support field level boosts a query vector is created per + * field. An empty vector is eagerly created to support negated + * queries. + */ + for (var i = 0; i < this.fields.length; i++) { + queryVectors[this.fields[i]] = new lunr.Vector + } + + fn.call(query, query) + + for (var i = 0; i < query.clauses.length; i++) { + /* + * Unless the pipeline has been disabled for this term, which is + * the case for terms with wildcards, we need to pass the clause + * term through the search pipeline. A pipeline returns an array + * of processed terms. Pipeline functions may expand the passed + * term, which means we may end up performing multiple index lookups + * for a single query term. + */ + var clause = query.clauses[i], + terms = null, + clauseMatches = lunr.Set.complete + + if (clause.usePipeline) { + terms = this.pipeline.runString(clause.term, { + fields: clause.fields + }) + } else { + terms = [clause.term] + } + + for (var m = 0; m < terms.length; m++) { + var term = terms[m] + + /* + * Each term returned from the pipeline needs to use the same query + * clause object, e.g. the same boost and or edit distance. The + * simplest way to do this is to re-use the clause object but mutate + * its term property. 
+ */ + clause.term = term + + /* + * From the term in the clause we create a token set which will then + * be used to intersect the indexes token set to get a list of terms + * to lookup in the inverted index + */ + var termTokenSet = lunr.TokenSet.fromClause(clause), + expandedTerms = this.tokenSet.intersect(termTokenSet).toArray() + + /* + * If a term marked as required does not exist in the tokenSet it is + * impossible for the search to return any matches. We set all the field + * scoped required matches set to empty and stop examining any further + * clauses. + */ + if (expandedTerms.length === 0 && clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = lunr.Set.empty + } + + break + } + + for (var j = 0; j < expandedTerms.length; j++) { + /* + * For each term get the posting and termIndex, this is required for + * building the query vector. + */ + var expandedTerm = expandedTerms[j], + posting = this.invertedIndex[expandedTerm], + termIndex = posting._index + + for (var k = 0; k < clause.fields.length; k++) { + /* + * For each field that this query term is scoped by (by default + * all fields are in scope) we need to get all the document refs + * that have this term in that field. + * + * The posting is the entry in the invertedIndex for the matching + * term from above. + */ + var field = clause.fields[k], + fieldPosting = posting[field], + matchingDocumentRefs = Object.keys(fieldPosting), + termField = expandedTerm + "/" + field, + matchingDocumentsSet = new lunr.Set(matchingDocumentRefs) + + /* + * if the presence of this term is required ensure that the matching + * documents are added to the set of required matches for this clause. 
+ * + */ + if (clause.presence == lunr.Query.presence.REQUIRED) { + clauseMatches = clauseMatches.union(matchingDocumentsSet) + + if (requiredMatches[field] === undefined) { + requiredMatches[field] = lunr.Set.complete + } + } + + /* + * if the presence of this term is prohibited ensure that the matching + * documents are added to the set of prohibited matches for this field, + * creating that set if it does not yet exist. + */ + if (clause.presence == lunr.Query.presence.PROHIBITED) { + if (prohibitedMatches[field] === undefined) { + prohibitedMatches[field] = lunr.Set.empty + } + + prohibitedMatches[field] = prohibitedMatches[field].union(matchingDocumentsSet) + + /* + * Prohibited matches should not be part of the query vector used for + * similarity scoring and no metadata should be extracted so we continue + * to the next field + */ + continue + } + + /* + * The query field vector is populated using the termIndex found for + * the term and a unit value with the appropriate boost applied. + * Using upsert because there could already be an entry in the vector + * for the term we are working with. In that case we just add the scores + * together. 
+ */ + queryVectors[field].upsert(termIndex, clause.boost, function (a, b) { return a + b }) + + /** + * If we've already seen this term, field combo then we've already collected + * the matching documents and metadata, no need to go through all that again + */ + if (termFieldCache[termField]) { + continue + } + + for (var l = 0; l < matchingDocumentRefs.length; l++) { + /* + * All metadata for this term/field/document triple + * are then extracted and collected into an instance + * of lunr.MatchData ready to be returned in the query + * results + */ + var matchingDocumentRef = matchingDocumentRefs[l], + matchingFieldRef = new lunr.FieldRef (matchingDocumentRef, field), + metadata = fieldPosting[matchingDocumentRef], + fieldMatch + + if ((fieldMatch = matchingFields[matchingFieldRef]) === undefined) { + matchingFields[matchingFieldRef] = new lunr.MatchData (expandedTerm, field, metadata) + } else { + fieldMatch.add(expandedTerm, field, metadata) + } + + } + + termFieldCache[termField] = true + } + } + } + + /** + * If the presence was required we need to update the requiredMatches field sets. + * We do this after all fields for the term have collected their matches because + * the clause terms presence is required in _any_ of the fields not _all_ of the + * fields. 
+ */ + if (clause.presence === lunr.Query.presence.REQUIRED) { + for (var k = 0; k < clause.fields.length; k++) { + var field = clause.fields[k] + requiredMatches[field] = requiredMatches[field].intersect(clauseMatches) + } + } + } + + /** + * Need to combine the field scoped required and prohibited + * matching documents into a global set of required and prohibited + * matches + */ + var allRequiredMatches = lunr.Set.complete, + allProhibitedMatches = lunr.Set.empty + + for (var i = 0; i < this.fields.length; i++) { + var field = this.fields[i] + + if (requiredMatches[field]) { + allRequiredMatches = allRequiredMatches.intersect(requiredMatches[field]) + } + + if (prohibitedMatches[field]) { + allProhibitedMatches = allProhibitedMatches.union(prohibitedMatches[field]) + } + } + + var matchingFieldRefs = Object.keys(matchingFields), + results = [], + matches = Object.create(null) + + /* + * If the query is negated (contains only prohibited terms) + * we need to get _all_ fieldRefs currently existing in the + * index. This is only done when we know that the query is + * entirely prohibited terms to avoid any cost of getting all + * fieldRefs unnecessarily. + * + * Additionally, blank MatchData must be created to correctly + * populate the results. + */ + if (query.isNegated()) { + matchingFieldRefs = Object.keys(this.fieldVectors) + + for (var i = 0; i < matchingFieldRefs.length; i++) { + var matchingFieldRef = matchingFieldRefs[i] + var fieldRef = lunr.FieldRef.fromString(matchingFieldRef) + matchingFields[matchingFieldRef] = new lunr.MatchData + } + } + + for (var i = 0; i < matchingFieldRefs.length; i++) { + /* + * Currently we have document fields that match the query, but we + * need to return documents. The matchData and scores are combined + * from multiple fields belonging to the same document. + * + * Scores are calculated by field, using the query vectors created + * above, and combined into a final document score using addition. 
+ */ + var fieldRef = lunr.FieldRef.fromString(matchingFieldRefs[i]), + docRef = fieldRef.docRef + + if (!allRequiredMatches.contains(docRef)) { + continue + } + + if (allProhibitedMatches.contains(docRef)) { + continue + } + + var fieldVector = this.fieldVectors[fieldRef], + score = queryVectors[fieldRef.fieldName].similarity(fieldVector), + docMatch + + if ((docMatch = matches[docRef]) !== undefined) { + docMatch.score += score + docMatch.matchData.combine(matchingFields[fieldRef]) + } else { + var match = { + ref: docRef, + score: score, + matchData: matchingFields[fieldRef] + } + matches[docRef] = match + results.push(match) + } + } + + /* + * Sort the results objects by score, highest first. + */ + return results.sort(function (a, b) { + return b.score - a.score + }) +} + +/** + * Prepares the index for JSON serialization. + * + * The schema for this JSON blob will be described in a + * separate JSON schema file. + * + * @returns {Object} + */ +lunr.Index.prototype.toJSON = function () { + var invertedIndex = Object.keys(this.invertedIndex) + .sort() + .map(function (term) { + return [term, this.invertedIndex[term]] + }, this) + + var fieldVectors = Object.keys(this.fieldVectors) + .map(function (ref) { + return [ref, this.fieldVectors[ref].toJSON()] + }, this) + + return { + version: lunr.version, + fields: this.fields, + fieldVectors: fieldVectors, + invertedIndex: invertedIndex, + pipeline: this.pipeline.toJSON() + } +} + +/** + * Loads a previously serialized lunr.Index + * + * @param {Object} serializedIndex - A previously serialized lunr.Index + * @returns {lunr.Index} + */ +lunr.Index.load = function (serializedIndex) { + var attrs = {}, + fieldVectors = {}, + serializedVectors = serializedIndex.fieldVectors, + invertedIndex = Object.create(null), + serializedInvertedIndex = serializedIndex.invertedIndex, + tokenSetBuilder = new lunr.TokenSet.Builder, + pipeline = lunr.Pipeline.load(serializedIndex.pipeline) + + if (serializedIndex.version != 
lunr.version) { + lunr.utils.warn("Version mismatch when loading serialised index. Current version of lunr '" + lunr.version + "' does not match serialized index '" + serializedIndex.version + "'") + } + + for (var i = 0; i < serializedVectors.length; i++) { + var tuple = serializedVectors[i], + ref = tuple[0], + elements = tuple[1] + + fieldVectors[ref] = new lunr.Vector(elements) + } + + for (var i = 0; i < serializedInvertedIndex.length; i++) { + var tuple = serializedInvertedIndex[i], + term = tuple[0], + posting = tuple[1] + + tokenSetBuilder.insert(term) + invertedIndex[term] = posting + } + + tokenSetBuilder.finish() + + attrs.fields = serializedIndex.fields + + attrs.fieldVectors = fieldVectors + attrs.invertedIndex = invertedIndex + attrs.tokenSet = tokenSetBuilder.root + attrs.pipeline = pipeline + + return new lunr.Index(attrs) +} +/*! + * lunr.Builder + * Copyright (C) 2019 Oliver Nightingale + */ + +/** + * lunr.Builder performs indexing on a set of documents and + * returns instances of lunr.Index ready for querying. + * + * All configuration of the index is done via the builder, the + * fields to index, the document reference, the text processing + * pipeline and document scoring parameters are all set on the + * builder before indexing. + * + * @constructor + * @property {string} _ref - Internal reference to the document reference field. + * @property {string[]} _fields - Internal reference to the document fields to index. + * @property {object} invertedIndex - The inverted index maps terms to document fields. + * @property {object} documentTermFrequencies - Keeps track of document term frequencies. + * @property {object} documentLengths - Keeps track of the length of documents added to the index. + * @property {lunr.tokenizer} tokenizer - Function for splitting strings into tokens for indexing. + * @property {lunr.Pipeline} pipeline - The pipeline performs text processing on tokens before indexing. 
+ * @property {lunr.Pipeline} searchPipeline - A pipeline for processing search terms before querying the index. + * @property {number} documentCount - Keeps track of the total number of documents indexed. + * @property {number} _b - A parameter to control field length normalization, setting this to 0 disables normalization, 1 fully normalizes field lengths, the default value is 0.75. + * @property {number} _k1 - A parameter to control how quickly an increase in term frequency results in term frequency saturation, the default value is 1.2. + * @property {number} termIndex - A counter incremented for each unique term, used to identify a term's position in the vector space. + * @property {array} metadataWhitelist - A list of metadata keys that have been whitelisted for entry in the index. + */ +lunr.Builder = function () { + this._ref = "id" + this._fields = Object.create(null) + this._documents = Object.create(null) + this.invertedIndex = Object.create(null) + this.fieldTermFrequencies = {} + this.fieldLengths = {} + this.tokenizer = lunr.tokenizer + this.pipeline = new lunr.Pipeline + this.searchPipeline = new lunr.Pipeline + this.documentCount = 0 + this._b = 0.75 + this._k1 = 1.2 + this.termIndex = 0 + this.metadataWhitelist = [] +} + +/** + * Sets the document field used as the document reference. Every document must have this field. + * The type of this field in the document should be a string, if it is not a string it will be + * coerced into a string by calling toString. + * + * The default ref is 'id'. + * + * The ref should _not_ be changed during indexing, it should be set before any documents are + * added to the index. Changing it during indexing can lead to inconsistent results. + * + * @param {string} ref - The name of the reference field in the document. + */ +lunr.Builder.prototype.ref = function (ref) { + this._ref = ref +} + +/** + * A function that is used to extract a field from a document.
+ * + * Lunr expects a field to be at the top level of a document, if however the field + * is deeply nested within a document an extractor function can be used to extract + * the right field for indexing. + * + * @callback fieldExtractor + * @param {object} doc - The document being added to the index. + * @returns {?(string|object|object[])} obj - The object that will be indexed for this field. + * @example Extracting a nested field + * function (doc) { return doc.nested.field } + */ + +/** + * Adds a field to the list of document fields that will be indexed. Every document being + * indexed should have this field. Null values for this field in indexed documents will + * not cause errors but will limit the chance of that document being retrieved by searches. + * + * All fields should be added before adding documents to the index. Adding fields after + * a document has been indexed will have no effect on already indexed documents. + * + * Fields can be boosted at build time. This allows terms within that field to have more + * importance when ranking search results. Use a field boost to specify that matches within + * one field are more important than other fields. + * + * @param {string} fieldName - The name of a field to index in all documents. + * @param {object} attributes - Optional attributes associated with this field. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this field. + * @param {fieldExtractor} [attributes.extractor] - Function to extract a field from a document. + * @throws {RangeError} fieldName cannot contain unsupported characters '/' + */ +lunr.Builder.prototype.field = function (fieldName, attributes) { + if (/\//.test(fieldName)) { + throw new RangeError ("Field '" + fieldName + "' contains illegal character '/'") + } + + this._fields[fieldName] = attributes || {} +} + +/** + * A parameter to tune the amount of field length normalisation that is applied when + * calculating relevance scores. 
A value of 0 will completely disable any normalisation + * and a value of 1 will fully normalise field lengths. The default is 0.75. Values of b + * will be clamped to the range 0 - 1. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.b = function (number) { + if (number < 0) { + this._b = 0 + } else if (number > 1) { + this._b = 1 + } else { + this._b = number + } +} + +/** + * A parameter that controls the speed at which a rise in term frequency results in term + * frequency saturation. The default value is 1.2. Setting this to a higher value will give + * slower saturation levels, a lower value will result in quicker saturation. + * + * @param {number} number - The value to set for this tuning parameter. + */ +lunr.Builder.prototype.k1 = function (number) { + this._k1 = number +} + +/** + * Adds a document to the index. + * + * Before adding fields to the index the index should have been fully setup, with the document + * ref and all fields to index already having been specified. + * + * The document must have a field name as specified by the ref (by default this is 'id') and + * it should have all fields defined for indexing, though null or undefined values will not + * cause errors. + * + * Entire documents can be boosted at build time. Applying a boost to a document indicates that + * this document should rank higher in search results than other documents. + * + * @param {object} doc - The document to add to the index. + * @param {object} attributes - Optional attributes associated with this document. + * @param {number} [attributes.boost=1] - Boost applied to all terms within this document. 
+ */ +lunr.Builder.prototype.add = function (doc, attributes) { + var docRef = doc[this._ref], + fields = Object.keys(this._fields) + + this._documents[docRef] = attributes || {} + this.documentCount += 1 + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i], + extractor = this._fields[fieldName].extractor, + field = extractor ? extractor(doc) : doc[fieldName], + tokens = this.tokenizer(field, { + fields: [fieldName] + }), + terms = this.pipeline.run(tokens), + fieldRef = new lunr.FieldRef (docRef, fieldName), + fieldTerms = Object.create(null) + + this.fieldTermFrequencies[fieldRef] = fieldTerms + this.fieldLengths[fieldRef] = 0 + + // store the length of this field for this document + this.fieldLengths[fieldRef] += terms.length + + // calculate term frequencies for this field + for (var j = 0; j < terms.length; j++) { + var term = terms[j] + + if (fieldTerms[term] == undefined) { + fieldTerms[term] = 0 + } + + fieldTerms[term] += 1 + + // add to inverted index + // create an initial posting if one doesn't exist + if (this.invertedIndex[term] == undefined) { + var posting = Object.create(null) + posting["_index"] = this.termIndex + this.termIndex += 1 + + for (var k = 0; k < fields.length; k++) { + posting[fields[k]] = Object.create(null) + } + + this.invertedIndex[term] = posting + } + + // add an entry for this term/fieldName/docRef to the invertedIndex + if (this.invertedIndex[term][fieldName][docRef] == undefined) { + this.invertedIndex[term][fieldName][docRef] = Object.create(null) + } + + // store all whitelisted metadata about this token in the + // inverted index + for (var l = 0; l < this.metadataWhitelist.length; l++) { + var metadataKey = this.metadataWhitelist[l], + metadata = term.metadata[metadataKey] + + if (this.invertedIndex[term][fieldName][docRef][metadataKey] == undefined) { + this.invertedIndex[term][fieldName][docRef][metadataKey] = [] + } + + this.invertedIndex[term][fieldName][docRef][metadataKey].push(metadata) + } + } 
+ + } +} + +/** + * Calculates the average document length for this index + * + * @private + */ +lunr.Builder.prototype.calculateAverageFieldLengths = function () { + + var fieldRefs = Object.keys(this.fieldLengths), + numberOfFields = fieldRefs.length, + accumulator = {}, + documentsWithField = {} + + for (var i = 0; i < numberOfFields; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + field = fieldRef.fieldName + + documentsWithField[field] || (documentsWithField[field] = 0) + documentsWithField[field] += 1 + + accumulator[field] || (accumulator[field] = 0) + accumulator[field] += this.fieldLengths[fieldRef] + } + + var fields = Object.keys(this._fields) + + for (var i = 0; i < fields.length; i++) { + var fieldName = fields[i] + accumulator[fieldName] = accumulator[fieldName] / documentsWithField[fieldName] + } + + this.averageFieldLength = accumulator +} + +/** + * Builds a vector space model of every document using lunr.Vector + * + * @private + */ +lunr.Builder.prototype.createFieldVectors = function () { + var fieldVectors = {}, + fieldRefs = Object.keys(this.fieldTermFrequencies), + fieldRefsLength = fieldRefs.length, + termIdfCache = Object.create(null) + + for (var i = 0; i < fieldRefsLength; i++) { + var fieldRef = lunr.FieldRef.fromString(fieldRefs[i]), + fieldName = fieldRef.fieldName, + fieldLength = this.fieldLengths[fieldRef], + fieldVector = new lunr.Vector, + termFrequencies = this.fieldTermFrequencies[fieldRef], + terms = Object.keys(termFrequencies), + termsLength = terms.length + + + var fieldBoost = this._fields[fieldName].boost || 1, + docBoost = this._documents[fieldRef.docRef].boost || 1 + + for (var j = 0; j < termsLength; j++) { + var term = terms[j], + tf = termFrequencies[term], + termIndex = this.invertedIndex[term]._index, + idf, score, scoreWithPrecision + + if (termIdfCache[term] === undefined) { + idf = lunr.idf(this.invertedIndex[term], this.documentCount) + termIdfCache[term] = idf + } else { + idf = 
termIdfCache[term] + } + + score = idf * ((this._k1 + 1) * tf) / (this._k1 * (1 - this._b + this._b * (fieldLength / this.averageFieldLength[fieldName])) + tf) + score *= fieldBoost + score *= docBoost + scoreWithPrecision = Math.round(score * 1000) / 1000 + // Converts 1.23456789 to 1.234. + // Reducing the precision so that the vectors take up less + // space when serialised. Doing it now so that they behave + // the same before and after serialisation. Also, this is + // the fastest approach to reducing a number's precision in + // JavaScript. + + fieldVector.insert(termIndex, scoreWithPrecision) + } + + fieldVectors[fieldRef] = fieldVector + } + + this.fieldVectors = fieldVectors +} + +/** + * Creates a token set of all tokens in the index using lunr.TokenSet + * + * @private + */ +lunr.Builder.prototype.createTokenSet = function () { + this.tokenSet = lunr.TokenSet.fromArray( + Object.keys(this.invertedIndex).sort() + ) +} + +/** + * Builds the index, creating an instance of lunr.Index. + * + * This completes the indexing process and should only be called + * once all documents have been added to the index. + * + * @returns {lunr.Index} + */ +lunr.Builder.prototype.build = function () { + this.calculateAverageFieldLengths() + this.createFieldVectors() + this.createTokenSet() + + return new lunr.Index({ + invertedIndex: this.invertedIndex, + fieldVectors: this.fieldVectors, + tokenSet: this.tokenSet, + fields: Object.keys(this._fields), + pipeline: this.searchPipeline + }) +} + +/** + * Applies a plugin to the index builder. + * + * A plugin is a function that is called with the index builder as its context. + * Plugins can be used to customise or extend the behaviour of the index + * in some way. A plugin is just a function that encapsulates the custom + * behaviour that should be applied when building the index. + * + * The plugin function will be called with the index builder as its argument, additional + * arguments can also be passed when calling use.
The function will be called + * with the index builder as its context. + * + * @param {Function} plugin The plugin to apply. + */ +lunr.Builder.prototype.use = function (fn) { + var args = Array.prototype.slice.call(arguments, 1) + args.unshift(this) + fn.apply(this, args) +} +/** + * Contains and collects metadata about a matching document. + * A single instance of lunr.MatchData is returned as part of every + * lunr.Index~Result. + * + * @constructor + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + * @property {object} metadata - A cloned collection of metadata associated with this document. + * @see {@link lunr.Index~Result} + */ +lunr.MatchData = function (term, field, metadata) { + var clonedMetadata = Object.create(null), + metadataKeys = Object.keys(metadata || {}) + + // Cloning the metadata to prevent the original + // being mutated during match data combination. + // Metadata is kept in an array within the inverted + // index so cloning the data can be done with + // Array#slice + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + clonedMetadata[key] = metadata[key].slice() + } + + this.metadata = Object.create(null) + + if (term !== undefined) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = clonedMetadata + } +} + +/** + * An instance of lunr.MatchData will be created for every term that matches a + * document. However only one instance is required in a lunr.Index~Result. This + * method combines metadata from another instance of lunr.MatchData with this + * objects metadata. + * + * @param {lunr.MatchData} otherMatchData - Another instance of match data to merge with this one. 
+ * @see {@link lunr.Index~Result} + */ +lunr.MatchData.prototype.combine = function (otherMatchData) { + var terms = Object.keys(otherMatchData.metadata) + + for (var i = 0; i < terms.length; i++) { + var term = terms[i], + fields = Object.keys(otherMatchData.metadata[term]) + + if (this.metadata[term] == undefined) { + this.metadata[term] = Object.create(null) + } + + for (var j = 0; j < fields.length; j++) { + var field = fields[j], + keys = Object.keys(otherMatchData.metadata[term][field]) + + if (this.metadata[term][field] == undefined) { + this.metadata[term][field] = Object.create(null) + } + + for (var k = 0; k < keys.length; k++) { + var key = keys[k] + + if (this.metadata[term][field][key] == undefined) { + this.metadata[term][field][key] = otherMatchData.metadata[term][field][key] + } else { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(otherMatchData.metadata[term][field][key]) + } + + } + } + } +} + +/** + * Add metadata for a term/field pair to this instance of match data. 
+ * + * @param {string} term - The term this match data is associated with + * @param {string} field - The field in which the term was found + * @param {object} metadata - The metadata recorded about this term in this field + */ +lunr.MatchData.prototype.add = function (term, field, metadata) { + if (!(term in this.metadata)) { + this.metadata[term] = Object.create(null) + this.metadata[term][field] = metadata + return + } + + if (!(field in this.metadata[term])) { + this.metadata[term][field] = metadata + return + } + + var metadataKeys = Object.keys(metadata) + + for (var i = 0; i < metadataKeys.length; i++) { + var key = metadataKeys[i] + + if (key in this.metadata[term][field]) { + this.metadata[term][field][key] = this.metadata[term][field][key].concat(metadata[key]) + } else { + this.metadata[term][field][key] = metadata[key] + } + } +} +/** + * A lunr.Query provides a programmatic way of defining queries to be performed + * against a {@link lunr.Index}. + * + * Prefer constructing a lunr.Query using the {@link lunr.Index#query} method + * so the query object is pre-initialized with the right index fields. + * + * @constructor + * @property {lunr.Query~Clause[]} clauses - An array of query clauses. + * @property {string[]} allFields - An array of all available fields in a lunr.Index. + */ +lunr.Query = function (allFields) { + this.clauses = [] + this.allFields = allFields +} + +/** + * Constants for indicating what kind of automatic wildcard insertion will be used when constructing a query clause. + * + * This allows wildcards to be added to the beginning and end of a term without having to manually do any string + * concatenation. + * + * The wildcard constants can be bitwise combined to select both leading and trailing wildcards. 
+ * + * @constant + * @default + * @property {number} wildcard.NONE - The term will have no wildcards inserted, this is the default behaviour + * @property {number} wildcard.LEADING - Prepend the term with a wildcard, unless a leading wildcard already exists + * @property {number} wildcard.TRAILING - Append a wildcard to the term, unless a trailing wildcard already exists + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with trailing wildcard + * query.term('foo', { wildcard: lunr.Query.wildcard.TRAILING }) + * @example query term with leading and trailing wildcard + * query.term('foo', { + * wildcard: lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING + * }) + */ + +lunr.Query.wildcard = new String ("*") +lunr.Query.wildcard.NONE = 0 +lunr.Query.wildcard.LEADING = 1 +lunr.Query.wildcard.TRAILING = 2 + +/** + * Constants for indicating what kind of presence a term must have in matching documents. + * + * @constant + * @enum {number} + * @see lunr.Query~Clause + * @see lunr.Query#clause + * @see lunr.Query#term + * @example query term with required presence + * query.term('foo', { presence: lunr.Query.presence.REQUIRED }) + */ +lunr.Query.presence = { + /** + * Term's presence in a document is optional, this is the default value. + */ + OPTIONAL: 1, + + /** + * Term's presence in a document is required, documents that do not contain + * this term will not be returned. + */ + REQUIRED: 2, + + /** + * Term's presence in a document is prohibited, documents that do contain + * this term will not be returned. + */ + PROHIBITED: 3 +} + +/** + * A single clause in a {@link lunr.Query} contains a term and details on how to + * match that term against a {@link lunr.Index}. + * + * @typedef {Object} lunr.Query~Clause + * @property {string[]} fields - The fields in an index this clause should be matched against. + * @property {number} [boost=1] - Any boost that should be applied when matching this clause. 
+ * @property {number} [editDistance] - Whether the term should have fuzzy matching applied, and how fuzzy the match should be. + * @property {boolean} [usePipeline] - Whether the term should be passed through the search pipeline. + * @property {number} [wildcard=lunr.Query.wildcard.NONE] - Whether the term should have wildcards appended or prepended. + * @property {number} [presence=lunr.Query.presence.OPTIONAL] - The terms presence in any matching documents. + */ + +/** + * Adds a {@link lunr.Query~Clause} to this query. + * + * Unless the clause contains the fields to be matched all fields will be matched. In addition + * a default boost of 1 is applied to the clause. + * + * @param {lunr.Query~Clause} clause - The clause to add to this query. + * @see lunr.Query~Clause + * @returns {lunr.Query} + */ +lunr.Query.prototype.clause = function (clause) { + if (!('fields' in clause)) { + clause.fields = this.allFields + } + + if (!('boost' in clause)) { + clause.boost = 1 + } + + if (!('usePipeline' in clause)) { + clause.usePipeline = true + } + + if (!('wildcard' in clause)) { + clause.wildcard = lunr.Query.wildcard.NONE + } + + if ((clause.wildcard & lunr.Query.wildcard.LEADING) && (clause.term.charAt(0) != lunr.Query.wildcard)) { + clause.term = "*" + clause.term + } + + if ((clause.wildcard & lunr.Query.wildcard.TRAILING) && (clause.term.slice(-1) != lunr.Query.wildcard)) { + clause.term = "" + clause.term + "*" + } + + if (!('presence' in clause)) { + clause.presence = lunr.Query.presence.OPTIONAL + } + + this.clauses.push(clause) + + return this +} + +/** + * A negated query is one in which every clause has a presence of + * prohibited. These queries require some special processing to return + * the expected results. 
+ * + * @returns boolean + */ +lunr.Query.prototype.isNegated = function () { + for (var i = 0; i < this.clauses.length; i++) { + if (this.clauses[i].presence != lunr.Query.presence.PROHIBITED) { + return false + } + } + + return true +} + +/** + * Adds a term to the current query, under the covers this will create a {@link lunr.Query~Clause} + * to the list of clauses that make up this query. + * + * The term is used as is, i.e. no tokenization will be performed by this method. Instead conversion + * to a token or token-like string should be done before calling this method. + * + * The term will be converted to a string by calling `toString`. Multiple terms can be passed as an + * array, each term in the array will share the same options. + * + * @param {object|object[]} term - The term(s) to add to the query. + * @param {object} [options] - Any additional properties to add to the query clause. + * @returns {lunr.Query} + * @see lunr.Query#clause + * @see lunr.Query~Clause + * @example adding a single term to a query + * query.term("foo") + * @example adding a single term to a query and specifying search fields, term boost and automatic trailing wildcard + * query.term("foo", { + * fields: ["title"], + * boost: 10, + * wildcard: lunr.Query.wildcard.TRAILING + * }) + * @example using lunr.tokenizer to convert a string to tokens before using them as terms + * query.term(lunr.tokenizer("foo bar")) + */ +lunr.Query.prototype.term = function (term, options) { + if (Array.isArray(term)) { + term.forEach(function (t) { this.term(t, lunr.utils.clone(options)) }, this) + return this + } + + var clause = options || {} + clause.term = term.toString() + + this.clause(clause) + + return this +} +lunr.QueryParseError = function (message, start, end) { + this.name = "QueryParseError" + this.message = message + this.start = start + this.end = end +} + +lunr.QueryParseError.prototype = new Error +lunr.QueryLexer = function (str) { + this.lexemes = [] + this.str = str + this.length 
= str.length + this.pos = 0 + this.start = 0 + this.escapeCharPositions = [] +} + +lunr.QueryLexer.prototype.run = function () { + var state = lunr.QueryLexer.lexText + + while (state) { + state = state(this) + } +} + +lunr.QueryLexer.prototype.sliceString = function () { + var subSlices = [], + sliceStart = this.start, + sliceEnd = this.pos + + for (var i = 0; i < this.escapeCharPositions.length; i++) { + sliceEnd = this.escapeCharPositions[i] + subSlices.push(this.str.slice(sliceStart, sliceEnd)) + sliceStart = sliceEnd + 1 + } + + subSlices.push(this.str.slice(sliceStart, this.pos)) + this.escapeCharPositions.length = 0 + + return subSlices.join('') +} + +lunr.QueryLexer.prototype.emit = function (type) { + this.lexemes.push({ + type: type, + str: this.sliceString(), + start: this.start, + end: this.pos + }) + + this.start = this.pos +} + +lunr.QueryLexer.prototype.escapeCharacter = function () { + this.escapeCharPositions.push(this.pos - 1) + this.pos += 1 +} + +lunr.QueryLexer.prototype.next = function () { + if (this.pos >= this.length) { + return lunr.QueryLexer.EOS + } + + var char = this.str.charAt(this.pos) + this.pos += 1 + return char +} + +lunr.QueryLexer.prototype.width = function () { + return this.pos - this.start +} + +lunr.QueryLexer.prototype.ignore = function () { + if (this.start == this.pos) { + this.pos += 1 + } + + this.start = this.pos +} + +lunr.QueryLexer.prototype.backup = function () { + this.pos -= 1 +} + +lunr.QueryLexer.prototype.acceptDigitRun = function () { + var char, charCode + + do { + char = this.next() + charCode = char.charCodeAt(0) + } while (charCode > 47 && charCode < 58) + + if (char != lunr.QueryLexer.EOS) { + this.backup() + } +} + +lunr.QueryLexer.prototype.more = function () { + return this.pos < this.length +} + +lunr.QueryLexer.EOS = 'EOS' +lunr.QueryLexer.FIELD = 'FIELD' +lunr.QueryLexer.TERM = 'TERM' +lunr.QueryLexer.EDIT_DISTANCE = 'EDIT_DISTANCE' +lunr.QueryLexer.BOOST = 'BOOST' +lunr.QueryLexer.PRESENCE = 
'PRESENCE' + +lunr.QueryLexer.lexField = function (lexer) { + lexer.backup() + lexer.emit(lunr.QueryLexer.FIELD) + lexer.ignore() + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexTerm = function (lexer) { + if (lexer.width() > 1) { + lexer.backup() + lexer.emit(lunr.QueryLexer.TERM) + } + + lexer.ignore() + + if (lexer.more()) { + return lunr.QueryLexer.lexText + } +} + +lunr.QueryLexer.lexEditDistance = function (lexer) { + lexer.ignore() + lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.EDIT_DISTANCE) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexBoost = function (lexer) { + lexer.ignore() + lexer.acceptDigitRun() + lexer.emit(lunr.QueryLexer.BOOST) + return lunr.QueryLexer.lexText +} + +lunr.QueryLexer.lexEOS = function (lexer) { + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } +} + +// This matches the separator used when tokenising fields +// within a document. These should match otherwise it is +// not possible to search for some tokens within a document. +// +// It is possible for the user to change the separator on the +// tokenizer so it _might_ clash with any other of the special +// characters already used within the search string, e.g. :. +// +// This means that it is possible to change the separator in +// such a way that makes some words unsearchable using a search +// string. 
+lunr.QueryLexer.termSeparator = lunr.tokenizer.separator + +lunr.QueryLexer.lexText = function (lexer) { + while (true) { + var char = lexer.next() + + if (char == lunr.QueryLexer.EOS) { + return lunr.QueryLexer.lexEOS + } + + // Escape character is '\' + if (char.charCodeAt(0) == 92) { + lexer.escapeCharacter() + continue + } + + if (char == ":") { + return lunr.QueryLexer.lexField + } + + if (char == "~") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexEditDistance + } + + if (char == "^") { + lexer.backup() + if (lexer.width() > 0) { + lexer.emit(lunr.QueryLexer.TERM) + } + return lunr.QueryLexer.lexBoost + } + + // "+" indicates term presence is required + // checking for length to ensure that only + // leading "+" are considered + if (char == "+" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + // "-" indicates term presence is prohibited + // checking for length to ensure that only + // leading "-" are considered + if (char == "-" && lexer.width() === 1) { + lexer.emit(lunr.QueryLexer.PRESENCE) + return lunr.QueryLexer.lexText + } + + if (char.match(lunr.QueryLexer.termSeparator)) { + return lunr.QueryLexer.lexTerm + } + } +} + +lunr.QueryParser = function (str, query) { + this.lexer = new lunr.QueryLexer (str) + this.query = query + this.currentClause = {} + this.lexemeIdx = 0 +} + +lunr.QueryParser.prototype.parse = function () { + this.lexer.run() + this.lexemes = this.lexer.lexemes + + var state = lunr.QueryParser.parseClause + + while (state) { + state = state(this) + } + + return this.query +} + +lunr.QueryParser.prototype.peekLexeme = function () { + return this.lexemes[this.lexemeIdx] +} + +lunr.QueryParser.prototype.consumeLexeme = function () { + var lexeme = this.peekLexeme() + this.lexemeIdx += 1 + return lexeme +} + +lunr.QueryParser.prototype.nextClause = function () { + var completedClause = this.currentClause + 
this.query.clause(completedClause) + this.currentClause = {} +} + +lunr.QueryParser.parseClause = function (parser) { + var lexeme = parser.peekLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.type) { + case lunr.QueryLexer.PRESENCE: + return lunr.QueryParser.parsePresence + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expected either a field or a term, found " + lexeme.type + + if (lexeme.str.length >= 1) { + errorMessage += " with value '" + lexeme.str + "'" + } + + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } +} + +lunr.QueryParser.parsePresence = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + switch (lexeme.str) { + case "-": + parser.currentClause.presence = lunr.Query.presence.PROHIBITED + break + case "+": + parser.currentClause.presence = lunr.Query.presence.REQUIRED + break + default: + var errorMessage = "unrecognised presence operator '" + lexeme.str + "'" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term or field, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.FIELD: + return lunr.QueryParser.parseField + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term or field, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseField = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + if (parser.query.allFields.indexOf(lexeme.str) == -1) { + var possibleFields =
parser.query.allFields.map(function (f) { return "'" + f + "'" }).join(', '), + errorMessage = "unrecognised field '" + lexeme.str + "', possible fields: " + possibleFields + + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.fields = [lexeme.str] + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + var errorMessage = "expecting term, found nothing" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + return lunr.QueryParser.parseTerm + default: + var errorMessage = "expecting term, found '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseTerm = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + parser.currentClause.term = lexeme.str.toLowerCase() + + if (lexeme.str.indexOf("*") != -1) { + parser.currentClause.usePipeline = false + } + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseEditDistance = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var editDistance = parseInt(lexeme.str, 10) + + if 
(isNaN(editDistance)) { + var errorMessage = "edit distance must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.editDistance = editDistance + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, nextLexeme.start, nextLexeme.end) + } +} + +lunr.QueryParser.parseBoost = function (parser) { + var lexeme = parser.consumeLexeme() + + if (lexeme == undefined) { + return + } + + var boost = parseInt(lexeme.str, 10) + + if (isNaN(boost)) { + var errorMessage = "boost must be numeric" + throw new lunr.QueryParseError (errorMessage, lexeme.start, lexeme.end) + } + + parser.currentClause.boost = boost + + var nextLexeme = parser.peekLexeme() + + if (nextLexeme == undefined) { + parser.nextClause() + return + } + + switch (nextLexeme.type) { + case lunr.QueryLexer.TERM: + parser.nextClause() + return lunr.QueryParser.parseTerm + case lunr.QueryLexer.FIELD: + parser.nextClause() + return lunr.QueryParser.parseField + case lunr.QueryLexer.EDIT_DISTANCE: + return lunr.QueryParser.parseEditDistance + case lunr.QueryLexer.BOOST: + return lunr.QueryParser.parseBoost + case lunr.QueryLexer.PRESENCE: + parser.nextClause() + return lunr.QueryParser.parsePresence + default: + var errorMessage = "Unexpected lexeme type '" + nextLexeme.type + "'" + throw new lunr.QueryParseError (errorMessage, 
nextLexeme.start, nextLexeme.end) + } +} + + /** + * export the module via AMD, CommonJS or as a browser global + * Export code from https://github.com/umdjs/umd/blob/master/returnExports.js + */ + ;(function (root, factory) { + if (typeof define === 'function' && define.amd) { + // AMD. Register as an anonymous module. + define(factory) + } else if (typeof exports === 'object') { + /** + * Node. Does not work with strict CommonJS, but + * only CommonJS-like environments that support module.exports, + * like Node. + */ + module.exports = factory() + } else { + // Browser globals (root is window) + root.lunr = factory() + } + }(this, function () { + /** + * Just return a value to define the module export. + * This example returns an object, but the module + * can return a function as the exported value. + */ + return lunr + })) +})(); diff --git a/docs/ui/ui-lunr/js/vendor/search.js b/docs/ui/ui-lunr/js/vendor/search.js new file mode 100644 index 0000000000..fcf4046150 --- /dev/null +++ b/docs/ui/ui-lunr/js/vendor/search.js @@ -0,0 +1,212 @@ +/* eslint-env browser */ +window.antoraLunr = (function (lunr) { + var searchInput = document.getElementById('search-input') + var searchResult = document.createElement('div') + searchResult.classList.add('search-result-dropdown-menu') + searchInput.parentNode.appendChild(searchResult) + + function highlightText (doc, position) { + var hits = [] + var start = position[0] + var length = position[1] + + var text = doc.text + var highlightSpan = document.createElement('span') + highlightSpan.classList.add('search-result-highlight') + highlightSpan.innerText = text.substr(start, length) + + var end = start + length + var textEnd = text.length - 1 + var contextOffset = 15 + var contextAfter = end + contextOffset > textEnd ? textEnd : end + contextOffset + var contextBefore = start - contextOffset < 0 ?
0 : start - contextOffset + if (start === 0 && end === textEnd) { + hits.push(highlightSpan) + } else if (start === 0) { + hits.push(highlightSpan) + hits.push(document.createTextNode(text.substr(end, contextAfter - end))) + } else if (end === textEnd) { + hits.push(document.createTextNode(text.substr(0, start))) + hits.push(highlightSpan) + } else { + hits.push(document.createTextNode('...' + text.substr(contextBefore, start - contextBefore))) + hits.push(highlightSpan) + hits.push(document.createTextNode(text.substr(end, contextAfter - end) + '...')) + } + return hits + } + + function highlightTitle (hash, doc, position) { + var hits = [] + var start = position[0] + var length = position[1] + + var highlightSpan = document.createElement('span') + highlightSpan.classList.add('search-result-highlight') + var title + if (hash) { + title = doc.titles.filter(function (item) { + return item.id === hash + })[0].text + } else { + title = doc.title + } + highlightSpan.innerText = title.substr(start, length) + + var end = start + length + var titleEnd = title.length - 1 + if (start === 0 && end === titleEnd) { + hits.push(highlightSpan) + } else if (start === 0) { + hits.push(highlightSpan) + hits.push(document.createTextNode(title.substr(length, titleEnd))) + } else if (end === titleEnd) { + hits.push(document.createTextNode(title.substr(0, start))) + hits.push(highlightSpan) + } else { + hits.push(document.createTextNode(title.substr(0, start))) + hits.push(highlightSpan) + hits.push(document.createTextNode(title.substr(end, titleEnd))) + } + return hits + } + + function highlightHit (metadata, hash, doc) { + var hits = [] + for (var token in metadata) { + var fields = metadata[token] + for (var field in fields) { + var positions = fields[field] + if (positions.position) { + var position = positions.position[0] // only highlight the first match + if (field === 'title') { + hits = highlightTitle(hash, doc, position) + } else if (field === 'text') { + hits = highlightText(doc,
position) + } + } + } + } + return hits + } + + function createSearchResult(result, store, searchResultDataset) { + result.forEach(function (item) { + var url = item.ref + var hash + if (url.includes('#')) { + hash = url.substring(url.indexOf('#') + 1) + url = url.replace('#' + hash, '') + } + var doc = store[url] + var metadata = item.matchData.metadata + var hits = highlightHit(metadata, hash, doc) + searchResultDataset.appendChild(createSearchResultItem(doc, item, hits)) + }) + } + + function createSearchResultItem (doc, item, hits) { + var documentTitle = document.createElement('div') + documentTitle.classList.add('search-result-document-title') + documentTitle.innerText = doc.title + var documentHit = document.createElement('div') + documentHit.classList.add('search-result-document-hit') + var documentHitLink = document.createElement('a') + var rootPath = window.antora.basePath + documentHitLink.href = rootPath + item.ref + documentHit.appendChild(documentHitLink) + hits.forEach(function (hit) { + documentHitLink.appendChild(hit) + }) + var searchResultItem = document.createElement('div') + searchResultItem.classList.add('search-result-item') + searchResultItem.appendChild(documentTitle) + searchResultItem.appendChild(documentHit) + searchResultItem.addEventListener('mousedown', function (e) { + e.preventDefault() + }) + return searchResultItem + } + + function createNoResult (text) { + var searchResultItem = document.createElement('div') + searchResultItem.classList.add('search-result-item') + var documentHit = document.createElement('div') + documentHit.classList.add('search-result-document-hit') + var message = document.createElement('strong') + message.innerText = 'No results found for query "' + text + '"' + documentHit.appendChild(message) + searchResultItem.appendChild(documentHit) + return searchResultItem + } + + function search (index, text) { + // execute an exact match search + var result = index.search(text) + if (result.length > 0) { + return 
result + } + // no result, use a begins with search + result = index.search(text + '*') + if (result.length > 0) { + return result + } + // no result, use a contains search + result = index.search('*' + text + '*') + return result + } + + function searchIndex (index, store, text) { + // reset search result + while (searchResult.firstChild) { + searchResult.removeChild(searchResult.firstChild) + } + if (text.trim() === '') { + return + } + var result = search(index, text) + var searchResultDataset = document.createElement('div') + searchResultDataset.classList.add('search-result-dataset') + searchResult.appendChild(searchResultDataset) + if (result.length > 0) { + createSearchResult(result, store, searchResultDataset) + } else { + searchResultDataset.appendChild(createNoResult(text)) + } + } + + function debounce (func, wait, immediate) { + var timeout + return function () { + var context = this + var args = arguments + var later = function () { + timeout = null + if (!immediate) func.apply(context, args) + } + var callNow = immediate && !timeout + clearTimeout(timeout) + timeout = setTimeout(later, wait) + if (callNow) func.apply(context, args) + } + } + + function init (data) { + var index = Object.assign({index: lunr.Index.load(data.index), store: data.store}) + var search = debounce(function () { + searchIndex(index.index, index.store, searchInput.value) + }, 100) + searchInput.addEventListener('keydown', search) + + // this is prevented in case of mousedown attached to SearchResultItem + searchInput.addEventListener('blur', function (e) { + while (searchResult.firstChild) { + searchResult.removeChild(searchResult.firstChild) + } + }) + } + + return { + init: init, + } +})(window.lunr) diff --git a/docs/ui/ui-lunr/partials/footer-scripts.hbs b/docs/ui/ui-lunr/partials/footer-scripts.hbs new file mode 100644 index 0000000000..7d4519ea08 --- /dev/null +++ b/docs/ui/ui-lunr/partials/footer-scripts.hbs @@ -0,0 +1,12 @@ + + +{{#if (eq env.DOCSEARCH_ENGINE 'lunr')}} + 
+ + +{{/if}} + diff --git a/docs/ui/ui-lunr/partials/head-meta.hbs b/docs/ui/ui-lunr/partials/head-meta.hbs new file mode 100644 index 0000000000..c9883c2baf --- /dev/null +++ b/docs/ui/ui-lunr/partials/head-meta.hbs @@ -0,0 +1 @@ +
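For reference, the per-term weight that the vendored lunr.js computes in `lunr.Builder.prototype.createFieldVectors` is a BM25-style score followed by a precision reduction. A minimal standalone sketch of that formula follows; the parameter names mirror the builder's `_k1`/`_b` defaults (1.2 and 0.75), and the tf/idf/length inputs are illustrative values, not taken from a real index.

```javascript
// Standalone sketch of the BM25-style term score used by
// lunr.Builder.prototype.createFieldVectors in the vendored lunr.js.
function fieldTermScore (tf, idf, fieldLength, averageFieldLength, k1, b) {
  // Term frequency saturates via k1; field-length normalisation via b.
  var score = idf * ((k1 + 1) * tf) /
    (k1 * (1 - b + b * (fieldLength / averageFieldLength)) + tf)
  // Same precision reduction as the builder: three decimal places,
  // so vectors serialise compactly and behave identically before
  // and after serialisation.
  return Math.round(score * 1000) / 1000
}

console.log(fieldTermScore(2, 1.5, 4, 5, 1.2, 0.75)) // → 2.185
```

A field exactly at the average length with tf = 1 and idf = 1 scores 1.0, which makes the normalisation easy to sanity-check by hand.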