Merge pull request #4222 from IQSS/3672-timer-problem
installer and guide fixes for the timer problem (3672)
kcondon authored Oct 23, 2017
2 parents 724b243 + 37c755e commit 5efe63b
Showing 9 changed files with 42 additions and 17 deletions.
2 changes: 1 addition & 1 deletion doc/sphinx-guides/source/_static/util/createsequence.sql
@@ -19,7 +19,7 @@ CACHE 1;

ALTER TABLE datasetidentifier_seq OWNER TO "dvnapp";

-- And now create a PostgresQL FUNCTION, for JPA to
-- And now create a PostgreSQL FUNCTION, for JPA to
-- access as a NamedStoredProcedure:

CREATE OR REPLACE FUNCTION generateIdentifierAsSequentialNumber(
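
For context, a hedged sketch of how a script like this is typically applied is shown below: run it against the Dataverse database as a PostgreSQL superuser. The database name ``dvndb`` and the use of the ``postgres`` system account are assumptions for illustration, not part of this commit::

   # Assumes the Dataverse database is named "dvndb"; the script creates the
   # datasetidentifier_seq sequence and the generateIdentifierAsSequentialNumber()
   # function shown above.
   sudo -u postgres psql -d dvndb -f doc/sphinx-guides/source/_static/util/createsequence.sql
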
10 changes: 6 additions & 4 deletions doc/sphinx-guides/source/admin/timers.rst
@@ -24,21 +24,23 @@ The following JVM option instructs the application to act as the dedicated timer

**IMPORTANT:** Note that this option is automatically set by the Dataverse installer script. That means that when **configuring a multi-server cluster**, it will be the responsibility of the installer to remove the option from the :fixedwidthplain:`domain.xml` of every node except the one intended to be the timer server. We also recommend that the following entry in the :fixedwidthplain:`domain.xml`: ``<ejb-timer-service timer-datasource="jdbc/VDCNetDS">`` is changed back to ``<ejb-timer-service>`` on all the non-timer server nodes. Similarly, this option is automatically set by the installer script. Changing it back to the default setting on a server that doesn't need to run the timer will prevent a potential race condition, where multiple servers try to get a lock on the timer database.

**Note** that for the timer to work, the version of the PostgreSQL JDBC driver your instance is using must match the version of your PostgreSQL database. See the 'Timer not working' section of the :doc:`/admin/troubleshooting` guide.
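
A minimal sketch of checking and reverting this on a non-timer node is shown below; the :fixedwidthplain:`domain.xml` path assumes the default :fixedwidthplain:`domain1` layout and is not taken from this commit::

   # Path is an assumption based on a default Glassfish install; adjust as needed.
   DOMAIN_XML="<GLASSFISH FOLDER>/glassfish/domains/domain1/config/domain.xml"

   # See whether this node is still pointed at the timer datasource:
   grep "ejb-timer-service" "$DOMAIN_XML"

   # On every node EXCEPT the dedicated timer server, change the entry back to the
   # default (best done while Glassfish is stopped on that node); the timer-server
   # JVM option mentioned above should be removed from the same file as well:
   sed -i 's|<ejb-timer-service timer-datasource="jdbc/VDCNetDS">|<ejb-timer-service>|' "$DOMAIN_XML"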

Harvesting Timers
-----------------

These timers are created when scheduled harvesting is enabled by a local admin user (via the "Manage Harvesting Clients" page).

In a multi-node cluster, all these timers will be created on the dedicated timer node (and not necessarily on the node where the harvesting clients was created and/or saved).
In a multi-node cluster, all these timers will be created on the dedicated timer node (and not necessarily on the node where the harvesting clients were created and/or saved).

A timer will be automatically removed, when a harvesting client with an active schedule is deleted, or if the schedule is turned off for an existing client.
A timer will be automatically removed when a harvesting client with an active schedule is deleted, or if the schedule is turned off for an existing client.

Metadata Export Timer
---------------------

This timer is created automatically whenever the application is deployed or restarted. There is no admin user-accessible configuration for this timer.

This timer runs a daily job that tries to export all the local, published datasets that haven't been exported yet, in all the supported metdata formats, and cache the results on the filesystem. (Note that, normally, an export will happen automatically whenever a dataset is published. So this scheduled job is there to catch any datasets for which that export did not succeed, for one reason or another). Also, since this functionality has been added in version 4.5: if you are upgrading from a previous version, none of your datasets are exported yet. So the first time this job runs, it will attempt to export them all.
This timer runs a daily job that tries to export all the local, published datasets that haven't been exported yet, in all supported metadata formats, and cache the results on the filesystem. (Note that normally an export will happen automatically whenever a dataset is published; this scheduled job is there to catch any datasets for which that export did not succeed, for one reason or another.) Also, since this functionality was added in version 4.5, none of your datasets will have been exported yet if you are upgrading from an earlier version, so the first time this job runs, it will attempt to export them all.

This daily job will also update all the harvestable OAI sets configured on your server, adding new and/or newly published datasets or marking deaccessioned datasets as "deleted" in the corresponding sets as needed.
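
If you find a published dataset that this job appears to have missed, a hedged sketch of re-running the export for it by hand is shown below. It assumes the dataset export API endpoint and the ``ddi`` exporter name are available in your version of Dataverse, and uses a placeholder persistent identifier::

   # Assumption: the native API's dataset export endpoint is available in this
   # version of Dataverse; replace the persistent identifier with a real one.
   curl "http://localhost:8080/api/datasets/export?exporter=ddi&persistentId=doi:10.5072/FK2/EXAMPLE"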

@@ -47,4 +49,4 @@ This job is automatically scheduled to run at 2AM local time every night. If rea
Known Issues
------------

We've got several reports of an intermittent issue where the applicaiton fails to deploy with the error message "EJB Timer Service is not available." Please see the :doc:`/admin/troubleshooting` section of this guide for a workaround.
We've received several reports of an intermittent issue where the application fails to deploy with the error message "EJB Timer Service is not available." Please see the :doc:`/admin/troubleshooting` section of this guide for a workaround.
20 changes: 20 additions & 0 deletions doc/sphinx-guides/source/admin/troubleshooting.rst
@@ -38,3 +38,23 @@ Note that it may or may not work on your system, so it is provided as an example

.. literalinclude:: ../_static/util/clear_timer.sh

Timer not working
-----------------

Dataverse relies on EJB timers to perform scheduled tasks: harvesting from remote servers, updating the local OAI sets and running metadata exports. (See :doc:`timers` for details.) If these scheduled jobs are not running on your server, this may be the result of an incompatibility between the version of the PostgreSQL database you are using and the PostgreSQL JDBC driver in use by your instance of Glassfish. The symptoms:

If you are seeing the following in your server.log...

:fixedwidthplain:`Handling timeout on` ...

followed by an Exception stack trace with these lines in it:

:fixedwidthplain:`Internal Exception: java.io.StreamCorruptedException: invalid stream header` ...

:fixedwidthplain:`Exception Description: Could not deserialize object from byte array` ...


... it most likely means that a JDBC driver incompatibility is preventing the timer from working correctly.
Make sure you install the correct version of the driver. For example, if you are running version 9.3 of PostgreSQL, make sure you have the driver postgresql-9.3-1104.jdbc4.jar in your :fixedwidthplain:`<GLASSFISH FOLDER>/glassfish/lib` directory. The correct version of the driver can be downloaded `here <https://jdbc.postgresql.org/download.html>`_. If you have an older driver in glassfish/lib, remove it, replace it with the new version, and restart Glassfish. (You may need to remove the entire contents of :fixedwidthplain:`<GLASSFISH FOLDER>/glassfish/domains/domain1/generated` before you start Glassfish.)
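
If you want to confirm the mismatch, a minimal diagnostic sketch along these lines may help; the server.log path and the use of the ``postgres`` system account are assumptions based on a default installation::

   # Look for the symptom described above in the Glassfish server log
   # (path assumes the default domain1 layout):
   grep -E "Handling timeout on|StreamCorruptedException|Could not deserialize" \
       "<GLASSFISH FOLDER>/glassfish/domains/domain1/logs/server.log"

   # Check the version of the PostgreSQL server itself:
   sudo -u postgres psql -tc "SELECT version();"

   # ... and compare it against the JDBC driver(s) currently in Glassfish's lib directory:
   ls "<GLASSFISH FOLDER>"/glassfish/lib/postgresql*.jar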

16 changes: 10 additions & 6 deletions doc/sphinx-guides/source/user/data-exploration/worldmap.rst
@@ -9,7 +9,7 @@ WorldMap: Geospatial Data Exploration
Dataverse and WorldMap
======================

`WorldMap <http://worldmap.harvard.edu/>`_ is developed by the Center for Geographic Analysis (CGA) at Harvard and is open source software that helps researchers visualize and explore their data in maps. The WorldMap and Dataverse collaboration allows researchers to upload shapefiles or tabular files to Dataverse for long term storage and receive a persistent identifier (through DOI), then easily navigate into WorldMap to interact with the data and save to WorldMap as well.
`WorldMap <http://worldmap.harvard.edu/>`_ is developed by the Center for Geographic Analysis (CGA) at Harvard and is open source software that helps researchers visualize and explore their data in maps. The WorldMap and Dataverse collaboration allows researchers to upload shapefiles or tabular files to Dataverse for long term storage and receive a persistent identifier (through DOI), then easily navigate into WorldMap to interact with the data.

Note: WorldMap hosts their own `user guide <http://worldmap.harvard.edu/static/docs/WorldMap_Help_en.pdf>`_ that covers some of the same material as this page.

@@ -33,11 +33,15 @@ Once you have uploaded your .zip shapefile, a Map Data button will appear next t

To get started with visualizing your shapefile, click on the blue "Visualize on WorldMap" button in Geoconnect. It may take up to 45 seconds for the data to be sent to WorldMap and then back to Geoconnect.

Once this process has finished, you will be taken to a new page where you can style your map through Attribute, Classification Method, Number of Intervals, and Colors. Clicking "View on WorldMap" will open WorldMap in a new tab, allowing you to see how your map will be displayed there.
Once this process has finished, you will be taken to a new page where you can style your map through Attribute, Classification Method, Number of Intervals, and Colors. Clicking "Apply Changes" will send your map to both Dataverse and WorldMap, creating a preview of your map that will be visible on your file page and your dataset page.

After styling your map, you can either save it by clicking "Return to Dataverse" or delete it with the "Delete" button. If you decide to delete the map, it will no longer appear on WorldMap. Returning to Dataverse will send the styled map layer to both Dataverse and WorldMap. A preview of your map will now be visible on your file page and your dataset page.
Clicking "View on WorldMap" will open WorldMap in a new tab, allowing you to see how your map will be displayed there.

To replace your shapefile's map with a new one, simply click the Map Data button again.
You can delete your map with the "Delete" button. If you decide to delete the map, it will no longer appear on WorldMap, and your dataset in Dataverse will no longer display the map preview.

When you're satisfied with your map, you may click "Return to the Dataverse" to go back to Dataverse.

In the future, to replace your shapefile's map with a new one, simply click the Map Data button on the dataset or file page to return to the Geoconnect edit map page.

Mapping tabular files with Geoconnect
=====================================
@@ -121,9 +125,9 @@ Now that you have created your map:

- Dataverse will contain a preview of the map and links to the larger version on WorldMap.

The map editor (pictured above) provides a set of options you can use to style your map. The "Return to the Dataverse" button saves your map and brings you back to Dataverse. "View on WorldMap" takes you to the map's page on WorldMap, which offers additional views and options.
The map editor (pictured above) provides a set of options you can use to style your map. Clicking "Apply Changes" saves the current version of your map to Dataverse and WorldMap. The "Return to the Dataverse" button brings you back to Dataverse. "View on WorldMap" takes you to the map's page on WorldMap, which offers additional views and options.

If you'd like to make future changes to your map, you can return to the editor by clicking the "Map Data" button on your file.
If you'd like to make further changes to your map in the future, you can return to the editor by clicking the "Map Data" button on your file.

Removing your map
=================
11 changes: 5 additions & 6 deletions scripts/installer/install
@@ -155,15 +155,14 @@ my $API_URL = "http://localhost:8080/api";
# doesn't get paranoid)

my %POSTGRES_DRIVERS = (
# "8_4", "postgresql-8.3-603.jdbc4.jar",
"8_4", "postgresql-8.4-703.jdbc4.jar",
"9_0", "postgresql-9.0-802.jdbc4.jar",
"9_1", "postgresql-9.1-902.jdbc4.jar",
"9_2", "postgresql-9.1-902.jdbc4.jar",
"9_3", "postgresql-9.1-902.jdbc4.jar",
"9_4", "postgresql-9.1-902.jdbc4.jar",
"9_5", "postgresql-9.1-902.jdbc4.jar",
"9_6", "postgresql-9.1-902.jdbc4.jar"
"9_2", "postgresql-9.2-1004.jdbc4.jar",
"9_3", "postgresql-9.3-1104.jdbc4.jar",
"9_4", "postgresql-9.4.1212.jar",
"9_5", "postgresql-42.1.4.jar",
"9_6", "postgresql-42.1.4.jar"
);

# A few preliminary checks:
Binary file added scripts/installer/pgdriver/postgresql-42.1.4.jar
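
A hedged sketch of what this version-to-driver mapping amounts to in practice is shown below: pick the bundled jar that matches your PostgreSQL major version and place it in Glassfish's lib directory. The paths and the manual copy (normally the installer does this for you) are assumptions for illustration::

   # Determine the PostgreSQL server's major version, e.g. "9.6":
   PG_VERSION=$(sudo -u postgres psql -tAc "SHOW server_version;" | cut -d. -f1,2)

   # Pick the matching driver from the table above; entries for 8.4-9.1 are
   # omitted here for brevity. The jars ship under scripts/installer/pgdriver.
   case "$PG_VERSION" in
     9.2)     DRIVER=postgresql-9.2-1004.jdbc4.jar ;;
     9.3)     DRIVER=postgresql-9.3-1104.jdbc4.jar ;;
     9.4)     DRIVER=postgresql-9.4.1212.jar ;;
     9.5|9.6) DRIVER=postgresql-42.1.4.jar ;;
     *)       echo "No bundled driver listed for PostgreSQL $PG_VERSION"; exit 1 ;;
   esac

   cp "scripts/installer/pgdriver/$DRIVER" "<GLASSFISH FOLDER>/glassfish/lib/"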
Binary file not shown.
