Merge pull request #7297 from IQSS/develop
v5.1
kcondon authored Oct 5, 2020
2 parents 993d0a3 + a23548d commit 7a0eef0
Showing 65 changed files with 2,864 additions and 1,582 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -34,6 +34,7 @@ oauth-credentials.md

/src/main/webapp/oauth2/newAccount.html
scripts/api/setup-all.sh*
scripts/api/setup-all.*.log

# ctags generated tag file
tags
6 changes: 5 additions & 1 deletion conf/solr/7.7.2/schema_dv_mdb_copies.xml
@@ -133,9 +133,13 @@
<copyField source="studyAssayOtherMeasurmentType" dest="_text_" maxChars="3000"/>
<copyField source="studyAssayOtherOrganism" dest="_text_" maxChars="3000"/>
<copyField source="studyAssayPlatform" dest="_text_" maxChars="3000"/>
<copyField source="studyAssayOtherPlatform" dest="_text_" maxChars="3000"/>
<copyField source="studyAssayTechnologyType" dest="_text_" maxChars="3000"/>
<copyField source="studyAssayOtherTechnologyType" dest="_text_" maxChars="3000"/>
<copyField source="studyDesignType" dest="_text_" maxChars="3000"/>
<copyField source="studyOtherDesignType" dest="_text_" maxChars="3000"/>
<copyField source="studyFactorType" dest="_text_" maxChars="3000"/>
<copyField source="studyOtherFactorType" dest="_text_" maxChars="3000"/>
<copyField source="subject" dest="_text_" maxChars="3000"/>
<copyField source="subtitle" dest="_text_" maxChars="3000"/>
<copyField source="targetSampleActualSize" dest="_text_" maxChars="3000"/>
@@ -154,4 +158,4 @@
<copyField source="universe" dest="_text_" maxChars="3000"/>
<copyField source="weighting" dest="_text_" maxChars="3000"/>
<copyField source="westLongitude" dest="_text_" maxChars="3000"/>
</schema>
</schema>
6 changes: 5 additions & 1 deletion conf/solr/7.7.2/schema_dv_mdb_fields.xml
@@ -133,9 +133,13 @@
<field name="studyAssayOtherMeasurmentType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyAssayOtherOrganism" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyAssayPlatform" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyAssayOtherPlatform" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyAssayTechnologyType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyAssayOtherTechnologyType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyDesignType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyOtherDesignType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyFactorType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="studyOtherFactorType" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="subject" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="subtitle" type="text_en" multiValued="false" stored="true" indexed="true"/>
<field name="targetSampleActualSize" type="text_en" multiValued="false" stored="true" indexed="true"/>
@@ -154,4 +158,4 @@
<field name="universe" type="text_en" multiValued="true" stored="true" indexed="true"/>
<field name="weighting" type="text_en" multiValued="false" stored="true" indexed="true"/>
<field name="westLongitude" type="text_en" multiValued="true" stored="true" indexed="true"/>
</fields>
</fields>
8 changes: 5 additions & 3 deletions doc/release-notes/5.0-release-notes.md
@@ -302,13 +302,15 @@ Add the below JVM options beneath the -Ddataverse settings:

For production environments:

`/usr/local/payara5/bin/asadmin create-jvm-options "\-Ddoi.dataciterestapiurlstring=https://api.datacite.org"`
`/usr/local/payara5/bin/asadmin create-jvm-options "\-Ddoi.dataciterestapiurlstring=https\://api.datacite.org"`

For test environments:

`/usr/local/payara5/bin/asadmin create-jvm-options "\-Ddoi.dataciterestapiurlstring=https://api.test.datacite.org"`
`/usr/local/payara5/bin/asadmin create-jvm-options "\-Ddoi.dataciterestapiurlstring=https\://api.test.datacite.org"`

The JVM option `doi.mdcbaseurlstring` should be deleted if it was previously set.
The JVM option `doi.mdcbaseurlstring` should be deleted if it was previously set, for example:

`/usr/local/payara5/bin/asadmin delete-jvm-options "\-Ddoi.mdcbaseurlstring=https\://api.test.datacite.org"`

4. (Recommended for installations using DataCite) Pre-register DOIs

95 changes: 95 additions & 0 deletions doc/release-notes/5.1-release-notes.md
@@ -0,0 +1,95 @@
# Dataverse 5.1

This release brings new features, enhancements, and bug fixes to Dataverse. Thank you to all of the community members who contributed code, suggestions, bug reports, and other assistance across the project.

## Release Highlights

### Large File Upload for Installations Using AWS S3

The added support for multipart upload through the API and UI (Issue #6763) will allow files larger than 5 GB to be uploaded to Dataverse when an installation is running on AWS S3. Previously, only non-AWS S3 storage configurations would allow uploads larger than 5 GB.

### Dataset-Specific Stores

In previous releases, configuration options were added that allow each dataverse to have a specific store enabled. This release adds even more granularity, with the ability to set a dataset-level store.

## Major Use Cases

Newly-supported use cases in this release include:

- Users can now upload files larger than 5 GB on installations running AWS S3 (Issue #6763, PR #6995)
- Administrators will now be able to specify a store at the dataset level in addition to the Dataverse level (Issue #6872, PR #7272)
- Users will have their dataset's directory structure retained when uploading a dataset with shapefiles (Issue #6873, PR #7279)
- Users will now be able to download zip files through the experimental Zipper service when the set of downloaded files contains duplicate names (Issue [#80](https://github.com/IQSS/dataverse.harvard.edu/issues/80), PR #7276)
- Users will now be able to download zip files with the proper file structure through the experimental Zipper service (Issue #7255, PR #7258)
- Administrators will be able to use new APIs to keep the Solr index and the database in sync, allowing easier resolution of an issue in which stale search results would occasionally fail to load (Issue #4225, PR #7211)

## Notes for Dataverse Installation Administrators

### New API for setting a Dataset-level Store

- This release adds a new API for setting a dataset-specific store. Learn more in the Managing Dataverse and Datasets section of the [Admin Guide](http://guides.dataverse.org/en/5.1/admin/solr-search-index.html).
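
For example, a superuser can point a dataset at a specific store with a single call (this mirrors the curl example added to the Admin Guide in this release; `$SERVER`, `$dataset-id`, and `$storageDriverLabel` are placeholders):

`curl -H "X-Dataverse-key: $API_TOKEN" -X PUT -d $storageDriverLabel http://$SERVER/api/datasets/$dataset-id/storageDriver`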

### Multipart Upload Storage Monitoring, Recommended Use for Multipart Upload

Charges may be incurred for storage reserved for multipart uploads that are not completed or cancelled. Administrators may want to do periodic manual or automated checks for open multipart uploads. Learn more in the Big Data Support section of the [Developers Guide](http://guides.dataverse.org/en/5.1/developer/big-data-support.html).

While multipart uploads can support much larger files, and can have advantages in terms of robust transfer and speed, they are more complex than single part direct uploads. Administrators should consider taking advantage of the options to limit use of multipart uploads to specific users by using multiple stores and configuring access to stores with high file size limits to specific Dataverses (added in 4.20) or Datasets (added in this release).
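
As an illustrative sketch (not taken from the guides), an administrator with the AWS CLI installed could check for and clean up incomplete multipart uploads roughly as follows; `<BUCKET_NAME>`, `<OBJECT_KEY>`, and `<UPLOAD_ID>` are placeholders:

`aws s3api list-multipart-uploads --bucket <BUCKET_NAME>`

`aws s3api abort-multipart-upload --bucket <BUCKET_NAME> --key <OBJECT_KEY> --upload-id <UPLOAD_ID>`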

### New APIs for keeping Solr records in sync

This release adds new APIs to keep the Solr index and the DB in sync, allowing easier resolution of an issue that would occasionally cause search results to not load. Learn more in the Solr section of the [Admin Guide](http://guides.dataverse.org/en/5.1/admin/solr-search-index.html).
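
A quick sketch of the two new endpoints (they appear in the Solr section of the Admin Guide changes later in this diff):

`curl http://localhost:8080/api/admin/index/status`

`curl http://localhost:8080/api/admin/index/clear-orphans`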

### Documentation for Purging the Ingest Queue

At times, it may be necessary to cancel long-running Ingest jobs in the interest of system stability. The Troubleshooting section of the [Admin Guide](http://guides.dataverse.org/en/5.1/admin/) now has specific steps.
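
For example, the purge boils down to two `imqcmd` calls (shown in full in the Troubleshooting guide changes below; the paths assume a default Payara 5 install):

`/usr/local/payara5/mq/bin/imqcmd -u admin query dst -t q -n DataverseIngest`

`/usr/local/payara5/mq/bin/imqcmd -u admin purge dst -t q -n DataverseIngest`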

### Biomedical Metadata Block Updated

The Life Science Metadata block (biomedical.tsv) was updated. "Other Design Type", "Other Factor Type", "Other Technology Type", "Other Technology Platform" boxes were added. See the "Additional Upgrade Steps" below if you use this in your installation.

## Notes for Tool Developers and Integrators

### Spaces in File Names

Dataverse Installations using S3 storage will no longer replace spaces in file names of downloaded files with the + character. If your tool or integration has any special handling around this, you may need to make further adjustments to maintain backwards compatibility while also supporting Dataverse installations on 5.1+.

## Complete List of Changes

For the complete list of code changes in this release, see the [5.1 Milestone](https://github.com/IQSS/dataverse/milestone/90?closed=1) in GitHub.

For help with upgrading, installing, or general questions please post to the [Dataverse Google Group](https://groups.google.com/forum/#!forum/dataverse-community) or email support@dataverse.org.

## Installation

If this is a new installation, please see our [Installation Guide](http://guides.dataverse.org/en/5.1/installation/)

## Upgrade Instructions

0. These instructions assume that you've already successfully upgraded from Dataverse 4.x to Dataverse 5 following the instructions in the [Dataverse 5 Release Notes](https://github.com/IQSS/dataverse/releases/tag/v5.0).

1. Undeploy the previous version.

<payara install path>/payara/bin/asadmin list-applications
<payara install path>/payara/bin/asadmin undeploy dataverse

2. Stop Payara, remove the generated directory, and start Payara again.

- service payara stop
- remove the generated directory: rm -rf <payara install path>payara/payara/domains/domain1/generated
- service payara start

3. Deploy this version.
<payara install path>/payara/bin/asadmin deploy <path>dataverse-5.1.war

4. Restart payara

### Additional Upgrade Steps

1. Update Biomedical Metadata Block (if used), Reload Solr, ReExportAll

`wget https://github.com/IQSS/dataverse/releases/download/5.1/biomedical.tsv`
`curl http://localhost:8080/api/admin/datasetfield/load -X POST --data-binary @biomedical.tsv -H "Content-type: text/tab-separated-values"`
- copy schema_dv_mdb_fields.xml and schema_dv_mdb_copies.xml to the Solr server, for example into the /usr/local/solr/solr-7.7.2/server/solr/collection1/conf/ directory (see the example commands below)
- reload Solr, for example, http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1
- Run ReExportall to update JSON Exports
<http://guides.dataverse.org/en/5.1/admin/metadataexport.html?highlight=export#batch-exports-through-the-api>
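
For example, assuming the updated schema_dv_mdb_fields.xml and schema_dv_mdb_copies.xml from the release (they live under conf/solr/7.7.2/ in the source tree) are in the current directory, the copy-and-reload portion of this step might look like the following; adjust the Solr path and core name to your installation:

`cp schema_dv_mdb_fields.xml schema_dv_mdb_copies.xml /usr/local/solr/solr-7.7.2/server/solr/collection1/conf/`

`curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1"`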
22 changes: 22 additions & 0 deletions doc/sphinx-guides/source/admin/dataverses-datasets.rst
@@ -59,6 +59,8 @@ The available drivers can be listed with::

curl -H "X-Dataverse-key: $API_TOKEN" http://$SERVER/api/admin/dataverse/storageDrivers

(Individual datasets can be configured to use specific file stores as well. See the "Datasets" section below.)


Datasets
--------
@@ -130,3 +132,23 @@ Diagnose Constraint Violations Issues in Datasets

To identify invalid data values in specific datasets (if, for example, an attempt to edit a dataset results in a ConstraintViolationException in the server log), or to check all the datasets in the Dataverse for constraint violations, see :ref:`Dataset Validation <dataset-validation-api>` in the :doc:`/api/native-api` section of the User Guide.

Configure a Dataset to store all new files in a specific file store
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Configure a dataset to use a specific file store (this API can only be used by a superuser) ::

    curl -H "X-Dataverse-key: $API_TOKEN" -X PUT -d $storageDriverLabel http://$SERVER/api/datasets/$dataset-id/storageDriver

The current driver can be seen using::

    curl http://$SERVER/api/datasets/$dataset-id/storageDriver

It can be reset to the default store as follows (only a superuser can do this) ::

    curl -H "X-Dataverse-key: $API_TOKEN" -X DELETE http://$SERVER/api/datasets/$dataset-id/storageDriver

The available drivers can be listed with::

    curl -H "X-Dataverse-key: $API_TOKEN" http://$SERVER/api/admin/dataverse/storageDrivers

6 changes: 2 additions & 4 deletions doc/sphinx-guides/source/admin/mail-groups.rst
@@ -33,11 +33,9 @@ To list just that Mail Domain Group, you can include the alias in the curl comma
Creating a Mail Domain Group
----------------------------

Mail Domain Groups can be created with a simple JSON file:
Mail Domain Groups can be created with a simple JSON file such as domainGroup1.json:

.. code-block:: json
   :caption: domainGroup1.json
   :name: domainGroup1.json

   {
     "name": "Users from @example.org",
@@ -60,7 +58,7 @@ To load it into your Dataverse installation, either use a ``POST`` or ``PUT`` re
Updating a Mail Domain Group
----------------------------

Editing a group is done by replacing it. Grab your group definition like the :ref:`above example <domainGroup1.json>`,
Editing a group is done by replacing it. Grab your group definition like the domainGroup1.json example above,
change it as you like and ``PUT`` it into your installation:

``curl -X PUT -H 'Content-type: application/json' http://localhost:8080/api/admin/groups/domain/domainGroup1 --upload-file domainGroup1.json``
14 changes: 13 additions & 1 deletion doc/sphinx-guides/source/admin/solr-search-index.rst
@@ -14,6 +14,18 @@ There are two ways to perform a full reindex of the Dataverse search index. Star
Clear and Reindex
+++++++++++++++++


Index and Database Consistency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Get a list of all database objects that are missing in Solr, and Solr documents that are missing in the database:

``curl http://localhost:8080/api/admin/index/status``

Remove all Solr documents that are orphaned (i.e., not associated with objects in the database):

``curl http://localhost:8080/api/admin/index/clear-orphans``

Clearing Data from Solr
~~~~~~~~~~~~~~~~~~~~~~~

@@ -81,4 +93,4 @@ If you suspect something isn't indexed properly in solr, you may bypass the Data

``curl "http://localhost:8983/solr/collection1/select?q=dsPersistentId:doi:10.15139/S3/HFV0AO"``

to see the JSON you were hopefully expecting to see passed along to Dataverse.
to see the JSON you were hopefully expecting to see passed along to Dataverse.
20 changes: 20 additions & 0 deletions doc/sphinx-guides/source/admin/troubleshooting.rst
@@ -43,6 +43,26 @@ A User Needs Their Account to Be Converted From Institutional (Shibboleth), ORCI

See :ref:`converting-shibboleth-users-to-local` and :ref:`converting-oauth-users-to-local`.

.. _troubleshooting-ingest:

Ingest
------

Long-Running Ingest Jobs Have Exhausted System Resources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ingest is both CPU- and memory-intensive, and depending on your system resources and the size and format of tabular data files uploaded, may render Dataverse unresponsive or nearly inoperable. It is possible to cancel these jobs by purging the ingest queue.

``/usr/local/payara5/mq/bin/imqcmd -u admin query dst -t q -n DataverseIngest`` will query the DataverseIngest destination. The password, unless you have changed it, matches the username.

``/usr/local/payara5/mq/bin/imqcmd -u admin purge dst -t q -n DataverseIngest`` will purge the DataverseIngest queue, and prompt for your confirmation.

Finally, list destinations to verify that the purge was successful::

``/usr/local/payara5/mq/bin/imqcmd -u admin list dst``

If you are still running Glassfish, substitute glassfish4 for payara5 above. If you have installed Dataverse in some other location, adjust the above paths accordingly.

.. _troubleshooting-payara:

Payara
5 changes: 5 additions & 0 deletions doc/sphinx-guides/source/api/native-api.rst
@@ -1654,6 +1654,11 @@ The fully expanded example above (without environment variables) looks like this
Calling the destroy endpoint is permanent and irreversible. It will remove the dataset and its datafiles, then re-index the parent dataverse in Solr. This endpoint requires the API token of a superuser.

Configure a Dataset to Use a Specific File Store
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

``/api/datasets/$dataset-id/storageDriver`` can be used to check, configure or reset the designated file store (storage driver) for a dataset. Please see the :doc:`/admin/dataverses-datasets` section of the guide for more information on this API.

Files
-----
4 changes: 2 additions & 2 deletions doc/sphinx-guides/source/conf.py
@@ -65,9 +65,9 @@
# built documents.
#
# The short X.Y version.
version = '5.0'
version = '5.1'
# The full version, including alpha/beta/rc tags.
release = '5.0'
release = '5.1'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
18 changes: 13 additions & 5 deletions doc/sphinx-guides/source/developers/big-data-support.rst
@@ -18,10 +18,17 @@ This option can handle files >40GB and could be appropriate for files up to a TB
To configure these options, an administrator must set two JVM options for the Dataverse server using the same process as for other configuration options:

``./asadmin create-jvm-options "-Ddataverse.files.<id>.download-redirect=true"``

``./asadmin create-jvm-options "-Ddataverse.files.<id>.upload-redirect=true"``


With multiple stores configured, it is possible to configure one S3 store with direct upload and/or download to support large files (in general or for specific dataverses) while configuring only direct download, or no direct access for another store.
With multiple stores configured, it is possible to configure one S3 store with direct upload and/or download to support large files (in general or for specific dataverses) while configuring only direct download, or no direct access for another store.

The direct upload option now switches between uploading the file in one piece (up to 1 GB by default) and sending it as multiple parts. The default can be changed by setting:

``./asadmin create-jvm-options "-Ddataverse.files.<id>.min-part-size=<size in bytes>"``

For AWS, the minimum allowed part size is 5*1024*1024 bytes and the maximum is 5 GB (5*1024**3). Other providers may set different limits.
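
For instance, to switch to multipart transfers above roughly 100 MB (104857600 bytes, a value chosen here purely for illustration), one could run:

``./asadmin create-jvm-options "-Ddataverse.files.<id>.min-part-size=104857600"``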

It is also possible to set file upload size limits per store. See the :MaxFileUploadSizeInBytes setting described in the :doc:`/installation/config` guide.

@@ -30,8 +37,8 @@ At present, one potential drawback for direct-upload is that files are only part
``./asadmin create-jvm-options "-Ddataverse.files.<id>.ingestsizelimit=<size in bytes>"``


**IMPORTANT:** One additional step that is required to enable direct download to work with previewers is to allow cross site (CORS) requests on your S3 store.
The example below shows how to enable the minimum needed CORS rules on a bucket using the AWS CLI command line tool. Note that you may need to add more methods and/or locations, if you also need to support certain previewers and external tools.
**IMPORTANT:** One additional step that is required to enable direct uploads via Dataverse and for direct download to work with previewers is to allow cross site (CORS) requests on your S3 store.
The example below shows how to enable CORS rules (to support upload and download) on a bucket using the AWS CLI command line tool. Note that you may want to limit the AllowedOrigins and/or AllowedHeaders further. https://github.com/GlobalDataverseCommunityConsortium/dataverse-previewers/wiki/Using-Previewers-with-download-redirects-from-S3 has some additional information about doing this.

``aws s3api put-bucket-cors --bucket <BUCKET_NAME> --cors-configuration file://cors.json``

@@ -42,9 +49,10 @@ with the contents of the file cors.json as follows:
{
"CORSRules": [
{
"AllowedOrigins": ["https://<DATAVERSE SERVER>"],
"AllowedOrigins": ["*"],
"AllowedHeaders": ["*"],
"AllowedMethods": ["PUT", "GET"]
"AllowedMethods": ["PUT", "GET"],
"ExposeHeaders": ["ETag"]
}
]
}