GDCC/8605-add-archival-status-support #8696

Merged
Commits (25)
de62791
Archival status success/pending/failure/null support
qqmyers May 13, 2022
8c82c61
flyway to update existing
qqmyers May 13, 2022
b354bc3
fix typos/mistakes
qqmyers May 13, 2022
9c9ac65
basic status logging in existing archivers
qqmyers May 13, 2022
221ca0b
API docs
qqmyers May 13, 2022
8902d9a
Merge remote-tracking branch 'IQSS/develop' into GDCC/8605-add-archiv…
qqmyers May 24, 2022
a37922b
Merge remote-tracking branch 'IQSS/develop' into GDCC/8605-add-archiv…
qqmyers May 26, 2022
cefa12c
rename flyway
qqmyers May 26, 2022
e1c62af
Merge remote-tracking branch 'IQSS/develop' into GDCC/8605-add-archiv…
qqmyers May 27, 2022
d2bf93c
Merge remote-tracking branch 'IQSS/develop' into GDCC/8605-add-archiv…
qqmyers Jun 26, 2022
ae1c97c
Merge remote-tracking branch 'IQSS/develop' into GDCC/8605-add-archiv…
qqmyers Jul 14, 2022
d3a7b04
update flyway naming
qqmyers Jul 14, 2022
5295bcd
Merge remote-tracking branch 'IQSS/develop' into GDCC/8605-add-archiv…
qqmyers Jul 15, 2022
9223e7d
updates per review
qqmyers Jul 15, 2022
f5396d8
swap native update
qqmyers Jul 15, 2022
986f9ff
Merge remote-tracking branch 'IQSS/develop' into
qqmyers Jul 18, 2022
8750e62
missed logger.fine
qqmyers Jul 18, 2022
5d617f0
test tweak
qqmyers Jul 19, 2022
8fcb59c
fix jsonpath
qqmyers Jul 19, 2022
d2d817e
fix URLs
qqmyers Jul 19, 2022
6a70d42
add content type on set
qqmyers Jul 19, 2022
e498417
application/json
qqmyers Jul 19, 2022
8a99685
in docs, show verbs for clarity, s/Json/JSON/ #8605
pdurbin Jul 19, 2022
7362e1c
lower logging #8605
pdurbin Jul 19, 2022
7410c5b
format urls in docs
qqmyers Jul 21, 2022
55 changes: 55 additions & 0 deletions doc/sphinx-guides/source/api/native-api.rst
@@ -1873,6 +1873,61 @@ The API call requires a JSON body that includes the list of the fileIds that the
export JSON='{"fileIds":[300,301]}'

curl -H "X-Dataverse-key: $API_TOKEN" -H "Content-Type:application/json" "$SERVER_URL/api/datasets/:persistentId/files/actions/:unset-embargo?persistentId=$PERSISTENT_IDENTIFIER" -d "$JSON"


Get the Archival Status of a Dataset By Version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Archiving is an optional feature that may be configured for a Dataverse instance. When enabled, this API call can be used to retrieve the archival status of a dataset version. Note that this call requires "superuser" credentials.

/api/datasets/submitDatasetVersionToArchive/$dataset-id/$version/status returns the archival status of the specified dataset version.

The response is a JSON object that contains a "status", which may be "success", "pending", or "failure", and a "message", which is archive-system specific. For "success", the message should provide an identifier or link to the archival copy. For example:

.. code-block:: bash

export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export SERVER_URL=https://demo.dataverse.org
export PERSISTENT_IDENTIFIER=doi:10.5072/FK2/7U7YBV
export VERSION=1.0

curl -H "X-Dataverse-key: $API_TOKEN" -H "Accept:application/json" "$SERVER_URL/api/datasets/submitDatasetVersionToArchive/$VERSION/status?persistentId=$PERSISTENT_IDENTIFIER"
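
An illustrative success response is shown below; the "message" value is a placeholder, since its format is archive-system specific, and the result appears inside Dataverse's usual {"status":"OK","data":...} envelope:

.. code-block:: json

  {
    "status": "OK",
    "data": {
      "status": "success",
      "message": "https://archive.example.org/bag-for-version-1.0"
    }
  }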
Member:

How come :persistentId isn't in the URL? Are database IDs supported as well as PIDs? They should be, like all other native API endpoints.


Set the Archival Status of a Dataset By Version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Archiving is an optional feature that may be configured for a Dataverse instance. When enabled, this API call can be used to set the archival status of a dataset version. Note that this call is intended to be used by the archival system and requires "superuser" credentials.
Member:

You have to give DRS (or whatever archival system) a superuser token? Hmm, seems a bit suboptimal but I suppose anything else is not an easy fix.

Member Author:

Yeah - this is an example of info that shouldn't be editable by a user who can touch that dataset (as it represents the state of an external archiving system) yet doesn't seem to fit being in /api/admin (limited to localhost usually which would make it inaccessible, or with unblock-key access would allow the archiver to make all the other admin calls). It may be that signed URLs would help here, e.g. giving the archiver URLs to set archival status for a limited time.


/api/datasets/submitDatasetVersionToArchive/$dataset-id/$version/status sets the archival status of the specified dataset version.

The body is a JSON object that must contain a "status", which may be "success", "pending", or "failure", and a "message", which is archive-system specific. For "success", the message should provide an identifier or link to the archival copy. For example:

.. code-block:: bash

export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export SERVER_URL=https://demo.dataverse.org
export PERSISTENT_IDENTIFIER=doi:10.5072/FK2/7U7YBV
export VERSION=1.0
export JSON='{"status":"failure","message":"Something went wrong"}'

curl -H "X-Dataverse-key: $API_TOKEN" -H "Content-Type:application/json" -X PUT "$SERVER_URL/api/datasets/submitDatasetVersionToArchive/$VERSION/status?persistentId=$PERSISTENT_IDENTIFIER" -d "$JSON"
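
Assuming Dataverse's usual response envelope, a successful update returns something like:

.. code-block:: json

  {
    "status": "OK",
    "data": {
      "message": "Status updated"
    }
  }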

Delete the Archival Status of a Dataset By Version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Archiving is an optional feature that may be configured for a Dataverse instance. When enabled, this API call can be used to delete the archival status of a dataset version. Note that this call is intended to be used by the archival system and requires "superuser" credentials.

/api/datasets/submitDatasetVersionToArchive/$dataset-id/$version/status deletes the archival status of the specified dataset version.

.. code-block:: bash

export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export SERVER_URL=https://demo.dataverse.org
export PERSISTENT_IDENTIFIER=doi:10.5072/FK2/7U7YBV
export VERSION=1.0

curl -H "X-Dataverse-key: $API_TOKEN" -H "Content-Type:application/json" -X DELETE "$SERVER_URL/api/datasets/submitDatasetVersionToArchive/$VERSION/status?persistentId=$PERSISTENT_IDENTIFIER"
Member:

Do we need application/json on DELETE? Less is more, right?



Files
-----
43 changes: 42 additions & 1 deletion src/main/java/edu/harvard/iq/dataverse/DatasetVersion.java
@@ -6,11 +6,11 @@
import edu.harvard.iq.dataverse.branding.BrandingUtil;
import edu.harvard.iq.dataverse.dataset.DatasetUtil;
import edu.harvard.iq.dataverse.license.License;
import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.FileUtil;
import edu.harvard.iq.dataverse.util.StringUtil;
import edu.harvard.iq.dataverse.util.SystemConfig;
import edu.harvard.iq.dataverse.util.DateUtil;
import edu.harvard.iq.dataverse.util.json.JsonUtil;
import edu.harvard.iq.dataverse.util.json.NullSafeJsonBuilder;
import edu.harvard.iq.dataverse.workflows.WorkflowComment;
import java.io.Serializable;
@@ -27,6 +27,7 @@
import javax.json.Json;
import javax.json.JsonArray;
import javax.json.JsonArrayBuilder;
import javax.json.JsonObject;
import javax.json.JsonObjectBuilder;
import javax.persistence.CascadeType;
import javax.persistence.Column;
@@ -94,6 +95,14 @@ public enum VersionState {
public static final int ARCHIVE_NOTE_MAX_LENGTH = 1000;
public static final int VERSION_NOTE_MAX_LENGTH = 1000;

//Archival copies: Status message required components
public static final String STATUS = "status";
public static final String MESSAGE = "message";
//Archival Copies: Allowed Statuses
public static final String PENDING = "pending";
public static final String SUCCESS = "success";
public static final String FAILURE = "failure";
pdurbin marked this conversation as resolved.

@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@@ -180,6 +189,8 @@ public enum VersionState {
@Transient
private DatasetVersionDifference dvd;

@Transient
private JsonObject archivalStatus;

public Long getId() {
return this.id;
@@ -319,9 +330,39 @@ public void setArchiveNote(String note) {
public String getArchivalCopyLocation() {
return archivalCopyLocation;
pdurbin marked this conversation as resolved.
}

public String getArchivalCopyLocationStatus() {
populateArchivalStatus(false);

if (archivalStatus != null) {
return archivalStatus.getString(STATUS);
}
return null;
}

public String getArchivalCopyLocationMessage() {
populateArchivalStatus(false);
if (archivalStatus != null) {
return archivalStatus.getString(MESSAGE);
}
return null;
}

private void populateArchivalStatus(boolean force) {
if (archivalStatus == null || force) {
if (archivalCopyLocation != null) {
try {
archivalStatus = JsonUtil.getJsonObject(archivalCopyLocation);
} catch (Exception e) {
logger.warning("DatasetVersion id: " + id + " has a non-JsonObject value, parsing error: " + e.getMessage());
logger.info(archivalCopyLocation);
pdurbin marked this conversation as resolved.
}
}
}
}

public void setArchivalCopyLocation(String location) {
this.archivalCopyLocation = location;
populateArchivalStatus(true);
}
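
With this change, archivalCopyLocation stores a small JSON object rather than a bare location URL. A value written by one of the archivers might look like the following (illustrative, placeholder URL):

{
"status": "success",
"message": "https://archive.example.org/copy-of-version-1.0"
}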

public String getDeaccessionLink() {
src/main/java/edu/harvard/iq/dataverse/DatasetVersionServiceBean.java
@@ -1187,4 +1187,12 @@ private DatasetVersion getPreviousVersionWithUnf(DatasetVersion datasetVersion)
return null;
}

/**
* Merges the passed DatasetVersion into the persistence context.
* @param ver the DatasetVersion whose new state we want to persist.
* @return the managed entity representing {@code ver}.
*/
public DatasetVersion merge(DatasetVersion ver) {
return em.merge(ver);
}
Comment on lines +1195 to +1197
Member:

I'm surprised this merge method doesn't already exist on DatasetVersionServiceBean.java. Is it because most changes to versions happen through commands? Is it because once a version is published there's no need to go back and change the version (except for deaccessioning, I guess, which is a command)? I don't think it's bad to add this method but I wonder why we're only adding it now.

Member Author:

Yeah - I think everything uses a Command of some sort. I was also surprised that it didn't exist as the dataset service has a merge() and the file service has several methods that don't do much more than a merge.

} // end class
107 changes: 107 additions & 0 deletions src/main/java/edu/harvard/iq/dataverse/api/Datasets.java
@@ -87,6 +87,7 @@
import edu.harvard.iq.dataverse.util.json.JSONLDUtil;
import edu.harvard.iq.dataverse.util.json.JsonLDTerm;
import edu.harvard.iq.dataverse.util.json.JsonParseException;
import edu.harvard.iq.dataverse.util.json.JsonUtil;
import edu.harvard.iq.dataverse.search.IndexServiceBean;
import static edu.harvard.iq.dataverse.util.json.JsonPrinter.*;
import static edu.harvard.iq.dataverse.util.json.NullSafeJsonBuilder.jsonObjectBuilder;
@@ -216,6 +217,9 @@ public class Datasets extends AbstractApiBean {
@Inject
DataverseRoleServiceBean dataverseRoleService;

@EJB
DatasetVersionServiceBean datasetversionService;

/**
* Used to consolidate the way we parse and handle dataset versions.
* @param <T>
@@ -3282,4 +3286,107 @@ public Response getCurationStates() throws WrappedResponse {
csvSB.append("\n");
return ok(csvSB.toString(), MediaType.valueOf(FileUtil.MIME_TYPE_CSV), "datasets.status.csv");
}

//APIs to manage archival status

@GET
@Produces(MediaType.APPLICATION_JSON)
@Path("/submitDatasetVersionToArchive/{id}/{version}/status")
Member:

submitDatasetVersionToArchive is a weird name. submitDataVersionToArchive (Data instead of Dataset) is under /api/admin and documented under installation/config.html

Member Author:

Yes. So far it ~mirrors the /api/admin/submitDatasetVersionToArchive call (name changed to say 'Dataset' in #8610 which hasn't merged yet), which seemed reasonable when it was a single call. With the status calls, I initially had them in /api/admin as well, but eventually decided they should move to /api/datasets (see the comment about superuser being required on those). With that, they could be renamed - e.g. to /api/datasets/<id>/<version>/archivalStatus .

Member:

I like the new name ending with /archivalStatus. Thanks.

public Response getDatasetVersionToArchiveStatus(@PathParam("id") String dsid,
@PathParam("version") String versionNumber) {

try {
AuthenticatedUser au = findAuthenticatedUserOrDie();
if (!au.isSuperuser()) {
return error(Response.Status.FORBIDDEN, "Superusers only.");
}
Dataset ds = findDatasetOrDie(dsid);

DatasetVersion dv = datasetversionService.findByFriendlyVersionNumber(ds.getId(), versionNumber);
Member:

Any reason not to use getDatasetVersionOrDie here (and in the other two calls to findByFriendlyVersionNumber in this PR)?

Member Author:

Not sure I saw it but looking now, getDatasetVersionOrDie doesn't support the friendlyVersionNumber syntax which is a ~requirement here (that's the convention used in the Bag naming and metadata that the archiver gets). I can go ahead and add parsing for that which would have the presumably useful side effect of letting other datasetversion api calls support the friendly version number as well.

Member:

It should. I'm seeing handleSpecific(long major, long minor). It's used by https://guides.dataverse.org/en/5.11/api/native-api.html#get-version-of-a-dataset which has a "friendly" example of "1.0".

Member Author:

Yep - you're right. I missed the string parsing in handleVersion(). I'll update the PR to use it.
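
For context, a "friendly" version number like "1.0" is just major.minor; a minimal sketch of the parsing involved, assuming the handleSpecific(long major, long minor) hook mentioned above (hypothetical helper, not the actual handleVersion code):

static long[] parseFriendlyVersion(String friendly) {
// Split "1.0" into its major and minor components.
String[] parts = friendly.split("\\.");
return new long[] { Long.parseLong(parts[0]), Long.parseLong(parts[1]) };
}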

Member Author:

Hmm - calls to this are counted with MakeDataCounts. I guess since these are API calls they should count? (although they are clearly system-level interactions and not end-user interaction with the data). In any case, I went ahead for now.

Member:

I dunno. I'd leave this out of Make Data Count. Like you said, these are systems setting and retrieving archival status messages. The spirit of Make Data Count is views/investigations and downloads/requests. People and machines looking at data.

if (dv.getArchivalCopyLocation() == null) {
return error(Status.NO_CONTENT, "This dataset version has not been archived");
} else {
JsonObject status = JsonUtil.getJsonObject(dv.getArchivalCopyLocation());
return ok(status);
}
} catch (WrappedResponse wr) {
return wr.getResponse();
}
}

@PUT
@Consumes(MediaType.APPLICATION_JSON)
@Path("/submitDatasetVersionToArchive/{id}/{version}/status")
public Response setDatasetVersionToArchiveStatus(@PathParam("id") String dsid,
@PathParam("version") String versionNumber, JsonObject update) {

logger.info(JsonUtil.prettyPrint(update));
pdurbin marked this conversation as resolved.
try {
AuthenticatedUser au = findAuthenticatedUserOrDie();

if (!au.isSuperuser()) {
return error(Response.Status.FORBIDDEN, "Superusers only.");
}
} catch (WrappedResponse wr) {
return wr.getResponse();
}
if (update.containsKey(DatasetVersion.STATUS)
&& update.containsKey(DatasetVersion.MESSAGE)) {
String status = update.getString(DatasetVersion.STATUS);
if (status.equals(DatasetVersion.PENDING)
|| status.equals(DatasetVersion.FAILURE)
|| status.equals(DatasetVersion.SUCCESS)) {
pdurbin marked this conversation as resolved.

try {
Dataset ds;

ds = findDatasetOrDie(dsid);

DatasetVersion dv = datasetversionService.findByFriendlyVersionNumber(ds.getId(), versionNumber);
if (dv == null) {
return error(Status.NOT_FOUND, "Dataset version not found");
}

dv.setArchivalCopyLocation(JsonUtil.prettyPrint(update));
dv = datasetversionService.merge(dv);
logger.info("location now: " + dv.getArchivalCopyLocation());
logger.info("status now: " + dv.getArchivalCopyLocationStatus());
logger.info("message now: " + dv.getArchivalCopyLocationMessage());

return ok("Status updated");

} catch (WrappedResponse wr) {
return wr.getResponse();
}
}
}
return error(Status.BAD_REQUEST, "Unacceptable status format");
}

@DELETE
@Produces(MediaType.APPLICATION_JSON)
@Path("/submitDatasetVersionToArchive/{id}/{version}/status")
public Response deleteDatasetVersionToArchiveStatus(@PathParam("id") String dsid,
@PathParam("version") String versionNumber) {

try {
AuthenticatedUser au = findAuthenticatedUserOrDie();
if (!au.isSuperuser()) {
return error(Response.Status.FORBIDDEN, "Superusers only.");
}
Dataset ds = findDatasetOrDie(dsid);

DatasetVersion dv = datasetversionService.findByFriendlyVersionNumber(ds.getId(), versionNumber);
if (dv == null) {
return error(Status.NOT_FOUND, "Dataset version not found");
}
dv.setArchivalCopyLocation(null);
dv = datasetversionService.merge(dv);

return ok("Status deleted");

} catch (WrappedResponse wr) {
return wr.getResponse();
}
}
}
src/main/java/edu/harvard/iq/dataverse/engine/command/impl/DuraCloudSubmitToArchiveCommand.java
@@ -25,6 +25,9 @@
import java.util.Map;
import java.util.logging.Logger;

import javax.json.Json;
import javax.json.JsonObjectBuilder;

import org.apache.commons.codec.binary.Hex;
import org.duracloud.client.ContentStore;
import org.duracloud.client.ContentStoreManager;
@@ -67,6 +70,11 @@ public WorkflowStepResult performArchiveSubmission(DatasetVersion dv, ApiToken t
.replace('.', '-').toLowerCase();

ContentStore store;
//Set a failure status that will be updated if we succeed
JsonObjectBuilder statusObject = Json.createObjectBuilder();
statusObject.add(DatasetVersion.STATUS, DatasetVersion.FAILURE);
statusObject.add(DatasetVersion.MESSAGE, "Bag not transferred");

try {
/*
* If there is a failure in creating a space, it is likely that a prior version
@@ -134,6 +142,7 @@ public void run() {
bagger.generateBag(out);
} catch (Exception e) {
logger.severe("Error creating bag: " + e.getMessage());
statusObject.add(DatasetVersion.MESSAGE, "Could not create bag");
// TODO Auto-generated catch block
e.printStackTrace();
throw new RuntimeException("Error creating bag: " + e.getMessage());
@@ -173,7 +182,9 @@ public void run() {
sb.append("/duradmin/spaces/sm/");
sb.append(store.getStoreId());
sb.append("/" + spaceName + "/" + fileName);
dv.setArchivalCopyLocation(sb.toString());
statusObject.add(DatasetVersion.STATUS, DatasetVersion.SUCCESS);
statusObject.add(DatasetVersion.MESSAGE, sb.toString());

logger.fine("DuraCloud Submission step complete: " + sb.toString());
} catch (ContentStoreException | IOException e) {
// TODO Auto-generated catch block
@@ -200,6 +211,9 @@ public void run() {
} catch (NoSuchAlgorithmException e) {
logger.severe("MD5 MessageDigest not available!");
}
finally {
dv.setArchivalCopyLocation(statusObject.build().toString());
}
} else {
logger.warning("DuraCloud Submision Workflow aborted: Dataset locked for finalizePublication, or because file validation failed");
return new Failure("Dataset locked");
src/main/java/edu/harvard/iq/dataverse/engine/command/impl/GoogleCloudSubmitToArchiveCommand.java
@@ -28,6 +28,9 @@
import java.util.Map;
import java.util.logging.Logger;

import javax.json.Json;
import javax.json.JsonObjectBuilder;

import org.apache.commons.codec.binary.Hex;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.storage.Blob;
@@ -54,6 +57,11 @@ public WorkflowStepResult performArchiveSubmission(DatasetVersion dv, ApiToken t
logger.fine("Project: " + projectName + " Bucket: " + bucketName);
if (bucketName != null && projectName != null) {
Storage storage;
//Set a failure status that will be updated if we succeed
JsonObjectBuilder statusObject = Json.createObjectBuilder();
statusObject.add(DatasetVersion.STATUS, DatasetVersion.FAILURE);
statusObject.add(DatasetVersion.MESSAGE, "Bag not transferred");

try {
FileInputStream fis = new FileInputStream(System.getProperty("dataverse.files.directory") + System.getProperty("file.separator")+ "googlecloudkey.json");
storage = StorageOptions.newBuilder()
@@ -68,7 +76,7 @@ public WorkflowStepResult performArchiveSubmission(DatasetVersion dv, ApiToken t

String spaceName = dataset.getGlobalId().asString().replace(':', '-').replace('/', '-')
.replace('.', '-').toLowerCase();

DataCitation dc = new DataCitation(dv);
Map<String, String> metadata = dc.getDataCiteMetadata();
String dataciteXml = DOIDataCiteRegisterService.getMetadataFromDvObject(
@@ -125,6 +133,7 @@ public void run() {
bagger.setAuthenticationKey(token.getTokenString());
bagger.generateBag(out);
} catch (Exception e) {
statusObject.add(DatasetVersion.MESSAGE, "Could not create bag");
logger.severe("Error creating bag: " + e.getMessage());
// TODO Auto-generated catch block
e.printStackTrace();
@@ -203,7 +212,9 @@

StringBuffer sb = new StringBuffer("https://console.cloud.google.com/storage/browser/");
sb.append(blobIdString);
dv.setArchivalCopyLocation(sb.toString());
statusObject.add(DatasetVersion.STATUS, DatasetVersion.SUCCESS);
statusObject.add(DatasetVersion.MESSAGE, sb.toString());

} catch (RuntimeException rte) {
logger.severe("Error creating datacite xml file during GoogleCloud Archiving: " + rte.getMessage());
return new Failure("Error in generating datacite.xml file",
@@ -219,6 +230,8 @@ public void run() {
return new Failure("GoogleCloud Submission Failure",
e.getLocalizedMessage() + ": check log for details");

} finally {
dv.setArchivalCopyLocation(statusObject.build().toString());
}
return WorkflowStepResult.OK;
} else {