Too much logging messages #5749

Closed

PaulBoon opened this issue Apr 10, 2019 · 6 comments
PaulBoon (Contributor) commented Apr 10, 2019

I just installed our fork of 4.10.1 and was hoping to get somewhat cleaner logging; however, I still get a lot of warnings swamping my log file.

The most annoying one appears more than once every second:

[2019-04-10T02:46:23.158+0200] [glassfish 4.1] [WARNING] [] [] [tid: _ThreadID=53 _ThreadName=jk-connector(3)] [timeMillis: 1554857183158] [levelValue: 900] [[
Response has already been committed, and further write operations are not permitted. This may result in an IllegalStateException being triggered by the underlying application. To avoid this situation, consider adding a Rule .when(Direction.isInbound().and(Response.isCommitted())).perform(Lifecycle.abort()), or figure out where the response is being incorrectly committed and correct the bug in the offending code.]]

This has been mentioned in:
#4427 (comment)
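The warning message itself points at the fix it wants: a Rewrite rule that aborts inbound processing once the response has been committed. Below is a minimal sketch of where such a rule could live, assuming an OCPsoft Rewrite HttpConfigurationProvider is used; the class name ResponseCommittedConfigProvider and the priority value are hypothetical, and this is not necessarily how Dataverse configures Rewrite.

```java
import javax.servlet.ServletContext;

import org.ocpsoft.rewrite.config.Configuration;
import org.ocpsoft.rewrite.config.ConfigurationBuilder;
import org.ocpsoft.rewrite.config.Direction;
import org.ocpsoft.rewrite.config.Lifecycle;
import org.ocpsoft.rewrite.servlet.config.HttpConfigurationProvider;
import org.ocpsoft.rewrite.servlet.config.Response;

// Hypothetical provider name; registration (e.g. via META-INF/services) is omitted.
public class ResponseCommittedConfigProvider extends HttpConfigurationProvider {

    @Override
    public Configuration getConfiguration(ServletContext context) {
        // Abort Rewrite's inbound processing when the response is already committed,
        // which is exactly the rule the warning message recommends.
        return ConfigurationBuilder.begin()
                .addRule()
                .when(Direction.isInbound().and(Response.isCommitted()))
                .perform(Lifecycle.abort());
    }

    @Override
    public int priority() {
        return 0; // run early; the exact value is an assumption
    }
}
```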

The other one is even 'SEVERE':

[2019-04-10T11:42:31.981+0200] [glassfish 4.1] [SEVERE] [] [javax.enterprise.resource.webcontainer.jsf.context] [tid: _ThreadID=54 _ThreadName=jk-connector(4)] [timeMillis: 1554889351981] [levelValue: 1000] [[
javax.faces.application.ViewExpiredException: viewId:/dataverse.xhtml - View /dataverse.xhtml could not be restored.

Not sure which action causes this, but I see it quite often.

Possibly it is related to:
"as a system administrator, I'd like glassfish to not throw exceptions from (from dataverseuser.xhtml) during normal operations" #5446

I would love to get rid of these so the logs are easier to use.
Maybe there is a quick way to suppress them while the real problem is being fixed?
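For the ViewExpiredException, a common JSF approach (not necessarily what Dataverse does, and only a sketch) is a custom exception handler that intercepts the expired view and redirects the user instead of letting the stack trace land in server.log at SEVERE. The class name ViewExpiredExceptionHandler is hypothetical; the matching ExceptionHandlerFactory and faces-config.xml wiring are omitted.

```java
import java.util.Iterator;

import javax.faces.FacesException;
import javax.faces.application.ViewExpiredException;
import javax.faces.context.ExceptionHandler;
import javax.faces.context.ExceptionHandlerWrapper;
import javax.faces.context.FacesContext;
import javax.faces.event.ExceptionQueuedEvent;
import javax.faces.event.ExceptionQueuedEventContext;

// Hypothetical handler; would be registered via an ExceptionHandlerFactory in faces-config.xml.
public class ViewExpiredExceptionHandler extends ExceptionHandlerWrapper {

    private final ExceptionHandler wrapped;

    public ViewExpiredExceptionHandler(ExceptionHandler wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public ExceptionHandler getWrapped() {
        return wrapped;
    }

    @Override
    public void handle() throws FacesException {
        Iterator<ExceptionQueuedEvent> events = getUnhandledExceptionQueuedEvents().iterator();
        while (events.hasNext()) {
            ExceptionQueuedEventContext context = (ExceptionQueuedEventContext) events.next().getSource();
            if (context.getException() instanceof ViewExpiredException) {
                try {
                    // Send the user back to the expired view instead of logging a SEVERE stack trace.
                    FacesContext facesContext = FacesContext.getCurrentInstance();
                    ViewExpiredException vee = (ViewExpiredException) context.getException();
                    facesContext.getExternalContext().redirect(
                            facesContext.getExternalContext().getRequestContextPath() + vee.getViewId());
                } catch (Exception e) {
                    // fall through and let the default handler deal with it
                } finally {
                    events.remove();
                }
            }
        }
        getWrapped().handle();
    }
}
```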

djbrooke (Contributor) commented:
  • these happen often enough that they are easy to replicate
  • we should decide whether or not we are OK with suppressing them (there are performance considerations); if we try to fix them, we may learn more about the cause

mheppler (Contributor) commented Jul 1, 2019

Another fairly common warning I am seeing in every one of my local server logs:

[2019-07-01T11:43:46.675-0400] [glassfish 4.1] [WARNING] [] [org.glassfish.jersey.internal.Errors] [tid: _ThreadID=42 _ThreadName=admin-listener(1)] [timeMillis: 1561995826675] [levelValue: 900] [[
  The following warnings have been detected: WARNING: The (sub)resource method oaiSets in edu.harvard.iq.dataverse.api.HarvestingServer contains empty path annotation.
WARNING: The (sub)resource method harvestingClients in edu.harvard.iq.dataverse.api.HarvestingClients contains empty path annotation.
WARNING: A HTTP GET method, public javax.ws.rs.core.Response edu.harvard.iq.dataverse.api.Files.getFileMetadata(java.lang.String,java.lang.String,javax.ws.rs.core.UriInfo,javax.ws.rs.core.HttpHeaders,javax.servlet.http.HttpServletResponse,java.lang.Boolean) throws edu.harvard.iq.dataverse.api.AbstractApiBean$WrappedResponse,java.lang.Exception, should not consume any entity.
WARNING: A HTTP GET method, public javax.ws.rs.core.Response edu.harvard.iq.dataverse.api.Files.getFileMetadataDraft(java.lang.String,java.lang.String,javax.ws.rs.core.UriInfo,javax.ws.rs.core.HttpHeaders,javax.servlet.http.HttpServletResponse,java.lang.Boolean) throws edu.harvard.iq.dataverse.api.AbstractApiBean$WrappedResponse,java.lang.Exception, should not consume any entity.
WARNING: A HTTP GET method, public javax.ws.rs.core.Response edu.harvard.iq.dataverse.api.Prov.getProvFreeForm(java.lang.String,java.lang.String), should not consume any entity.
WARNING: A HTTP GET method, public javax.ws.rs.core.Response edu.harvard.iq.dataverse.api.Prov.getProvJson(java.lang.String,java.lang.String), should not consume any entity.
]]

That warning was included in a snippet in #3801, as well as in a dozen server logs included in RT and community group emails.

Looking up the error on Stack Overflow, I found an answer suggesting that "a resource method with @Path("/") is redundant", and both HarvestingClients.java and HarvestingServer.java contain this.
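If that Stack Overflow answer is right, the warning should go away once the empty/root @Path is dropped from the sub-resource methods. A hedged before/after sketch; the class-level path, method body, and signature below are simplified stand-ins, not the actual code in HarvestingServer.java:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("harvest/server/oaisets") // illustrative path value
public class HarvestingServerExample {

    // Before: an empty @Path("") (or @Path("/")) on a sub-resource method is redundant
    // and triggers Jersey's "contains empty path annotation" warning.
    //
    // @GET
    // @Path("")
    // public Response oaiSets() { ... }

    // After: the method maps to the class-level path on its own; no empty @Path needed.
    @GET
    public Response oaiSets() {
        return Response.ok().build();
    }
}
```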

Also, I do not have harvesting set up on my localhost, but I still see that warning, as well as other harvesting-related messages in the log.

[2019-07-02T13:51:30.295-0400] [glassfish 4.1] [INFO] [] [edu.harvard.iq.dataverse.timer.DataverseTimerServiceBean] [tid: _ThreadID=184 _ThreadName=__ejb-thread-pool14] [timeMillis: 1562089890295] [levelValue: 800] [[
 Behold! I am the Master Timer, king of all timers! I'm here to create all the lesser timers!]]

[2019-07-02T13:51:30.295-0400] [glassfish 4.1] [INFO] [] [edu.harvard.iq.dataverse.timer.DataverseTimerServiceBean] [tid: _ThreadID=184 _ThreadName=__ejb-thread-pool14] [timeMillis: 1562089890295] [levelValue: 800] [[
 Removing existing harvest timers..]]

[2019-07-02T13:51:30.298-0400] [glassfish 4.1] [INFO] [] [edu.harvard.iq.dataverse.timer.DataverseTimerServiceBean] [tid: _ThreadID=184 _ThreadName=__ejb-thread-pool14] [timeMillis: 1562089890298] [levelValue: 800] [[
 HarvesterService: checking timer 1]]

[2019-07-02T13:51:30.299-0400] [glassfish 4.1] [INFO] [] [edu.harvard.iq.dataverse.timer.DataverseTimerServiceBean] [tid: _ThreadID=184 _ThreadName=__ejb-thread-pool14] [timeMillis: 1562089890299] [levelValue: 800] [[
 HarvesterService: checking timer 2]]

[2019-07-02T14:50:00.578-0400] [glassfish 4.1] [INFO] [] [edu.harvard.iq.dataverse.timer.DataverseTimerServiceBean] [tid: _ThreadID=185 _ThreadName=__ejb-thread-pool15] [timeMillis: 1562093400578] [levelValue: 800] [[
 Handling timeout on dhcp-10-250-158-239.harvard.edu]]

@sekmiller sekmiller self-assigned this Sep 6, 2019
sekmiller added a commit that referenced this issue Sep 6, 2019
sekmiller (Contributor) commented:
I'm getting the "response has already been committed" JSF error only when I publish both the dataset and its parent dataverse at the same time (the "Do you want to publish both?" popup), possibly because both success messages aren't displayed. I would venture that this is a fairly rare occurrence. Does anyone know of a state that would cause the error to appear every second, as Paul describes above?

sekmiller added a commit that referenced this issue Sep 9, 2019
sekmiller added a commit that referenced this issue Sep 11, 2019
sekmiller added a commit that referenced this issue Sep 11, 2019
sekmiller added a commit that referenced this issue Sep 12, 2019
sekmiller (Contributor) commented:
Logging issues addressed so far:

  • "autoUpdate attribute is deprecated" warnings eliminated

  • "response has already been committed" errors eliminated (the fix was to enlarge the page buffer; with a single user on a localhost setup there is no perceptible change in performance, but QA should include some stress testing)

  • messaging for normal processes downgraded from INFO to FINE (see the sketch after this list):
      • file upload
      • file delete
      • download from S3
      • ingest messaging reduced by roughly half
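A minimal sketch of the kind of INFO-to-FINE downgrade described in the last bullet, using java.util.logging; the class name, logger, and message below are illustrative, not the actual code change:

```java
import java.util.logging.Logger;

public class IngestLoggingExample {

    private static final Logger logger = Logger.getLogger(IngestLoggingExample.class.getCanonicalName());

    public void saveUploadedFile(String fileName) {
        // Before: logged at INFO, so every routine upload lands in server.log.
        // logger.info("Successfully saved uploaded file " + fileName);

        // After: logged at FINE, visible only when the logger is tuned for debugging.
        logger.fine("Successfully saved uploaded file " + fileName);
    }
}
```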

pdurbin (Member) commented Sep 12, 2019

As @mheppler, @kcondon, and I were arguing at standup yesterday, it would also be nice to look at production logs (from Harvard Dataverse) in Splunk so we can get a sense of what the logging is like in the real world: how much of this or that exception, etc.

kcondon (Contributor) commented Sep 12, 2019

A few things to be aware of when viewing Splunk counts:

  1. Our Splunk combines entries from both server.log and the access log. We are only addressing unnecessary server.log entries; there is likely a filter in Splunk for that.
  2. Several of the top entries were actually fragments of the same entry and so were counted twice, e.g. org.primefaces.component.outputpanel.OutputPanelRenderer and "autoUpdate attribute is deprecated", as well as edu.harvard.iq.dataverse.util.BundleUtil and "staticSearchFields.subject_ss".
  3. Adding up the ones that were addressed from the Splunk list gives a ~30% reduction in total entries without accounting for the ~21% of entries that come from the access log; once those access log entries are excluded, it works out to roughly a 38% reduction in server.log entries (0.30 / 0.79 ≈ 0.38).
