
trying to merge the field stats of field but the field type is incompatible #18135

Closed
nellicus opened this issue May 4, 2016 · 9 comments
Labels: >bug, :Data Management/Stats (statistics tracking and retrieval APIs)


nellicus (Contributor) commented May 4, 2016

linked to elastic/kibana#7127

Elasticsearch version:
5.0.0-alpha2

JVM version:
java version "1.8.0_45"
OS version:
Linux
Description of the problem including expected versus actual behavior:
Error when calling _field_stats API on Logstash 5.0.0-alpha2 generated index

abonuccelli@w530 /opt/elk/PROD/scripts $ curl -XGET "https://192.168.1.105:9200/logstash-syslog-2016.05.04/_field_stats?fields=@timestamp&level=indices&pretty" -k --cacert /opt/elk/PROD/FS/secure/cacert.pem  -u elastic:xxxxxx
{
  "error" : {
    "root_cause" : [ {
      "type" : "illegal_state_exception",
      "reason" : "trying to merge the field stats of field [@timestamp] from index [logstash-syslog-2016.05.04] but the field type is incompatible, try to set the 'level' option to 'indices'"
    } ],
    "type" : "illegal_state_exception",
    "reason" : "trying to merge the field stats of field [@timestamp] from index [logstash-syslog-2016.05.04] but the field type is incompatible, try to set the 'level' option to 'indices'"
  },
  "status" : 500
}

calling the same on .monitoring-es-* index timestamp field works ok

abonuccelli@w530 /opt/elk/PROD/scripts $  curl -XGET "https://192.168.1.105:9200/.monitoring-es-2-2016.05.04/_field_stats?fields=timestamp&level=indices&pretty" -k --cacert /opt/elk/PROD/FS/secure/cacert.pem  -u elastic:xxxxxx
{
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "indices" : {
    ".monitoring-es-2-2016.05.04" : {
      "fields" : {
        "timestamp" : {
          "max_doc" : 21360,
          "doc_count" : 21360,
          "density" : 100,
          "sum_doc_freq" : -1,
          "sum_total_term_freq" : 21360,
          "min_value" : 1462353531772,
          "min_value_as_string" : "2016-05-04T09:18:51.772Z",
          "max_value" : 1462370153801,
          "max_value_as_string" : "2016-05-04T13:55:53.801Z"
        }
      }
    }
  }
}

Provide logs (if relevant):

[2016-05-04 15:50:22,052][WARN ][rest.suppressed          ] /logstash-syslog-2016.05.04/_field_stats Params: {pretty=, level=indices, index=logstash-syslog-2016.05.04, fields=@timestamp}
java.lang.IllegalStateException: trying to merge the field stats of field [@timestamp] from index [logstash-syslog-2016.05.04] but the field type is incompatible, try to set the 'level' option to 'indices'
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.newResponse(TransportFieldStatsTransportAction.java:108)
    at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.newResponse(TransportFieldStatsTransportAction.java:58)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction.finishHim(TransportBroadcastAction.java:248)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction.onOperation(TransportBroadcastAction.java:213)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction$1.handleResponse(TransportBroadcastAction.java:193)
    at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction$1.handleResponse(TransportBroadcastAction.java:180)
    at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:789)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:178)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:143)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Other info:

@timestamp mapping for the problematic logstash-syslog-* index:

 "timestamp" : {
   "type" : "date"
 }

timestamp mapping for the .monitoring-es-* index:

 "timestamp" : {
            "type" : "date",
            "format" : "date_time"
          }
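
The mapping difference above can be confirmed directly with the get field mapping API (index names taken from the output above; in 5.x the endpoint is GET <index>/_mapping/field/<field>):

GET logstash-syslog-2016.05.04/_mapping/field/@timestamp
GET .monitoring-es-2-2016.05.04/_mapping/field/timestamp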
@clintongormley clintongormley reopened this May 4, 2016
@clintongormley clintongormley added the :Data Management/Stats Statistics tracking and retrieval APIs label May 4, 2016
@clintongormley commented:

Trying to work up a recreation

@clintongormley commented:

OK - simple recreation. Start a cluster with two nodes, then run the following:

DELETE t

POST t/t/_bulk
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}
{"index":{}}
{"@timestamp":"2000-01-01T00:00:00.0+00"}

GET t/_field_stats?level=indices&fields=@timestamp

The above returns:

{
   "error": {
      "root_cause": [
         {
            "type": "illegal_state_exception",
            "reason": "trying to merge the field stats of field [@timestamp] from index [t] but the field type is incompatible, try to set the 'level' option to 'indices'"
         }
      ],
      "type": "illegal_state_exception",
      "reason": "trying to merge the field stats of field [@timestamp] from index [t] but the field type is incompatible, try to set the 'level' option to 'indices'"
   },
   "status": 500
}

With the following stack trace:

[2016-05-04 19:04:43,705][WARN ][rest.suppressed          ] /t/_field_stats Params: {level=indices, index=t, fields=@timestamp}
java.lang.IllegalStateException: trying to merge the field stats of field [@timestamp] from index [t] but the field type is incompatible, try to set the 'level' option to 'indices'
  at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.newResponse(TransportFieldStatsTransportAction.java:108)
  at org.elasticsearch.action.fieldstats.TransportFieldStatsTransportAction.newResponse(TransportFieldStatsTransportAction.java:58)
  at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction.finishHim(TransportBroadcastAction.java:248)
  at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction.onOperation(TransportBroadcastAction.java:213)
  at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction$1.handleResponse(TransportBroadcastAction.java:193)
  at org.elasticsearch.action.support.broadcast.TransportBroadcastAction$AsyncBroadcastAction$1.handleResponse(TransportBroadcastAction.java:180)
  at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:789)
  at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:178)
  at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:143)
  at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
  at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
  at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
  at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
  at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
  at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

jimczi (Contributor) commented May 4, 2016

This is a serialization bug in the field stats transport action that was introduced when the new points API was added. It occurs only in 5.0.0-alpha2 and has been fixed on master in commit f600c4a.

@jimczi jimczi closed this as completed May 4, 2016
nellicus (Contributor, Author) commented:

@jimferenczi this effectively prevents, at least in my environment, any use in Kibana of the data coming from Logstash. Just raising a concern that this might impede proper testing of alpha2 across our user base.

@clintongormley commented:

@nellicus yeah - not much we can do about it until the next release

anhlqn commented May 13, 2016

This also occurred when I tried to use Winlogbeat indices on a fresh installation without Logstash.

gluckspilz commented May 27, 2016

Will adding "format" : "date_time" to the mapping for the index created by Logstash stop this error?
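
For reference, forcing an explicit date format onto the Logstash-generated indices would look roughly like the template sketch below. The template name and body are illustrative only, not the template Logstash actually ships; and since the root cause identified above is a transport serialization bug rather than a mapping problem, a mapping change is not expected to fix the error:

PUT _template/logstash-syslog
{
  "template": "logstash-syslog-*",
  "mappings": {
    "_default_": {
      "properties": {
        "@timestamp": { "type": "date", "format": "date_time" }
      }
    }
  }
}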

@michalterbert commented:

@gluckspilz: it doesn't work for me 👎

@clintongormley commented:

This is fixed in 5.0.0-alpha3, which is already out.
