
uaa login failing -- wrong redirect? #203

Closed

tomsherrod opened this issue Oct 19, 2016 · 36 comments
@tomsherrod

Hi,

logsearch-boshrelease (v203) is working at http://10.104.2.2/.
Passed smoke tests.

I updated the deployment with logsearch-for-cloudfoundry.
http://10.104.2.2/ redirected to the login page.
I got a page asking me to confirm that kibana_client has access.
Once confirmed, it redirects to https://10.104.2.2/login?code=oBpTLd&state=EgQjI4X3NVqPO1Jm6DFRB9, which fails since https isn't enabled. The http version of that URL returns a 500.

Pointers to what I missed would be appreciated.

Tom

@tomsherrod
Author

Any additional information needed?
I'm happy to test things out too!

Tom

@Infra-Red
Member

Hi @tomsherrod !
I added the ability to use an insecure Kibana connection in f693698; by default it was set to force https.
To allow an http connection, you must set the additional ENV property USE_HTTPS: false in your stub file, for example: https://github.com/cloudfoundry-community/logsearch-for-cloudfoundry/blob/develop/templates/logsearch-for-cf.example-with-uaa-auth.yml#L88.
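For example, following the env format used elsewhere in this thread (a minimal sketch - keep the rest of your kibana env properties as they are):

properties:
    kibana:
      env:
      - USE_HTTPS: false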

@tomsherrod
Author

Hi @Infra-Red
Thanks for following up.
I made the changes, but there was no change in behavior. I removed the deployment and releases to start fresh; same result.

@Infra-Red
Member

@tomsherrod You're right - initially I tested with Kibana as a Cloud Foundry application. I made an additional fix; could you please create the release and redeploy again with the https://github.com/cloudfoundry-community/logsearch-for-cloudfoundry/blob/develop/templates/logsearch-for-cf.example-with-uaa-auth.yml#L97 property added to your deployment manifest?
Thanks!

@tomsherrod
Author

@Infra-Red Still failing. I did a complete redeploy using the latest logsearch-boshrelease and logsearch-for-cloudfoundry. Pointers welcome.

After logging in, the browser shows:
{
statusCode: 500,
error: "Internal Server Error",
message: "An internal server error occurred"
}

From kibana.stdout.log:
{"type":"error","@timestamp":"2016-11-02T18:41:34+00:00","tags":[],"pid":11680,"level":"error","message":"Incorrect custom state parameter","error":{"message":"
Incorrect custom state parameter","name":"Error","stack":"Error: Incorrect custom state parameter\n at Object.authenticate (/var/vcap/data/packages/kibana/cc
b3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/installedPlugins/auth/node_modules/bell/lib/oauth.js:189:31)\n at /var/vcap
/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/auth.js:227:30\n at [object O
bject].internals.Protect.run (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/ha
pi/lib/protect.js:56:5)\n at authenticate (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4
/node_modules/hapi/lib/auth.js:218:26)\n at [object Object].internals.Auth.authenticate (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252
c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/auth.js:348:5)\n at internals.Auth.authenticate (/var/vcap/data/packages/kibana/ccb3682
b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/auth.js:177:17)\n at /var/vcap/data/packages/kibana/ccb368
2b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/request.js:370:13\n at iterate (/var/vcap/data/packages/k
ibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/node_modules/items/lib/index.js:35:13)\n at done
(/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/node_modules/items/lib/ind
ex.js:27:25)\n at finish (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hap
i/lib/protect.js:45:16)\n at wrapped (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node
modules/hapi/node_modules/hoek/lib/index.js:858:20)\n at done (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e
565d5f184737140f811f4/node_modules/hapi/node_modules/items/lib/index.js:30:25)\n at Function.wrapped (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe17
86d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/node_modules/hoek/lib/index.js:858:20)\n at Function.internals.continue (/var/v
cap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/reply.js:105:10)\n at /var
/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/installedPlugins/auth/node_modules/hapi-auth-cook
ie/lib/index.js:110:30\n at /var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/h
api/lib/handler.js:311:22\n at iterate (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/no
de_modules/hapi/node_modules/items/lib/index.js:35:13)\n at Object.exports.serial (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-
c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/node_modules/items/lib/index.js:38:9)\n at /var/vcap/data/packages/kibana/ccb3682b1293593d513afe17
86d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/handler.js:306:15\n at [object Object].internals.Protect.run (/var/vcap/dat
a/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/protect.js:56:5)\n at Object.expo
rts.invoke (/var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/handler.js:
304:22)\n at /var/vcap/data/packages/kibana/ccb3682b1293593d513afe1786d68f7f8252c658.1-c7ab517675c32df8a9e565d5f184737140f811f4/node_modules/hapi/lib/request
.js:367:32"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":"?code=JcdcI3&state=kl-XObCaZmbm79r9
gEDEnd","query":{"code":"JcdcI3","state":"kl-XObCaZmbm79r9gEDEnd"},"pathname":"/login","path":"/login?code=JcdcI3&state=kl-XObCaZmbm79r9gEDEnd","href":"/login?c
ode=JcdcI3&state=kl-XObCaZmbm79r9gEDEnd"}}
{"type":"response","@timestamp":"2016-11-02T18:41:34+00:00","tags":[],"pid":11680,"method":"get","statusCode":500,"req":{"url":"/login?code=JcdcI3&state=kl-XObC
aZmbm79r9gEDEnd","method":"get","headers":{"host":"10.104.63.229","cache-control":"max-age=0","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macinto
sh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml
;q=0.9,image/webp,*/*;q=0.8","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8"},"remoteAddress":"192.168.5.205","userAgent":"192.168.5.
205"},"res":{"statusCode":500,"responseTime":8,"contentLength":9},"message":"GET /login?code=JcdcI3&state=kl-XObCaZmbm79r9gEDEnd 500 8ms - 9.0B"}
{"type":"response","@timestamp":"2016-11-04T13:12:44+00:00","tags":[],"pid":11680,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"ho
st":"10.104.63.229","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/
53.0.2785.143 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8","accept-encoding":"gzip, deflate, sdch","acce
pt-language":"en-US,en;q=0.8"},"remoteAddress":"192.168.5.205","userAgent":"192.168.5.205"},"res":{"statusCode":302,"responseTime":22,"contentLength":9},"messag
e":"GET / 302 22ms - 9.0B"}

@Infra-Red
Member

@tomsherrod Could you please also share your Logsearch deployment manifest?
Thanks!

@tomsherrod
Author

@Infra-Red Thanks for the response.
Please see https://gist.github.com/tomsherrod/7584e5c9d96aed172f9f98dfc71dda3f
(some name munging was done)

@masterofmonkeys
Contributor

Hi @tomsherrod - have you got it running? I have a similar issue (on AWS) - the auth plugin seems to get into an infinite redirect loop -> /login -> / -> /login, ending up with a 502 Bad Gateway.

I can't get auth with UAA working.

@tomsherrod
Author

tomsherrod commented Jan 4, 2017 via email

@Infra-Red
Member

@muconsulting In your deployment, are you using Kibana as a standalone VM from logsearch-boshrelease, or the CF Kibana application included in this release?

@masterofmonkeys
Contributor

Hi - I tried using the CF Kibana application from this release, the standalone VM, and even downloading a standard Kibana 4.4.2 from Elastic and deploying the plugin.

From what I can tell, the communication to my UAA server is working fine and the redirect is working... it seems that the OAuth cookie is just not being set.

@hannayurkevich
Collaborator

hannayurkevich commented Jan 5, 2017

Hi @tomsherrod and @muconsulting ,

As far as I can see from your deploy manifest, you didn't set REDIS_HOST in the Kibana env:

properties:
    kibana:
      env:
      - REDIS_HOST: foo

Could you try using a real Redis host? We use Redis for auth sessions, so this is probably what causes the problem you're seeing.
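With a real host and port it would look something like this (the values below are examples only - use your own Redis):

properties:
    kibana:
      env:
      - REDIS_HOST: 10.0.0.143
      - REDIS_PORT: 6379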

@masterofmonkeys
Contributor

Nope - REDIS_HOST and REDIS_PORT are set in my case, and I can see connections happening on the Redis server. Any reason why the cookies (uaa-cookie) are not set? I only seem to see the bell cookie.

@hannayurkevich
Collaborator

Can you share your deployment manifest and tell us which versions of logsearch-boshrelease and logsearch-for-cf you are using?

@masterofmonkeys
Contributor

Hi,

I'm using logsearch-boshrelease (v203) and logsearch-for-cloudfoundry (v200.0.0).

Regarding the manifest - for faster debugging I'm deploying Kibana 4.4.2 (from the release itself) plus the installed auth plugin with a generated manifest as below, which is equivalent to the errand job anyway.

---
applications:
- name: kibana-me
  memory: 1024M
  instances: 1
  buildpack: binary_buildpack
  timeout: 180
  env:
    NODE_ENV: production
    USE_HTTPS: true
    KIBANA_OAUTH2_CLIENT_ID: kibana_oauth2_client_id
    KIBANA_OAUTH2_CLIENT_SECRET: kibana_oauth2_client_secret
    CF_API_URI: "https://api.apps.me"
    CF_SYSTEM_ORG: system
    SKIP_SSL_VALIDATION: false
    REDIS_HOST: redis-xxxxxx.ec2.cloud.redislabs.com
    REDIS_PORT: 19424
    SESSION_EXPIRATION_MS: 43200000

Any ideas? As I indicated, it seems the redirect loop is due to the cookie not being found or not being set.

@hannayurkevich
Collaborator

@muconsulting ,

I had a similar redirect issue when my Redis properties were set wrong. So, could you please make sure that there is a kibana* key storing your session in Redis? It should look similar to this: kibana:sessions:17232c30-d3f8-11e6-b1c3-71dc90d50a56.

If there is no such key, then something is probably wrong with your Redis connection. If everything is ok with Redis (there is a kibana* key in there), then could you please send me details of the HTTP requests/responses that you get before the redirect loop starts?
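For example, you can check with redis-cli (host and port are placeholders):

redis-cli -h <redis_host> -p <redis_port> KEYS 'kibana*'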

@hannayurkevich
Collaborator

Also, can you try using the recent 201 release of logsearch-for-cloudfoundry?

@masterofmonkeys
Contributor

masterofmonkeys commented Jan 6, 2017

Hi,

I can confirm that my Redis contains kibana* key entries. I can try the latest release, but I don't see any code changes in the authentication plugin, nor in the Kibana release.

UPDATE: I think I found the problem: it seems that the uaa-auth cookie size exceeds the 4K limit, despite the Redis cache working properly. That could explain why the cookie is never set and then transmitted between requests.
Also - any reason why the validateFunc function on the uaa-cookie strategy is never invoked?

Sorry - I'm not really a Node.js expert.

@hannayurkevich
Collaborator

@muconsulting,

Thank you for your PR #216.

You are right, there is no need to store an auth credentials object in the user session cookie - storing the session_id is enough. It also avoids the risk of exceeding the cookie size limit.

Regarding your question about validateFunc: it is invoked on each user request to validate that the session passed with the cookie is valid (i.e. exists in the cache). Look into the hapi-auth-cookie/lib/index.js lib - its authenticate function calls validateFunc.
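To illustrate the pattern (a rough sketch only, not the actual plugin code - the strategy name, cookie name and the in-memory sessionCache below are made up; in the real plugin the session store is Redis-backed):

var Hapi = require('hapi');

var server = new Hapi.Server();
server.connection({ port: 5601 });

// Hypothetical stand-in for the Redis-backed session store.
var sessions = {};
var sessionCache = {
  get: function (id, callback) { callback(null, sessions[id]); },
  set: function (id, value) { sessions[id] = value; }
};

server.register(require('hapi-auth-cookie'), function (err) {
  if (err) { throw err; }

  server.auth.strategy('uaa-cookie', 'cookie', {
    password: 'a-cookie-encryption-password-at-least-32-chars',  // hypothetical
    cookie: 'uaa-auth',                                          // hypothetical cookie name
    isSecure: false,
    validateFunc: function (request, session, callback) {
      // Called on every authenticated request; the cookie carries only session.session_id.
      sessionCache.get(session.session_id, function (err, credentials) {
        if (err || !credentials) {
          return callback(err, false);            // not in cache -> force re-authentication
        }
        return callback(null, true, credentials); // valid session, credentials come from the cache
      });
    }
  });
});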

Thanks!

@maoyi8212

@hannayurkevich
I have hit a similar 500 "Internal Server Error". My deployed versions are as follows:
logsearch-for-cloudfoundry/202.0.0
logsearch/204.0.0

You mentioned that I need to make sure there is a kibana* key storing the session in Redis, so I tried to ssh to the queue instance and check the Redis keys:

queue/0d795175-5653-4308-aa96-ecd1f7936417:~$ /var/vcap/packages/redis/bin/redis-cli
127.0.0.1:6379> keys *
1) "logstash"

Does this mean that my current Redis is not properly configured, and how should I fix it? I have tried to add REDIS_HOST information to my manifest, but it doesn't seem to work. Many thanks!

properties:
    kibana:
      env:
      - REDIS_HOST:

@hannayurkevich
Collaborator

Hi @maoyi8212 , do you deploy cf-kibana or the standalone one? Could you please provide your deploy manifest so that I can get a better picture of your deployment and try to help?

Thanks.

@maoyi8212

@hannayurkevich Thank you for your quick reply. I deploy cf-kibana; the manifest is attached:
de-cf-logsearch-for-cf-manifest_update.yml.txt

@hannayurkevich
Collaborator

@maoyi8212 ,

I see that you set Redis host in your deployment:

properties:
...
  redis:
    host: 10.0.0.143

This property should be enough to make Kibana use Redis to store auth sessions. Just double check that this IP is correct.

Also, you don't need to set REDIS_HOST in the cluster_monitor job, because it sets up a Kibana instance without the authentication plugin. So you can remove the unnecessary property:

    kibana:      
      env:
      - REDIS_HOST: 10.0.0.143

I can't see any problems with your manifest. So, could you please describe your case in more detail:

  1. When do you get the error (as soon as you access Kibana or after redirect from UAA login page)?
  2. What do you get in your error response?
  3. Any logs from cf-kibana? (you can get them in your CF)
  4. Did you try to redeploy?

Also cc @Infra-Red

Thanks

@maoyi8212

@hannayurkevich @Infra-Red
Thank you for the reply. Here are the specifics of my case:

  1. My redirection error happens after the redirect from the UAA login page. It already gets the code and state information; to my understanding, the next step should be to access the token endpoint to get the access token, but instead it goes to the authorize endpoint to request a code for kibana_oauth2_client again and again:

https://logs.de.cloudlab.siemens.com/login?code=T45K9gQAr9&state=4Yf6QDy9ZItjldyb28gjer

  2. The error response is:
{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"}

  3. In the cf logs, some errors are included, like:
2017-03-23T16:55:06.34+0100 [App/0]      OUT {"type":"log","@timestamp":"2017-03
-23T15:55:06+00:00","tags":["error","authentication","session:set"],"pid":30,"me
ssage":"{\"isBoom\":true,\"isServer\":true,\"output\":{\"statusCode\":500,\"payl
oad\":{\"statusCode\":500,\"error\":\"Internal Server Error\",\"message\":\"An i
nternal server error occurred\"},\"headers\":{}}}"}
  4. I have tried

@hannayurkevich
Collaborator

hannayurkevich commented Mar 23, 2017

It looks like the Kibana auth plugin can't connect to Redis to persist the user session, and that's why it tries to authenticate again and again. Could you please check that the Redis port is open so that Kibana can access it?

UPDATE:

Please check in your CF that you have a security group for Redis; it is created by the cf-kibana errand using this template. The security group should be bound to the org/space where the Kibana app is deployed.
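For example, you can check and fix this with the cf CLI (the group, org and space names below are examples - substitute your own; bind-security-group shown in cf CLI v6 syntax):

cf security-groups                                        # list groups and their org/space bindings
cf security-group logsearch-access                        # show the rules in a group
cf bind-security-group logsearch-access admin logsearch   # bind it to the org/space if needed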

@maoyi8212

@hannayurkevich @Infra-Red
I have opened port 6379 for my virtual machines, but the error still exists.

And what's the default name of the security group for Redis? I checked my current security groups; there are only cf-kibana-asg (which I created for Kibana) and logsearch-access:

cf security-groups
Getting security groups as admin
OK

     Name               Organization   Space
#0   public_networks
#1   dns
#2   cf-kibana-asg      admin          logsearch
#3   logsearch-access   admin          logsearch

@hannayurkevich
Collaborator

hannayurkevich commented Mar 23, 2017

It's the logsearch-access security group. Make sure it contains the rule:

{
  "destination": "192.168.1.99/32",
  "ports": "6379",
  "protocol": "tcp"
}

Also, one thing you can try is to clear your browser cookies and also the content of your Redis, and then retry logging in to Kibana. It's possible that "old" data breaks authentication.
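For example (only if this Redis is dedicated to Kibana sessions - FLUSHDB wipes the whole selected database):

redis-cli -h <redis_host> -p <redis_port> FLUSHDB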

@maoyi8212

@hannayurkevich Really appreciate your great help!
Actually the rule is there, but the Redis host and Elasticsearch host in it are totally wrong. We had deployed logsearch and logsearch-for-cloudfoundry successfully once, but after it ran for a certain period of time the redirect error appeared, so we tried to redeploy it. During the redeployment we hit several issues that we never met during the first deployment.

I'm just wondering how the manifest gets the "cf-kibana.elasticsearch.host" and "redis.host" configuration parameters - from environment variables or some other way? How can I make them correct? Or should I just hard-code the updated information and update the security group? Many thanks!

@hannayurkevich
Collaborator

hannayurkevich commented Mar 23, 2017

You should set the values in the properties section (global or for a specific job). For example, in your manifest I can find the following values:

properties:
  cf-kibana:
    elasticsearch:
      host: 10.0.0.140
  ...
  redis:
    host: 10.0.0.143

Actually the rule is there, but the redis host and elasticsearch host is totally wrong

-> so your configuration values and what you got in your security group are different? Then you should probably deploy from scratch. You can also fix it manually, if you prefer.

@maoyi8212

@hannayurkevich
The redirect issue was resolved after I fixed the security group manually and redeployed cf-kibana. But a new issue is that Kibana cannot get any data at the moment. Given my previous problem, is it possible that Kibana has a wrong Elasticsearch configuration somewhere, so that it connects to the wrong host? Could you please give me some suggestions on how to debug this issue?

@Infra-Red
Member

@maoyi8212 You should check that the Elasticsearch indices are created and that log data is available.

curl '<elasticsearch_master_ip>:9200/_cat/indices?v'

Also, cf-kibana allows listing all logs only for users from a particular Cloud Foundry organization. This org can be configured with the property cf-kibana.cloudfoundry.system_org, and it defaults to the admin org.
https://github.com/cloudfoundry-community/logsearch-for-cloudfoundry/blob/develop/jobs/cf-kibana/spec#L34-L36
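For example, in the deployment manifest (the value shown is just the default):

properties:
  cf-kibana:
    cloudfoundry:
      system_org: admin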

@maoyi8212

@Infra-Red Thank you for the reply.

Currently there are only 2 Elasticsearch indices:
elasticsearch_master/e7d1f171-005d-4d46-af08-a7a35796dcaf:~$ curl http://10.0.0.140:9200/_cat/indices?v
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana 1 1 64 0 126.2kb 63.1kb
green open logstash-2017.03.21 1 0 15785 0 6.7mb 6.7mb
Kibana has uploaded the 3 default index patterns (logs-*, logs-platform-* and logs-app-*) but shows nothing. If I create an index pattern .kibana, I can see the raw data, but the built-in visualization portal cannot be used. Is anything wrong with the index configuration? Many thanks!

@hannayurkevich
Collaborator

Your logstash-* index is the one containing the logs. You can't see these logs in Kibana because L-f-c uploads index patterns that look like logs-*, logs-app* and logs-platform*. This is because we use a specific format for index names (read more about indices). So you should either use the same name format to get all the out-of-the-box features provided by L-f-c (such as predefined index patterns in Kibana, dashboards, mappings, etc.), or configure the index patterns in Kibana manually in accordance with your indices' name format (e.g. logstash-* or just *).
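For reference, the index name format comes from the parser configuration - with L-f-c the logstash_parser.elasticsearch.index property looks something like this (a sketch based on the manifest format used in this thread):

properties:
  logstash_parser:
    elasticsearch:
      index: logs-%{[@metadata][index]}-%{+YYYY.MM.dd}
      index_type: '%{@type}'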

In general it is highly recommended to use the predefined stub for generating your deploy manifest and to only customize the features you are sure about. Go carefully through the deployment doc.

@maoyi8212

@hannayurkevich @Infra-Red Thank you for your suggestion.
Actually I used the same name format for the index when I did the deployment; I do not know why it is not effective either. As I mentioned, I deployed it once very smoothly, but it crashed some days ago, and I met several strange issues when I did the redeployment.

One of the errors is a parser issue:
Error 400007: 'parser/0 (23129718-a4be-4bdc-8cc8-9afc3d025137)' is not running after update. Review logs for failed jobs: parser
The manifest section for the parser looks like this:

- instances: 1
  name: parser
  networks:
  - name: default
  properties:
    logstash_parser:
      debug: false
      deployment_dictionary:
      - /var/vcap/packages/logsearch-config/deployment_lookup.yml
      - /var/vcap/jobs/parser-config-lfc/config/deployment_lookup.yml
      deployment_name:
        cf: <cf_info>
        diego: <diego_info>
      elasticsearch:
        index: logs-%{[@metadata][index]}-%{+YYYY.MM.dd}
        index_type: '%{@type}'
      filters:
      - logsearch-for-cf: /var/vcap/packages/logsearch-config-logstash-filters/logstash-filters-default.conf
    syslog_forwarder:
      config:
      - file: /var/vcap/sys/log/elasticsearch/elasticsearch.stdout.log
        service: elasticsearch
      - file: /var/vcap/sys/log/elasticsearch/elasticsearch.stderr.log
        service: elasticsearch
      - file: /var/vcap/sys/log/parser/parser.stdout.log
        service: parser
      - file: /var/vcap/sys/log/parser/parser.stderr.log
        service: parser
  resource_pool: parser
  templates:
  - name: parser
    release: logsearch
  - name: elasticsearch
    release: logsearch
  - name: syslog_forwarder
    release: logsearch
  - name: parser-config-lfc
    release: logsearch-for-cloudfoundry
  update:
    max_in_flight: 4
    serial: false

When I ssh to the parser instance, I find the following issue:

parser/23129718-a4be-4bdc-8cc8-9afc3d025137:/var/vcap/sys/log/monit$ more /var/vcap/sys/log/monit/parser.err.log
------------ STARTING parser_ctl at Wed Mar 22 07:51:58 UTC 2017 --------------
cp: cannot stat ‘[/var/vcap/packages/logsearch-config/deployment_lookup.yml, /var/vcap/jobs/parser-config-lfc/config/deployment_lookup.yml]’: No such file or directory 


So I checked the /var/vcap/jobs/parser/bin/parser_ctl script and found the command that causes the error:

cp "["/var/vcap/packages/logsearch-config/deployment_lookup.yml", "/var/vcap/jobs/parser-config-lfc/config/deployment_lookup.yml"]" "${JOB_DIR}/config"
Then I just commented out this command to make the deployment succeed. I am not sure whether my current issue is related to the missing dictionary file. If yes, how can I fix it so that both the deployment and the functionality work? Another question is why only the parser component needs the diego configuration:

      deployment_name:
        cf: <cf_info>
        diego: <diego_info>

Many thanks for your kind explanation.

@hannayurkevich
Collaborator

I don't see how the deployment_lookup error could affect the index name. Also, I think it is incorrect to comment out steps when you hit an error - it is better to resolve the error by understanding its cause. So my point is that the best way now is to deploy ELK from scratch. Try to completely delete your deployment and deploy it again, strictly following the deployment instructions. And if you meet any problems, contact us and we'll try to solve them step by step.

For your question about diego: you need to provide the parser with the names of your cf and diego deployments, so that the parser can apply these names and set the @source.deployment field for each log.
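For example (the deployment names below are hypothetical - use the names of your actual cf and diego deployments):

properties:
  logstash_parser:
    deployment_name:
      cf: my-cf
      diego: my-diego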

@hannayurkevich
Collaborator

Also, if you try from scratch and meet any problems, please open a separate issue so as not to overload this thread.

Thank you!
