https://www.elastic.co/guide/en/elasticsearch/reference/current/enabled.html


0. Delete your index pattern


1. Get the default template

If you are using Logstash, you can fetch the default template from the Kibana Dev Tools console:

GET _template/logstash


2. Add a disable setting to the mapping

If you don't want the "foo" field to be indexed, you can tell ES "do not index this field".

Copy the default template and add "foo": {"enabled": false} under "properties" in the default template:

"properties": {

  "foo": {"enabled": false}

}


3. Put the new template

PUT _template/logstash

{
  ... changed whole new template ...
}
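The fetch-patch-put flow above can be sketched in Python. This is a minimal sketch: the `template` dict is a trimmed stand-in for whatever GET _template/logstash actually returns in your cluster, and the resulting JSON is the body you would send back with PUT _template/logstash.

```python
import json

# Stand-in for the body returned by: GET _template/logstash
template = {
    "index_patterns": ["logstash-*"],
    "mappings": {"_default_": {"properties": {}}},
}

# Step 2: tell ES not to index the "foo" field at all
template["mappings"]["_default_"]["properties"]["foo"] = {"enabled": False}

# Step 3: this JSON is the request body for: PUT _template/logstash
body = json.dumps(template, indent=2)
print(body)
```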


* If you want to remove the .keyword sub-fields, set "string_fields" under mappings._default_.dynamic_templates as below.

(WARNING: .keyword sub-fields are what make keyword (exact-match) search possible on a field. If you still want keyword search for a specific field, you can configure it manually in template.json.)

"string_fields": {
"match": "*",
"match_mapping_type": "string",
"mapping": {
"type": "text",
"norms": false
}
}
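As a sketch of what this change does to the template structure: the stock Logstash template maps dynamic strings to text plus a "keyword" sub-field under "fields"; the variant above simply omits that sub-field, so no .keyword field is created. The dict below mirrors the JSON above.

```python
# The .keyword-free dynamic template from above, as a Python dict.
# The stock Logstash template would additionally carry
# "fields": {"keyword": {...}} inside "mapping"; omitting it is what
# removes the .keyword sub-fields.
string_fields = {
    "string_fields": {
        "match": "*",
        "match_mapping_type": "string",
        "mapping": {"type": "text", "norms": False},
    }
}

template = {"mappings": {"_default_": {"dynamic_templates": []}}}
template["mappings"]["_default_"]["dynamic_templates"].append(string_fields)
```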



* Another way: use a template.json file instead of changing _template/logstash directly

1. Create template.json from the output of GET _template/logstash

2. Point logstash.conf at template.json

3. Restart Logstash

elasticsearch {
  hosts => ["http://domain:80"]
  template => "/etc/logstash/template.json"
}
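Note that if Logstash has already installed a template once, it will not replace it by default. The elasticsearch output plugin's template_name and template_overwrite options force the managed template to be reinstalled on restart; a sketch, reusing the host and path from above:

```
elasticsearch {
  hosts => ["http://domain:80"]
  template => "/etc/logstash/template.json"
  template_name => "logstash"
  template_overwrite => true
}
```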


Full template.json example:

{
  "order": 0,
  "version": 60001,
  "index_patterns": [
    "logstash-*"
  ],
  "settings": {
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "message_field": {
            "path_match": "message",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false
            }
          }
        },
        {
          "string_fields": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false
            }
          }
        }
      ],
      "properties": {
        "@version": {
          "enabled": false
        },
        "offset": {
          "enabled": false
        },
        "tags": {
          "enabled": false
        }
      }
    }
  },
  "aliases": {}
}


시간을 거스르는자 (ytkang86@gmail.com)

Errors

2017-12-19T16:27:12+09:00 ERR  Failed to publish events caused by: read tcp [::1]:48818->[::1]:5044: i/o timeout

2017-12-19T16:27:12+09:00 ERR  Failed to publish events caused by: client is not connected

2017-12-19T16:27:13+09:00 ERR  Failed to publish events: client is not connected


In my case, it was caused by an Elasticsearch error.

This is the Logstash log. (See "log.level" in logstash.yml and change it to info.)

[2017-12-19T16:53:22,168][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"index_create_block_exception", "reason"=>"blocked by: [FORBIDDEN/10/cluster create-index blocked (api)];"})


My AWS ES domain status was yellow (you should have at least two ES instances; I had only one).

Just add one more ES instance, or delete your ES domain and recreate it.


If you run multiple workers and get a key error when executing async jobs in Celery (e.g. KeyError, Received unregistered task of type), this may be the solution.


Key statement

"Use a different queue name per app and run each worker with that queue name." (reference)


0. Structure

folder/
  tasks/
    some_tasks.py
    the_other_tasks.py
  scheduler.py


1. scheduler.py

# imports
from tasks.some_tasks import sum
from tasks.the_other_tasks import add

# pass the queue name explicitly when calling the task
sum.apply_async(queue="some_tasks")
add.apply_async(queue="the_other_tasks")


2. tasks

A. some_tasks.py

app = Celery(..)
app.conf.task_default_queue = "some_tasks" 


B. the_other_tasks.py

app = Celery(..)
app.conf.task_default_queue = "the_other_tasks" 



3. Run the workers

$folder> celery -A tasks.some_tasks worker --loglevel=info --concurrency=1 -Q some_tasks
$folder> celery -A tasks.the_other_tasks worker --loglevel=info --concurrency=1 -Q the_other_tasks

* If you want to give a worker a name, use the -n option:
example) celery -A tasks.the_other_tasks worker --loglevel=info --concurrency=1 -n the_other_tasks -Q the_other_tasks
