1. Find all disks attached to this machine
    sudo fdisk -l
    Disk /dev/xvda: 8589 MB, 8589934592 bytes, 16777216 sectors
    ..
    Disk /dev/nvme0n1: 950.0 GB, 950000000000 bytes, 1855468750 sectors
    ..

2. Select the target disk to mount as /data
    "/dev/nvme0n1"

3. Format the disk with the XFS filesystem
    sudo mkfs -t xfs /dev/nvme0n1

4. Mount the disk at /data
    sudo mkdir -p /data
    sudo mount /dev/nvme0n1 /data
    Check the mounted disks with: df -TH

5. Give ownership to the mongod user
    sudo chown mongod:mongod /data
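
To keep the mount across reboots, add an /etc/fstab entry (a sketch; using the filesystem UUID is safer than the device name, since NVMe device names can change between boots):

    # find the UUID of the new filesystem
    sudo blkid /dev/nvme0n1
    # then append a line like this to /etc/fstab (UUID is illustrative)
    # UUID=xxxx-xxxx  /data  xfs  defaults,nofail  0  2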




Making a scheduler for expiry

mysql 2018. 1. 19. 16:37

use mydb;

ALTER EVENT expireRow
ON SCHEDULE EVERY 1 HOUR
DO
    DELETE FROM thatTable
    WHERE TIMESTAMPDIFF(DAY, `date`, NOW()) > 7;

To check that the event is registered:

SHOW EVENTS FROM mydb;
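
Note that ALTER EVENT assumes the event already exists and that the server's event scheduler is running. A minimal sketch for creating the same event from scratch (same table and column names as above; the scheduler setting is an assumption about your server defaults):

    -- the event scheduler is often OFF by default
    SET GLOBAL event_scheduler = ON;

    CREATE EVENT expireRow
    ON SCHEDULE EVERY 1 HOUR
    DO
        DELETE FROM thatTable
        WHERE TIMESTAMPDIFF(DAY, `date`, NOW()) > 7;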


sudo ulimit

Uncategorized 2018. 1. 12. 18:50

This opens a root shell, raises the open-files limit to 64000, and then execs a new shell as your own user so the session inherits the raised limit. Type exit to leave that inner shell when done.

sudo sh -c "ulimit -n 64000 && exec su $LOGNAME"

exit
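
To make the limit persistent across logins, the usual alternative is an entry in /etc/security/limits.conf (a sketch; assumes pam_limits is enabled, and "youruser" is a placeholder for your account):

    # /etc/security/limits.conf
    youruser  soft  nofile  64000
    youruser  hard  nofile  64000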


Nginx reverse proxy: pass requests on port 80 to a local app listening on 8080.

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    # Pass requests for / to localhost:8080:
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
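
To apply a config change like this, validate and reload (assumes a standard package install with systemd):

    sudo nginx -t                 # check the configuration syntax
    sudo systemctl reload nginx   # reload without dropping connections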


mongos setup on centos

mongoDB 2018. 1. 12. 12:56

Install

https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/


Setup

sudo yum install -y mongodb-org-shell-3.6.1 mongodb-org-mongos-3.6.1

sudo mkdir /var/log/mongodb

sudo chown -R centos:centos /var/log/mongodb/

mongos --configdb configRS/ip-10-20-0-71.ec2.internal:27019,ip-10-20-0-57.ec2.internal:27019,ip-10-20-10-155.ec2.internal:27019 --logpath /var/log/mongodb/mongos.log --fork
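
To confirm the mongos is reachable, you can connect with the shell and check the sharding status (a sketch; 27017 is the default mongos port):

    mongo --host localhost --port 27017 --eval 'sh.status()'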


https://www.elastic.co/guide/en/elasticsearch/reference/current/enabled.html


0. Delete your index pattern


1. Get the default template

If you are using Logstash, you can fetch the default template from the Kibana Dev Tools console:

GET _template/logstash


2. Disable indexing for the field

If you don't want the "foo" field to be indexed, you can tell ES not to index it.

Copy the default template and add "foo": {"enabled": false} under its "properties" section:

"properties": {
  "foo": {"enabled": false}
}


3. Put the new template

PUT _template/logstash
{
  ... the whole changed template ...
}
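
If you prefer the command line to the Kibana console, the same call can be made with curl (host and port are placeholders for your cluster endpoint; the body is read from the edited template file):

    curl -XPUT 'http://localhost:9200/_template/logstash' \
         -H 'Content-Type: application/json' \
         -d @template.json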


* If you want to remove the .keyword sub-fields, set "string_fields" under mappings._default_.dynamic_templates as below.

(Warning: .keyword enables exact keyword search on a field. If you need keyword search for a specific field, you can add it back manually in template.json.)

"string_fields": {
  "match": "*",
  "match_mapping_type": "string",
  "mapping": {
    "type": "text",
    "norms": false
  }
}



* Another way is to use a template.json file instead of changing _template/logstash directly:

1. Create template.json from the output of GET _template/logstash

2. Reference template.json in logstash.conf

3. Restart Logstash

elasticsearch {
  hosts => ["http://domain:80"]
  template => "/etc/logstash/template.json"
}


Full template.json example:

{
  "order": 0,
  "version": 60001,
  "index_patterns": [
    "logstash-*"
  ],
  "settings": {
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "message_field": {
            "path_match": "message",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false
            }
          }
        },
        {
          "string_fields": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false
            }
          }
        }
      ],
      "properties": {
        "@version": {
          "enabled": false
        },
        "offset": {
          "enabled": false
        },
        "tags": {
          "enabled": false
        }
      }
    }
  },
  "aliases": {}
}



Errors

2017-12-19T16:27:12+09:00 ERR  Failed to publish events caused by: read tcp [::1]:48818->[::1]:5044: i/o timeout

2017-12-19T16:27:12+09:00 ERR  Failed to publish events caused by: client is not connected

2017-12-19T16:27:13+09:00 ERR  Failed to publish events: client is not connected


In my case, it was caused by an Elasticsearch error.

This is the Logstash log (check the log level setting in logstash.yml and change it to info):

[2017-12-19T16:53:22,168][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"index_create_block_exception", "reason"=>"blocked by: [FORBIDDEN/10/cluster create-index blocked (api)];"})


My AWS ES cluster status was yellow (you should have at least two ES instances; I had only one).


Just add one more ES instance, or delete your ES domain and recreate it.
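
To check the cluster status directly (the endpoint is a placeholder for your ES domain):

    curl 'http://your-es-domain/_cluster/health?pretty'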


If you run multiple workers and get a key error when doing async jobs in Celery (e.g. KeyError, "Received unregistered task of type ..."), this may be the solution.


Key statement

"Use a different queue name per task module, and run each worker with that queue name." (reference)


0. Structure

folder/
    tasks/
        some_tasks.py
        the_other_tasks.py
    scheduler.py


1. scheduler.py

# imports
from tasks.some_tasks import sum
from tasks.the_other_tasks import add

# pass the queue name when you call the task
sum.apply_async(queue="some_tasks")
add.apply_async(queue="the_other_tasks")


2. tasks

A. some_tasks.py

app = Celery(..)
app.conf.task_default_queue = "some_tasks" 


B. the_other_tasks.py

app = Celery(..)
app.conf.task_default_queue = "the_other_tasks" 
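
A fuller sketch of one task module, so the pieces above fit together (the broker URL and the task body are assumptions, not from the original setup):

    # tasks/some_tasks.py
    from celery import Celery

    # broker URL is illustrative; point it at whatever broker you run
    app = Celery("some_tasks", broker="redis://localhost:6379/0")
    app.conf.task_default_queue = "some_tasks"

    @app.task
    def sum(a, b):
        # placeholder body
        return a + b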



3. Running the workers

$folder> celery -A tasks.some_tasks worker --loglevel=info --concurrency=1 -Q some_tasks
$folder> celery -A tasks.the_other_tasks worker --loglevel=info --concurrency=1 -Q the_other_tasks

* If you want to give a name to a worker, use the -n option, e.g.:
celery -A tasks.the_other_tasks worker --loglevel=info --concurrency=1 -n the_other_tasks -Q the_other_tasks


flask save session error

python 2017. 12. 5. 10:58

Error

expires = pickle.load(f)
EOFError: Ran out of input


Solution

Remove the flask_session folder and restart Flask.
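
For example (assuming Flask-Session's default filesystem session directory next to your app):

    rm -rf ./flask_session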


Reference

https://github.com/pallets/flask/issues/2216


If you want to change each column name in Superset by hand, follow these steps.

In your Slices:

1. Select the slice

2. Click the edit-datasource button

3. Click List Metrics

4. Change the Verbose Name

If instead you want to change all column names at once, use sqlite.

Superset stores its metadata in a sqlite database located at ~/.superset/superset.db.

Open this file with a sqlite GUI tool and use the replace function: replace(field, 'origin', 'replacement')

Example:

update sql_metrics set verbose_name = replace(verbose_name, 'sum__', '') where table_id = 4;
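
The same replacement can be run from the sqlite3 command line instead of a GUI tool (the table_id value depends on your datasource):

    sqlite3 ~/.superset/superset.db \
        "update sql_metrics set verbose_name = replace(verbose_name, 'sum__', '') where table_id = 4;"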
