Index

# Close indices
curl -XPOST 'localhost:9200/logstash-2014.07.*/_close'

# Re-open an index
curl -XPOST 'localhost:9200/my_index/_open'

# Delete a specific type from an index (index/type)
curl -XDELETE 'http://localhost:9200/logstash-2014.08.22/apacheaccesslogs'
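
Wildcards and comma-separated index lists also work with these calls; as an illustrative sketch (the index patterns are placeholders), two months can be closed at once:

curl -XPOST 'localhost:9200/logstash-2014.07.*,logstash-2014.08.*/_close'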

Replicas

# Set the number of replicas to 0
# Caution: only for a single-server (dev) setup
curl -XPUT 'localhost:9200/_settings' -d '{ "index" : { "number_of_replicas" : 0 } }'

# Set the number of replicas to 0 for a specific index
# Caution: only for a single-server (dev) setup
curl -XPUT 'localhost:9200/MonIndex/_settings' -d '{ "index" : { "number_of_replicas" : 0 } }'
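
To verify that the change was applied, read the settings back (MonIndex is the same placeholder index as above):

curl 'localhost:9200/MonIndex/_settings?pretty'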

List indices with information

curl 'localhost:9200/_cat/indices?v'
health index               pri rep docs.count docs.deleted store.size pri.store.size
yellow logstash-2014.09.28   5   1        376            0    407.7kb        407.7kb
yellow logstash-2015.07.30   5   1         66            0      217kb          217kb
yellow kibana-int            5   1          3            0     43.3kb         43.3kb
yellow nodes_stats           1   1          2            0      1.5mb          1.5mb
yellow logstash-2014.10.22   5   1         17            0     40.1kb         40.1kb
yellow logstash-2014.10.18   5   1          7            0     36.3kb         36.3kb
yellow logstash-2014.09.08   5   1        114            0    241.9kb        241.9kb
yellow logstash-2014.10.12   5   1        896            0    961.7kb        961.7kb
yellow logstash-2014.09.10   5   1         93            0    184.4kb        184.4kb

Stats

curl 'localhost:9200/_stats?pretty'

Other stats endpoints:
/_stats
/_stats/{metric}
/_stats/{metric}/{indexMetric}
/{index}/_stats
/{index}/_stats/{metric}
/_cluster/stats
/_nodes/stats

where metric can be one of:
indices, docs, store, indexing, search, get, merge,
refresh, flush, warmer, filter_cache, id_cache,
percolate, segments, fielddata, completion
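
For example, to fetch only the document and store statistics of a single index (the index name is illustrative):

curl 'localhost:9200/logstash-2014.09.28/_stats/docs,store?pretty'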

Settings

/_nodes/settings
/_cluster/settings
/_settings
/_nodes/process
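
For example, to read the persistent and transient cluster-level settings:

curl 'localhost:9200/_cluster/settings?pretty'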

Cluster status

curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 41,
  "active_shards" : 41,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5
}

curl -XGET 'http://localhost:9200/_cluster/state'
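
The health call can also be broken down with the level parameter (cluster, indices or shards), for example:

curl -XGET 'http://localhost:9200/_cluster/health?level=indices&pretty=true'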

Snapshot/Restore

Snapshot directory

Create the snapshot directory

mkdir /tmp/my_backup
chmod 777 /tmp/my_backup

Create the snapshot repository

curl -XPUT http://127.0.0.1:9200/_snapshot/my_backup -d '
{
  "type": "fs",
  "settings": {
    "location": "/tmp/my_backup"
  }
}'
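
To check that the repository is registered:

curl 'http://127.0.0.1:9200/_snapshot/my_backup?pretty'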

Create a snapshot

curl -XPUT http://127.0.0.1:9200/_snapshot/my_backup/snapshot_2 -d '
{
  "indices": "logstash-2015.11.12",
  "ignore_unavailable": "true",
  "include_global_state": false
}'
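
The snapshot can then be listed or checked with:

curl 'http://127.0.0.1:9200/_snapshot/my_backup/snapshot_2?pretty'
curl 'http://127.0.0.1:9200/_snapshot/my_backup/_all?pretty'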

Restore

curl -XPOST http://127.0.0.1:9200/_snapshot/my_backup/snapshot_2/_restore
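
A restore request can also carry a body to restrict and rename what gets restored; a minimal sketch using the standard restore options (index name and rename pattern are illustrative):

curl -XPOST http://127.0.0.1:9200/_snapshot/my_backup/snapshot_2/_restore -d '
{
  "indices": "logstash-2015.11.12",
  "rename_pattern": "logstash-(.+)",
  "rename_replacement": "restored-logstash-$1"
}'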

Elasticsearch 2.x

Add the following line to /etc/elasticsearch/elasticsearch.yml

path.repo: ["/tmp/my_backup"]

Then restart Elasticsearch.

_cat

curl 'http://127.0.0.1:9200/_cat'
=^.^=
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}

Examples:

curl 'http://127.0.0.1:9200/_cat/master'
h5yLY6U5QgKn3bjKZiD84g 127.0.0.1 127.0.0.1 node1

Verbose output

curl 'http://127.0.0.1:9200/_cat/master?v'
id                     host      ip        node
h5yLY6U5QgKn3bjKZiD84g 127.0.0.1 127.0.0.1 node1

Help

curl 'http://127.0.0.1:9200/_cat/master?help'
id   |   | node id
host | h | host name
ip   |   | ip address
node | n | node name

Headers

curl 'http://127.0.0.1:9200/_cat/master?h=host,id'
127.0.0.1 h5yLY6U5QgKn3bjKZiD84g

Template

Get all templates

curl 'http://127.0.0.1:9200/_template?pretty'

Get a specific template

curl 'http://127.0.0.1:9200/_template/logstash?pretty'

Add a new template

curl -XPUT 'http://127.0.0.1:9200/_template/MonTemplate' -d '.....'
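
As an illustration only, a minimal template body could look like the sketch below (MonTemplate and the logstash-* pattern are placeholders; real templates usually also carry mappings):

curl -XPUT 'http://127.0.0.1:9200/_template/MonTemplate' -d '
{
  "template": "logstash-*",
  "settings": {
    "number_of_replicas": 0
  }
}'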

Display options

  • ?pretty=false : compact, unformatted JSON output (default)
  • ?pretty=true : pretty-printed JSON output
  • ?format=yaml : YAML output
  • ?human=true : adds an extra human-readable field for every value that can be converted (time- or size-based fields)

Some of these can be combined: [pretty|format] together with human, as in the example below.
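
For instance, combining human and pretty on a stats call (the index name is illustrative):

curl 'localhost:9200/logstash-2015.07.30/_stats/store?human=true&pretty=true'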

Show unassigned indices/shards:

# curl -s localhost:9200/_cat/shards | grep UNASSIGNED
logstash-2016.11.14 4 p UNASSIGNED
logstash-2016.11.14 4 r UNASSIGNED
logstash-2016.11.15 3 p UNASSIGNED
logstash-2016.11.15 3 r UNASSIGNED
logstash-2016.11.15 4 p UNASSIGNED
logstash-2016.11.15 4 r UNASSIGNED
logstash-2016.11.15 0 p UNASSIGNED
logstash-2016.11.15 0 r UNASSIGNED

Assign an index/shard to a cluster member

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [ {
        "allocate" : {
            "index" : "logstash-2016.11.15", "shard" : 4, "node" : "MonNode", "allow_primary" : true
        }
    } ]
}'
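
Beware that allow_primary forces allocation even when no valid copy of the shard exists, which can leave that shard empty. After the reroute, the allocation can be checked again (same index as above):

curl -s 'localhost:9200/_cat/shards/logstash-2016.11.15?v'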

Show the number of file descriptors in use

curl -s 'http://127.0.0.1:9200/_cluster/stats?pretty' | grep -A 4 file
      "open_file_descriptors" : {
        "min" : 63505,
        "max" : 65870,
        "avg" : 64687
      }

or, if you know the PID

# ls /proc/22505/fd/ | wc -l
65966

or

# curl 'localhost:9200/_nodes/stats/process?pretty&human=true'
{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "VGaVGsCoQKO4tI8uvOL4eQ" : {
      "timestamp" : 1479478000398,
      "name" : "Alasta Lab",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : [ "127.0.0.1:9300", "NONE" ],
      "process" : {
        "timestamp" : 1479478000399,
        "open_file_descriptors" : 3549,
        "max_file_descriptors" : 65535,
        "cpu" : {
          "percent" : 6,
          "total" : "45.5m",
          "total_in_millis" : 2735680
        },
        "mem" : {
          "total_virtual" : "2.8gb",
          "total_virtual_in_bytes" : 3071959040
        }
      }
    }
  }
}

Show the file descriptor limit

# su - elasticsearch -s /bin/bash
$ ulimit -Sn
65535
$ ulimit -Hn
65535

or with the PID

# cat /proc/22505/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            10485760             unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             1024                 774254               processes
Max open files            128000               128000               files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       774254               774254               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

List indices sorted by date

curl -s http://127.0.0.1:9200/_cat/shards | awk '{print $1}' | sort | uniq
.kibana
logstash-2015.11.29
logstash-2015.11.30
logstash-2015.12.01
logstash-2015.12.02