- Introduction
- The elasticsearch-repository-hdfs plugin allows Elasticsearch 1.4 to use an HDFS file system as a repository for snapshot/restore.
- Installation
- version information
- CDH: 5.3.0
- elasticsearch: 1.4.2
- elasticsearch-repository-hdfs: 2.1.0.Beta3-light
- note that the stable version, 2.0.2, did not work correctly at the time of writing.
- check https://groups.google.com/forum/#!msg/elasticsearch/CZy1oJpKHyc/1uvoMbI5r5sJ
- hadoop installed at the same node
- append the output of the "hadoop classpath" command to ES_CLASSPATH
- example
- install plugin at each node and restart it
- bin/plugin -i elasticsearch/elasticsearch-repository-hdfs/2.1.0.Beta3-light
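Putting the two steps together, a minimal sketch for one node (the Elasticsearch home directory and the service name are assumptions; adjust them for your installation):

```shell
# Make the Hadoop client jars visible to Elasticsearch
# (assumes "hadoop" is on the PATH on this node)
export ES_CLASSPATH="$ES_CLASSPATH:$(hadoop classpath)"

# Install the light plugin, then restart the node
# (/usr/share/elasticsearch and the service name are assumed paths)
cd /usr/share/elasticsearch
bin/plugin -i elasticsearch/elasticsearch-repository-hdfs/2.1.0.Beta3-light
sudo service elasticsearch restart
```

Repeat on every node in the cluster; the plugin must be present everywhere before registering the repository.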
- no hadoop installed at the same node
- install plugin at each node and restart it
- bin/plugin -i elasticsearch/elasticsearch-repository-hdfs/2.1.0.Beta3-hadoop2
- repository register
- example
- PUT _snapshot/hdfs
{
  "type": "hdfs",
  "settings": {
    "path": "/backup/elasticsearch"
  }
}
- verification
- POST _snapshot/hdfs/_verify
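The same register and verify calls can be issued with curl against a running cluster (the host and port are assumptions; the snippets in this post use the Sense/Kibana shorthand):

```shell
# Register the HDFS repository (assumes Elasticsearch listens on localhost:9200)
curl -XPUT 'http://localhost:9200/_snapshot/hdfs' -d '{
  "type": "hdfs",
  "settings": {
    "path": "/backup/elasticsearch"
  }
}'

# Verify that every node can write to the repository
curl -XPOST 'http://localhost:9200/_snapshot/hdfs/_verify'
```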
- Configuration
- uri: "hdfs://<host>:<port>/" # optional - Hadoop file-system URI
- path: "some/path" # required - path within the file-system where data is stored/loaded
- load_defaults: "true" # optional - whether to load the default Hadoop configuration (default) or not
- conf_location: "extra-cfg.xml" # optional - Hadoop configuration XML to be loaded (use commas for multi values)
- conf.<key> : "<value>" # optional - 'inlined' key=value added to the Hadoop configuration
- concurrent_streams: 5 # optional - the number of concurrent streams (defaults to 5)
- compress: "false" # optional - whether to compress the data or not (default)
- chunk_size: "10mb" # optional - chunk size (disabled by default)
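A registration request combining several of the options above might look like the following (the URI, path, and chunk size are illustrative values, not recommendations):

```
PUT _snapshot/hdfs
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "/backup/elasticsearch",
    "load_defaults": "true",
    "compress": "true",
    "chunk_size": "10mb"
  }
}
```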
- Reference
Wednesday, July 29, 2015
elasticsearch-repository-hdfs