Monday, August 31, 2015

Lucidworks - Connectors

  1. 2015.08.04
    1. hive serde
      1. introduction
1. The Lucidworks Hive SerDe allows reading data from and writing data to Solr using Apache Hive
      2. example
        1. hive
        2. CREATE TABLE books (id STRING, cat STRING, title STRING, price FLOAT, in_stock BOOLEAN, author STRING, series STRING, seq INT, genre STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
        3. LOAD DATA LOCAL INPATH '/opt/lucidworks-hdpsearch/solr/example/exampledocs/books.csv' OVERWRITE INTO TABLE books;
        4. ADD JAR /opt/lucidworks-hdpsearch/hive/lucidworks-hive-serde-2.0.3.jar;
        5. CREATE EXTERNAL TABLE solr (id STRING, cat_s STRING, title_s STRING, price_f FLOAT, in_stock_b BOOLEAN, author_s STRING, series_s STRING, seq_i INT, genre_s STRING) STORED BY 'com.lucidworks.hadoop.hive.LWStorageHandler' LOCATION '/tmp/solr' TBLPROPERTIES('solr.server.url' = 'http://10.0.2.104:8983/solr', 'solr.collection' = 'myCollection');
        6. INSERT OVERWRITE TABLE solr SELECT b.* FROM books b;
7. Solr UI -> Core Selector -> myCollection_shard1_replica1 -> Query -> Execute Query (a Hive read-back sketch follows this list)
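
A read-back sketch, assuming the solr external table and myCollection collection created above: since the SerDe handles reads as well as writes, ordinary HiveQL queries against the external table should return documents stored in Solr.

  -- count the documents indexed into Solr via the external table
  SELECT COUNT(*) FROM solr;
  -- pull a few fields back out of Solr through the same table
  SELECT id, title_s, price_f FROM solr LIMIT 5;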
