๋ฐ˜์‘ํ˜•

์‚ฌ์ „์ž‘์—… ํ•„์š”

  1. root ๊ณ„์ •์— JAVA_HOME ์ถ”๊ฐ€ ํ•„์š”ํ•จ
  2. solr ์„ค์น˜
  3. Maven 3.6.3 ์„ค์น˜
  4. PostgreSQL ์„ค์น˜ ๋ฐ DB - ranger, User - rangeradmin(pw:rangeradmin) ์ƒ์„ฑ


์ž‘์—…๋“ค ์‹คํ–‰ํ•  ๋•Œ root ๋˜๋Š” ๊ถŒํ•œ ๊ฐ€์ง„ ๊ณ„์ •์œผ๋กœ ํ•ด์•ผํ•จ
solr ์„ค์น˜ํ•„์š”!
https://n-a-y-a.tistory.com/m/68

[Apache Solr] Apache solr 8.5.0 ์„ค์น˜ํ•˜๊ธฐ

ranger, atlas๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์„ ์„ค์น˜ํ•ด์•ผํ•˜๋Š” ์˜คํ”ˆ์†Œ์Šค์ด๋‹ค. https://archive.apache.org/dist/lucene/solr/8.5.0/ Index of /dist/lucene/solr/8.5.0 archive.apache.org ํ•ด๋‹น ์‚ฌ์ดํŠธ์—์„œ 8.5.0๋ฒ„์ „์„ ๋‹ค์šด ๋ฐ›..

n-a-y-a.tistory.com


ํ•„์š”ํ•œ ํŒจํ‚ค์ง€๋“ค

$ sudo yum install git gcc python3 python3-devel
$ sudo yum install -y npm nodejs
$ npm install node-ranger
$ pip3 install requests

Ranger ์„ค์น˜ ๋ฐ MVN ๋นŒ๋“œ

$ sudo wget https://downloads.apache.org/ranger/2.1.0/apache-ranger-2.1.0.tar.gz
$ sudo tar xvzf apache-ranger-2.1.0.tar.gz
$ cd apache-ranger-2.1.0/
$ mvn -Pall -DskipTests=true clean compile package install

mvn ๋นŒ๋“œ๋Š” ํ•œ์‹œ๊ฐ„ ๋ฐ˜์ •๋„ ๊ฑธ๋ฆฌ๊ณ , mvn์—์„œ ๋นŒ๋“œ ์—๋Ÿฌ๋Š” ๋‚˜์ง€์•Š์•˜์Œ.
์—๋Ÿฌ ๋‚  ๊ฒฝ์šฐ, mvn ์„ค์ • ๊ฐ’ ํ™•์ธ ํ•„์š”ํ•จ.

Ranger - Admin installation

$ cd ${RANGER_SRC}/target/
$ ls -al
-rw-r--r--.  1 root  root  248560962 Jul  1 17:50 ranger-2.1.0-admin.tar.gz
$ sudo tar xvzf ranger-2.1.0-admin.tar.gz
$ cd ranger-2.1.0-admin

Ranger - Admin config

$ vi ranger-2.1.0-admin/install.properties
###
DB_FLAVOR=POSTGRES
SQL_CONNECTOR_JAR=/usr/share/java/postgresql.jar
db_root_user=postgres
db_root_password=postgres
db_host=localhost:5432/ranger
db_name=ranger
db_user=rangeradmin
db_password=rangeradmin
audit_solr_urls=http://localhost:6083/solr/ranger_audits
hadoop_conf=/opt/hadoop-3.1.1/etc/hadoop
###


The PostgreSQL JDBC driver must be installed in the /usr/share/java/ directory!

Solr ์‹คํ–‰๋˜์–ด ์žˆ๋Š” ์ƒํƒœ์—์„œ solr-ranger set up

ranger-2.1.0-admin/contrib/solr_for_audit_setup/setup.sh

Solr ์‹คํ–‰

# /opt/solr/ranger_audit_server/scripts/start_solr.sh

Run the Ranger-admin setup.sh

# ranger-2.1.0-admin/set_globals.sh
# ranger-2.1.0-admin/setup.sh

์„ฑ๊ณต๋กœ๊ทธ

2021-07-02 14:09:47,267  [I] --------- Verifying Ranger DB connection ---------
2021-07-02 14:09:47,267  [I] Checking connection..
2021-07-02 14:09:47,267  [JISQL] /usr/lib/jvm/java-1.8.0-openjdk/bin/java  -cp /usr/share/java/postgresql-42.2.8.jar:/opt/ranger-2.1.0-admin/jisql/lib/* org.apache.util.sql.Jisql -driver postgresql -cstring jdbc:postgresql://localhost:5432/ranger -u rangeradmin -p '********' -noheader -trim -c \;  -query "select 1;"
2021-07-02 14:09:47,508  [I] Checking connection passed.
Installation of Ranger PolicyManager Web Application is completed.

Start ranger-admin

$ sudo ranger-admin start


6080 ํฌํŠธ ์ ‘์†ํ•˜๋ฉด ranger ํ™”๋ฉด ํ™•์ธ๊ฐ€๋Šฅํ•จ.
ID/PW : admin


์ŠคํŒŒํฌ๋ฅผ ์‹คํ–‰ํ•  ๋•Œ, ๋ฉ”๋ชจ๋ฆฌ์™€ ์ฝ”์–ด๋ฅผ ์„ค์ •ํ•˜์—ฌ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋‹ค.

 

x = sc.parallelize(["spark", "rdd", "example", "sample", "example"], 3)  # parallelize (transformation)

x = x.map(lambda w: (w, 1))  # input: w, output: (w, 1), mapping (transformation)

x.collect()  # collect (action)

[('spark', 1), ('rdd', 1), ('example', 1), ('sample', 1), ('example', 1)]
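The map step above is the first half of the classic word count; the usual next step is a reduceByKey. Outside a Spark cluster, the same flow can be sketched in plain Python (a stand-in for the RDD operations, no SparkContext needed):

```python
from collections import Counter

# Plain-Python stand-ins for the RDD operations above.
words = ["spark", "rdd", "example", "sample", "example"]
pairs = [(w, 1) for w in words]        # like x.map(lambda w: (w, 1))
counts = Counter()
for w, n in pairs:                     # like x.reduceByKey(lambda a, b: a + b)
    counts[w] += n
print(sorted(counts.items()))
# → [('example', 2), ('rdd', 1), ('sample', 1), ('spark', 1)]
```

In Spark the reduce step runs per partition and then merges across the 3 partitions requested in parallelize.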

 

Running Spark on YARN

scala : spark-shell --master yarn --queue queue_name

python : pyspark --master yarn --queue queue_name

--driver-memory 3G : memory the Spark driver will use, default = 1024M

--executor-memory 3G : amount of memory each Spark executor will use

--executor-cores NUM : number of cores for each Spark executor

์ž‘์„ฑํ•œ ํŒŒ์ผ Spark์—์„œ ์‹คํ–‰์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•

ํŒŒ์ด์ฌ ํŒŒ์ผ

spark-submit –master local[num] ํŒŒ์ผ๋ช….py 

  (num์€ ์“ฐ๋ ˆ๋“œ ๊ฐœ์ˆ˜,default ๊ฐ’์€ 2~4๊ฐœ ์ •๋„)

์ž๋ฐ”,์Šค์นผ๋ผ

spark-submit \ --class “SimpleApp”\ --master local[num] /location~/name.jar

 


NoSQL ๊ธฐ๋ฐ˜ ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค์ด๋‹ค.

ํ•˜๋‘ก์˜ ๋ฐ์ดํ„ฐ๋ฅผ NoSQL (Key, value) ์Œ์œผ๋กœ ์ €์žฅํ•จ

 

 

$ /hadoop/sbin/start-all.sh

$ ./start-hbase.sh

$ ./hbase shell

### hbase test ###

create 'test', 'cf'

list 'test'

describe 'test'

put 'test', 'row1', 'cf:a', 'value1'

put 'test', 'row2', 'cf:b', 'value2'

put 'test', 'row3', 'cf:c', 'value3'

scan 'test'

------------------------

ROW COLUMN+CELL

row1 column=cf:a, timestamp=1612833812641, value=value1  

row2 column=cf:b, timestamp=1612833817184, value=value2

row3 column=cf:c, timestamp=1612833818011, value=value3

3 row(s)

Took 0.8014 seconds

whoami

grant 'username','RWXCA'
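Conceptually (a plain-Python sketch of the storage model, not the HBase API), the 'test' table above is a sorted map from row key to column-to-value maps; real HBase also stamps each cell with a timestamp, omitted here:

```python
# Toy model of the HBase 'test' table: row key -> {column -> value}.
table = {}

def put(row, column, value):
    table.setdefault(row, {})[column] = value

def scan():
    # HBase scans return rows in sorted row-key order
    return [(row, cols) for row, cols in sorted(table.items())]

put('row1', 'cf:a', 'value1')
put('row2', 'cf:b', 'value2')
put('row3', 'cf:c', 'value3')
print(scan())
# → [('row1', {'cf:a': 'value1'}), ('row2', {'cf:b': 'value2'}), ('row3', {'cf:c': 'value3'})]
```

The sorted row-key order is what makes range scans cheap in HBase.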


HIVE ํ…Œ์ด๋ธ” ๊ด€๋ฆฌ

HIVE ํ…Œ์ด๋ธ”

1. ๋ฐ์ดํ„ฐ๋ฅผ HIVE ํ…Œ์ด๋ธ”๋กœ ๊ฐ€์ ธ์˜ค๋ฉด?

HiveQL, ํ”ผ๊ทธ, ์ŠคํŒŒํฌ ๋“ฑ์„ ํ™œ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌ > ์ƒํ˜ธ์šด์˜ ๋ณด์žฅ

2. HIVE๊ฐ€ ์ง€์›ํ•˜๋Š” ํ…Œ์ด๋ธ” ์ข…๋ฅ˜

    - ๋‚ด๋ถ€ ํ…Œ์ด๋ธ” : HIVE๊ฐ€ ๊ด€๋ฆฌ, HIVE/ ๋ฐ์ดํ„ฐ์›จ์–ดํ•˜์šฐ์Šค์— ์ €์žฅ, ๋‚ด๋ถ€ํ…Œ์ด๋ธ” ์‚ญ์ œ ์‹œ ๋ฉ”ํƒ€์ •์˜์™€ ๋ฐ์ดํ„ฐ๊นŒ์ง€ ์‚ญ์ œ๋จ,

   ORC๊ฐ™์€ ํ˜•์‹์œผ๋กœ ์ €์žฅ๋˜์–ด ๋น„๊ต์  ๋น ๋ฅธ ์„ฑ๋Šฅ

    - ์™ธ๋ถ€ ํ…Œ์ด๋ธ” : ํ•˜์ด๋ธŒ๊ฐ€ ์ง์ ‘ ๊ด€๋ฆฌํ•˜์ง€ ์•Š์Œ,

   ํ•˜์ด๋ธŒ์˜ ๋ฉ”ํƒ€์ •์˜๋งŒ ์‚ฌ์šฉํ•˜์—ฌ ์›์‹œ ํ˜•ํƒœ๋กœ ์ €์žฅ๋œ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์— ์ ‘๊ทผ

   ์™ธ๋ถ€ ํ…Œ์ด๋ธ”์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ญ์ œํ•ด๋„ ํ…Œ์ด๋ธ” ๋ฉ”ํƒ€ ์ •์˜๋งŒ ์‚ญ์ œ๋˜๊ณ  ๋ฐ์ดํ„ฐ๋Š” ์œ ์ง€๋จ.

   ํ•ด๋‹น ๋ฐ์ดํ„ฐ๊ฐ€ ํ•˜์ด๋ธŒ ์™ธ๋ถ€์— ์ ์žฌ ๋˜์–ด์žˆ๊ฑฐ๋‚˜ ํ…Œ์ด๋ธ”์ด ์‚ญ์ œ๋˜๋”๋ผ๋„ ์›๋ณธ ๋ฐ์ดํ„ฐ๊ฐ€ ๋‚จ์•„ ์žˆ์–ด์•ผํ•  ๋•Œ ์‚ฌ์šฉ

3. Bringing a csv file into a Hive table

  1. Copy names.csv to HDFS

  2. hdfs dfs -mkdir names

  3. hdfs dfs -put names.csv names

  4. Start hive and create the table with a query; the location '/directory' clause is the path of the input files the table will use.

  5. Check the data with select * from ~

  6. stored as orc > internal table

  7. Data formats: text file, sequence file (k-v pairs), RC file, ORC format, Parquet format

 

์™ธ๋ถ€ ํ…Œ์ด๋ธ” ์ƒ์„ฑ

suhdfs

hdfs dfsmkdir /Smartcar

hdfs dfs –put /txtfile.txt /Smartcar

hdfs dfschown –R hive /Smartcar

hdfs dfschmod –R 777 /Smartcar

su – hive

hive

create external table (~) ~ location /Smartcar;

๋‚ด๋ถ€ ํ…Œ์ด๋ธ” ์ƒ์„ฑ

create table (~) ~ location /Smartcar;

์™ธ๋ถ€ ํ…Œ์ด๋ธ”์˜ ๋ฐ์ดํ„ฐ ๋‚ด๋ถ€ ํ…Œ์ด๋ธ”๋กœ ๋ณต์‚ฌ

insert overwrite table SmartCar_in

select * from SmartCar_ex;

๋‚ด๋ถ€ ํ…Œ์ด๋ธ” ๋””๋ ‰ํ„ฐ๋ฆฌ ์ƒ์„ฑํ™•์ธ

hdfs dfs –ls /Smartcar

/Smartcar/base_0000001/bucket_00000/bucket_00000

 

ํ•˜์ด๋ธŒ๋Š” SQL๊ณผ ์œ ์‚ฌํ•ด์„œ

๊ธฐ์กด์— SQL์„ ๊ณต๋ถ€ํ–ˆ๋‹ค๋ฉด ์–ด๋ ต์ง€์•Š๋‹ค.


ํ•˜๋‘ก ๋ฒ„์ „ 3.1 ๊ธฐ์ค€์œผ๋กœ 

๊ฐœ์ธ์ ์œผ๋กœ ์ •๋ฆฌํ•œ ๋ช…๋ น์–ด์ด๋‹ค.

 

๊ธฐ์กด์— ๋ฆฌ๋ˆ…์Šค์— ๋Œ€ํ•ด ๊ณต๋ถ€ํ–ˆ๋‹ค๋ฉด ํ•˜๋‘ก ๋ช…๋ น์–ด๋ฅผ ๊ณต๋ถ€ํ•˜๋Š”๋ฐ์— ์—„์ฒญ ์–ด๋ ต์ง„์•Š๋‹ค.

 

1. hdfs dfs -cat /tmp/Sample2.txt       # read a file
2. hdfs dfs -checksum /tmp/Sample2.txt  # data integrity
3. hdfs dfs -chgrp kyn /tmp/Sample2.txt
4. hdfs dfs -chown kyn /tmp/Sample2.txt
5. hdfs dfs -chmod -R 777 /tmp/Sample2.txt
6. hdfs dfs -copyFromLocal /tmp/Sample2.txt  # similar to put
7. hdfs dfs -copyToLocal /tmp/Sample2.txt
8. hdfs dfs -count /tmp/Sample2.txt
9. hdfs dfs -cp /tmp/Sample2.txt /tmp/rename.txt
10. hdfs dfs -createSnapshot
11. hdfs dfs -deleteSnapshot
12. hdfs dfs -df -h /tmp/  # free disk space
13. hdfs dfs -du -h /tmp/  # disk usage
14. hdfs dfs -expunge  # empty the trash (after an hdfs file is deleted it is kept in the trash, then removed after a timeout)
15. hdfs dfs -find /temp/Sample2.txt
16. hdfs dfs -get /temp/Sample2.txt
17. hdfs dfs -head /temp/Sample2.txt
18. hdfs dfs -help
20. hdfs dfs -ls /
21. hdfs dfs -mkdir /temp/test
22. hdfs dfs -moveFromLocal  (local to hdfs)
23. hdfs dfs -moveToLocal  (hdfs to local)
24. hdfs dfs -mv /tmp/Sample2.txt /tmp/test
25. hdfs dfs -put /root/home/Sample.txt /tmp
26. hdfs dfs -renameSnapshot oldname newname
27. hdfs dfs -rmdir /test
28. hdfs dfs -rm /tmp/Sample2.txt
29. hdfs dfs -stat '%b %o %r %u %n' /tmp/Sample2.txt
    # %b file size, %o file block size, %r replication count, %u owner, %n file name
30. hdfs dfs -tail -f /tmp/Sample2.txt
31. hdfs dfs -test ~  # exit code: true > 0, false > -1
32. hdfs dfs -text /tmp/filename
33. hdfs dfs -touch /tmp/filename
34. hdfs dfs -touchz /tmp/filename  (0 byte)
35. hdfs dfs -usage command  # print command usage

 

File system checking

1. hdfs fsck /
2. hdfs fsck / -delete  # delete corrupted files
3. hdfs fsck / -move    # move corrupted files (to /lost+found)

 

Status: HEALTHY

 Number of data-nodes:  3

 Number of racks:               1

 Total dirs:                    950

 Total symlinks:                0

Replicated Blocks:

 Total size:    3664434506 B (Total open files size: 1353 B)  # bytes currently in use

 Total files:   2127 (Files currently being written: 10)

 Total blocks (validated):      1998 (avg. block size 1834051 B)

 (Total open file blocks (not validated): 5)

 Minimally replicated blocks:   1998 (100.0 %)  # blocks replicated at least the minimum number of times

 Over-replicated blocks:        0 (0.0 %)     # blocks with more replicas than configured

 Under-replicated blocks:       1998 (100.0 %)     # blocks with fewer replicas than configured

 Mis-replicated blocks:         0 (0.0 %)     # blocks violating the placement policy

 Default replication factor:    3     # the dfs.replication value

 Average block replication:     1.998999     # average replica count

 Missing blocks:                0

 Corrupt blocks:                0     # blocks with errors

 Missing replicas:              1998 (33.34446 %)       # blocks with missing replicas

 

 

์ปค๋ŸฝํŠธ ์ƒํƒœ

๋ชจ๋“  ๋ธ”๋ก์— ๋ฌธ์ œ ์ƒ๊ฒจ

๋ณต๊ตฌ ๋ชปํ•˜๋Š” ์ƒํƒœ

3copy ๋ฐฉ์‹์œผ๋กœ ๋ฐ์ดํ„ฐ๋…ธ๋“œ ์ค‘ ๋ฌธ์ œ ์ƒ๊ธฐ๋ฉด

Reblancing์„ ํ†ตํ•ด ๋ฐ์ดํ„ฐ์˜ ํฌ๊ธฐ๋ฅผ ๋งž์ถ”๊ฑฐ๋‚˜

์œ ์‹ค๋œ?๋ฐ์ดํ„ฐ๋ฅผ copy

 

 

 

NameNode ์ƒํƒœ ๊ด€๋ฆฌ

Hdsf dfsadminsafemode enter  : Namenode ๋ฐ์ดํ„ฐ ๋ณ€๊ฒฝ ๋ชปํ•˜๊ฒŒ safemode

Hdfs dfsadminsafemode get : name node ํ™•์ธ

Hdfs dfsadminsafemode leave : name node๊ฐ€ safemode ๋‚˜๊ฐ

 

hdfs envvars (prints the environment variables)

JAVA_HOME='/usr/java/jdk'

HADOOP_HDFS_HOME='/usr/hdp/3.1.0.0-78/hadoop-hdfs'

HDFS_DIR='./'

HDFS_LIB_JARS_DIR='lib'

HADOOP_CONF_DIR='/usr/hdp/3.1.0.0-78/hadoop/conf'

HADOOP_TOOLS_HOME='/usr/hdp/3.1.0.0-78/hadoop'

HADOOP_TOOLS_DIR='share/hadoop/tools'

HADOOP_TOOLS_LIB_JARS_DIR='share/hadoop/tools/lib'

 

 

hdfs httpfs

Runs the HttpFS server, the HDFS HTTP gateway

 

1. hdfs version  # check the installed Hadoop version
2. hdfs classpath  # print the classpath of the installed Hadoop jars and required libraries
3. hdfs groups hdfs  # prints: hadoop hdfs kyn
4. hdfs lsSnapshottableDir  # print the list of snapshottable directories
5. hdfs jmxget  # print JMX info

 

 

init: server=localhost;port=;service=NameNode;localVMUrl=null
Domains:
        Domain = JMImplementation
        Domain = com.sun.management
        Domain = java.lang
        Domain = java.nio
        Domain = java.util.logging
MBeanServer default domain = DefaultDomain
MBean count = 22
Query MBeanServer MBeans:
List of all the available keys:

 

JMX - a Java API that provides tools for monitoring and managing applications (software), objects, devices (printers, etc.), and service-oriented networks.

 

 

1. hdfs oev  # Hadoop offline edits viewer: parses the editlog file format
2. hdfs oiv  # Hadoop offline image viewer: converts the fsImage into human-readable form
3. hdfs snapshotDiff [path] snapshotA snapshotB  # prints how snapshot A differs from snapshot B

 

 

 

Administration Commands

hdfs balancer : analyzes block placement and balances data across the datanodes

hdfs cacheadmin : interact with cache pools

hdfs crypto : lists encryption zones, with a cap on how many are returned per batch to protect NameNode performance

hdfs datanode : run an HDFS datanode; rollback

hdfs dfsrouter : run the DFS router

hdfs dfsrouteradmin : manage router-based federation

hdfs diskbalancer : run the diskbalancer (distributes data across all disks of a datanode)

hdfs ec : erasure coding commands

hdfs haadmin : check namenode status, select the active namenode

hdfs journalnode : start a journalnode

hdfs mover : run data migration

hdfs namenode : run the namenode; backup, recovery, upgrade, rollback to a previous version, etc.

hdfs nfs3 : run the NFS3 gateway for the HDFS NFS3 service

hdfs portmap : run the RPC portmap for the HDFS NFS3 service

hdfs secondarynamenode : run the secondary namenode

hdfs storagepolicies : list and set storage policies

BlockStoragePolicy{PROVIDED:1, storageTypes=[PROVIDED, DISK], creationFallbacks=[PROVIDED, DISK]
, replicationFallbacks=[PROVIDED, DISK]}

hdfs zkfc : run the ZooKeeper failover controller process

 

hdfs dfsadmin -report

Configured Capacity: 248290449920 (231.24 GB)

Present Capacity: 194169229810 (180.83 GB)

DFS Remaining: 183161573874 (170.58 GB)

DFS Used: 11007655936 (10.25 GB)

DFS Used%: 5.67%

Replicated Blocks:

        Under replicated blocks: 0

        Blocks with corrupt replicas: 0

        Missing blocks: 0

        Missing blocks (with replication factor 1): 0

        Low redundancy blocks with highest priority to recover: 0

        Pending deletion blocks: 0

Erasure Coded Block Groups:

        Low redundancy block groups: 0

        Block groups with corrupt internal blocks: 0

        Missing block groups: 0

        Low redundancy blocks with highest priority to recover: 0

        Pending deletion blocks: 0

-------------------------------------------------

 

Live datanodes (3):

Name: 192.168.56.131:50010 

Hostname: bdd.co.kr

Decommission Status : Normal

Configured Capacity: 76060626432 (70.84 GB)

DFS Used: 3599179776 (3.35 GB)

Non DFS Used: 21608422912 (20.12 GB)

DFS Remaining: 50584588454 (47.11 GB)

DFS Used%: 4.73%

DFS Remaining%: 66.51%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 6

Last contact: Fri Sep 04 12:14:34 KST 2020

Last Block Report: Fri Sep 04 08:00:09 KST 2020

Num of Blocks: 2003

 

 

hdfs dfsadmin -report -live : cluster status including the live datanodes

hdfs dfsadmin -report -dead : status of the dead datanodes

 

 

Debug Commands

hdfs debug verifyMeta : verify HDFS metadata and block files;

matches the metadata file's checksum against the block file

hdfs debug computeMeta : compute HDFS metadata from a block file;

calculates the checksum from the block file and saves it to the metadata output

hdfs debug recoverLease : recover the lease on the specified path;

the number of times the client calls recoverLease can be set (default = 1)

An HDFS lease >> grants a client the permission to open a file for writing


hue ์„ค์น˜ ํ•  ๋•Œ ์•ž์„œ ์žˆ๋˜ ํ•˜๋‘ก ์—์ฝ”์‹œ์Šคํ…œ๋“ค์ด ์–ด๋Š์ •๋„ ์„ค์น˜๋˜์—ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๊ณ  ์ง„ํ–‰ํ•˜๊ฒ ๋‹ค.

ํœด์˜ ๊ฒฝ์šฐ ์„ค์น˜ํ•˜๊ธฐ์ „์— ์‚ฌ์ „์ž‘์—…์„ ํ•ด์ค˜์•ผ ํ•œ๋‹ค.

postgres๋Š” ๋‹ค๋ฅธ ํฌ์ŠคํŠธ์—์„œ ์„ค์ •์„ ๋‹ค๋ฃจ๊ธฐ๋กœ ํ•˜๊ณ ,

ํœด ์„ค์น˜ ๊ฐ€์ด๋“œ ์—์„œ๋Š” ํœด์—์„œ ์‚ฌ์šฉํ•  ๋ฐ์ดํ„ฐ ๋ฒ ์ด์Šค ์ƒ์„ฑ์ •๋„๋งŒ ๋‹ค๋ฃฐ ์˜ˆ์ •์ด๋‹ค.

 

์‚ฌ์ „์ž‘์—…

ํœด๋Š” ํŒŒ์ด์ฌ์„ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ํ™˜๊ฒฝ๋ณ€์ˆ˜๋กœ ํŒŒ์ด์ฌ ๋ฒ„์ „์„ ์žก์•„์ค˜์•ผํ•œ๋‹ค.

ํ™˜๊ฒฝ๋ณ€์ˆ˜๋Š” .bash_profile ์— ์ถ”๊ฐ€ํ•˜์˜€๋‹ค.

ํŒŒ์ด์ฌ ํ™˜๊ฒฝ๋ณ€์ˆ˜ ์ถ”๊ฐ€

$ sudo vi ~/..bash_profile

export PYTHON_VER=python3.8

 

Install psycopg2 (pip must already be installed)

$ pip install psycopg2

$ python setup.py build

$ sudo python setup.py install

$ pip install psycopg2-binary


nodejs ์„ค์น˜ (centos7 ๊ธฐ์ค€ ์ด๋‹ค.)

 
$ sudo yum install epel-release

$ sudo yum install nodejs


hue ์—์„œ ์‚ฌ์šฉํ•˜๋Š” package ์„ค์น˜

$ sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel
cs


maven ์„ค์น˜

$ wget https://downloads.apache.org/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz -P /tmp

$ sudo tar xf /tmp/apache-maven-3.6.3-bin.tar.gz -C /opt

$ sudo ln -s /opt/apache-maven-3.6.3 /opt/maven

$ sudo vi ~/.bash_profile

#MAVEN

export MAVEN_HOME=/opt/maven

export M2_HOME=$MAVEN_HOME

export PATH=$PATH:$M2_HOME/bin

$ source ~/.bash_profile

$ vi /opt/maven/conf/settings.xml


Adding a mirror site

Several Apache open-source projects require a Maven build, and on CentOS `yum install maven` installs 3.0.5.
Builds fail frequently with 3.0.5, and the official site also recommends using 3.3 or later,
so I recommend using the latest Maven from an Apache mirror site.

postgres์— hue db, user ์ถ”๊ฐ€ํ•˜๊ธฐ

psql -U postgres

CREATE USER hue WITH PASSWORD 'hue';

CREATE DATABASE hue OWNER hue;

\l

ํœด ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค์™€ ์˜ค๋„ˆ ํ™•์ธํ•˜๊ธฐ

---

Solr ์„ค์น˜

https://n-a-y-a.tistory.com/m/68

 

[Solr] Apache solr 8.5.0 ์„ค์น˜ํ•˜๊ธฐ

ranger, atlas๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์„ ์„ค์น˜ํ•ด์•ผํ•˜๋Š” ์˜คํ”ˆ์†Œ์Šค์ด๋‹ค. https://archive.apache.org/dist/lucene/solr/8.5.0/ Index of /dist/lucene/solr/8.5.0 archive.apache.org ํ•ด๋‹น ์‚ฌ์ดํŠธ์—์„œ 8.5.0๋ฒ„์ „์„ ๋‹ค์šด ๋ฐ›..

n-a-y-a.tistory.com

---

 

ํœด ์„ค์น˜

 
$ wget https://cdn.gethue.com/downloads/hue-4.0.1.tgz

$ tar -xvzf hue-4.0.1.tgz

$ ln -s hue-4.0.0 hue

$ cd hue

$ export PREFIX=/usr/local

$ make 7$ make install

 


ํœด ์‹คํ–‰

 
$ ./build/env/bin/supervisor &

$ netstat -nltp | grep 8888

์ž…๋ ฅ์‹œ ์„œ๋น„์Šค ์˜ฌ๋ผ์˜จ ๊ฒƒ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋‹ค.


***HDFS***
***HIVE***
***HBASE***
and the other services are not connected yet,
so you need to find and fix the matching config values.


์ฐธ๊ณ ์‚ฌ์ดํŠธ
docs.gethue.com/administrator/installation/

 

Installation :: Hue SQL Assistant Documentation

docs.gethue.com

 


์ฃผํ‚คํผ๋ž€ ๋ถ„์‚ฐ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ์œ„ํ•œ ๋ถ„์‚ฐ ์ฝ”๋””๋„ค์ด์…˜์ด๋‹ค.

 

znode(์ €๋„๋…ธ๋“œ)๊ฐ€ ๊ฐ๊ฐ์˜ ์„œ๋ฒ„์— ์œ„์น˜ํ•ด ์žˆ๋‹ค.

๊ฐ ํ•˜๋‘ก์˜ ์„œ๋น„์Šค๋“ค์ด ์ž˜ ๋™์ž‘ํ•˜๊ณ  ์žˆ๋Š”์ง€ ํ™•์ธํ•œ๋‹ค.

์ฃผ๊ธฐ์ ์œผ๋กœ ํ•˜ํŠธ๋น„ํŠธ ์š”๊ตฌํ•˜์—ฌ ๋ฐ›๋Š” ๋ฐฉ์‹์œผ๋กœ,

 

๋”ฐ๋ผ์„œ ์ฃผ๊ธฐํผ๋Š” ํ™€์ˆ˜๋กœ ํด๋Ÿฌ์Šคํ„ฐ๋ฅผ ๊ตฌ์„ฑํ•˜๋Š”๋ฐ

์—ฌ๊ธฐ์„œ ๋“ค์–ด๊ฐ€๋Š” ๊ฐœ๋…์ด ์ฟผ๋Ÿผ์ด๋‹ค.

 

์ฟผ๋Ÿผ์ด๋ž€? 

๋‹ค์ˆ˜๊ฒฐ๋กœ ์˜ˆ๋ฅผ ๋“ค์–ด 5๊ฐœ์˜ ์„œ๋ฒ„๋กœ ๊ตฌ์„ฑ ๋˜์–ด์žˆ๊ณ ,

2๊ฐœ์˜ ์„œ๋ฒ„๊ฐ€ ์ฃฝ๋Š”๋‹ค๊ณ  ๊ฐ€์ •ํ–ˆ์„ ๋•Œ ์ •์ƒ์ ์œผ๋กœ ๋™์ž‘ํ•œ๋‹ค๊ณ  ํŒ๋‹จํ•œ๋‹ค.

๊ทธ๋ฆฌ๊ณ  5๊ฐœ ์ค‘ 3๊ฐœ์˜ ์„œ๋ฒ„๊ฐ€ ์ฃฝ์—ˆ์„ ๊ฒฝ์šฐ, ๋‹ค์ˆ˜๊ฒฐ๋กœ ์ธํ•ด ๋น„์ •์ƒ์ด๋ผ๊ณ  ํŒ๋‹คํ•œ๋‹ค.

๊ทธ๋กœ ์ธํ•ด, ์ฃผํ‚คํผ๋Š” ํ™€์ˆ˜๋กœ ํด๋Ÿฌ์Šคํ„ฐ๋ฅผ ๊ตฌ์„ฑํ•œ๋‹ค.

 

In a zookeeper cluster,

one server is the leader and the other servers are followers.

They sync against the leader server.

 

์ž์„ธํ•œ ๋‚ด์šฉ์€ ๊ณต์‹ ์‚ฌ์ดํŠธ ์ฐธ์กฐ๋ฐ”๋žŒ

 

์ฃผํ‚คํผ ์„ค์น˜ ๋ฐฉ๋ฒ•

์ฃผํ‚คํผ ํŒŒ์ผ ๋‹ค์šด๋กœ๋“œ ํ›„ ์••์ถ• ํ•ด์ œ ํ›„ ํ…Œ์ŠคํŠธ

wget https://mirror.navercorp.com/apache/zookeeper/zookeeper-3.5.9/apache-zookeeper-3.5.9.tar.gz
tar xvzf apache-zookeeper-3.5.9.tar.gz
cd apache-zookeeper-3.5.9/bin
./zkCli.sh -server 127.0.0.1:2181

 

์ฃผํ‚คํผ conf์—์„œ zoo.cfg ํŒŒ์ผ ์ƒ์„ฑ

$ vi $ZOOKEEPER_HOME/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/userDIr/zookeeper
clientPort=2181

 

์ฃผํ‚คํผ ์‹คํ–‰

1
bin/zkServer.sh start
cs

 

jps๋กœ ์ฃผํ‚คํผ ์‹คํ–‰ ์ค‘์ธ์ง€ ํ™•์ธ

1
2
3
jps
-------------------
Quorumpeermain
cs

Check ZooKeeper with netstat -nltp | grep 2181

 

 

์ฃผํ‚คํผ์˜ ํฌํŠธ๋ฒˆํ˜ธ๋Š” zoo.cfg ํŒŒ์ผ์—์„œ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ๋‹ค.


livy-env.sh

export SPARK_HOME=/usr/lib/spark

export HADOOP_CONF_DIR=/etc/hadoop/conf

 

livy start

./bin/livy-server start

 

livy ์ •์ƒ๋™์ž‘ํ•˜๋Š”์ง€ spark์—์„œ ํ…Œ์ŠคํŠธํ•˜๋Š” ์˜ˆ์ œ

sudo pip install requests

 

import json, pprint, requests, textwrap

host = 'http://localhost:8998'

data = {'kind': 'spark'}

headers = {'Content-Type': 'application/json'}

r = requests.post(host + '/sessions', data=json.dumps(data), headers=headers)

r.json()

{u'state': u'starting', u'id': 0, u'kind': u'spark'}
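Once the session is up, the next step in the Livy REST API is posting a statement to /sessions/{id}/statements. The snippet below only builds and inspects the request body, since actually sending it requires the Livy server started above:

```python
import json

def statement_payload(code: str) -> str:
    """Build the JSON body for POST /sessions/{id}/statements."""
    return json.dumps({'code': code})

payload = statement_payload('1 + 1')
print(payload)
# → {"code": "1 + 1"}
```

It could then be sent with requests.post(host + '/sessions/0/statements', data=payload, headers=headers), reusing the host and headers from the snippet above.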


https://dlcdn.apache.org/hive/hive-3.1.2/

 


Download the binary tarball of the Hive version you want from the apache mirror site.

 

์‚ฌ์ „์ž‘์—… - Hadoop Path ์„ค์ • ๋˜์–ด์žˆ์–ด์•ผํ•จ

export HADOOP_HOME=<hadoop-install-dir>

 

์••์ถ•ํ•ด์ œ

wget https://dlcdn.apache.org/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz
tar xvzf apache-hive-3.1.2-bin.tar.gz

 

ํ™˜๊ฒฝ๋ณ€์ˆ˜ ์„ค์ •

Hive ํ™ˆ ํ™˜๊ฒฝ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•ด์•ผํ•œ๋‹ค.

.bash_prifile์—์„œ ์ˆ˜์ •ํ•˜๋Š” ๋ฐฉ์‹๋ณด๋‹จ /etc/profile.d/์— ์‰˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์ค„๊ฒƒ

vi /etc/profile.d/hive_home.sh

export HIVE_HOME=/opt/apache-hive-3.1.2-bin

export PATH=$PATH:$HIVE_HOME/bin

 

ํ•ด๋‹น ํŒŒ์ผ ์ €์žฅ ํ›„ ํ•œ๋ฒˆ ์‹คํ–‰ํ•ด์ค€๋‹ค.

chmod +x hive_home.sh
./hive_home.sh
source hive_home.sh

echo $HIVE_HOME

ํ•ด๋‹น ๋ช…๋ น์–ด ๊ฒฐ๊ณผ๋กœ ์ •์ƒ์ ์œผ๋กœ ๋ฐ˜์˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธ์ž‘์—… ํ•„์š”ํ•˜๋‹ค.

 

 

Hadoop์— tmp, hive warehouse ๋””๋ ‰ํ„ฐ๋ฆฌ ์ƒ์„ฑ

hadoop fs -mkdir       /tmp
hadoop fs -mkdir       /user/hive/warehouse
hadoop fs -chmod g+w   /tmp
hadoop fs -chmod g+w   /user/hive/warehouse

 

Run the Hive CLI

 $HIVE_HOME/bin/hive
 $HIVE_HOME/bin/schematool -dbType <db type> -initSchema

For dbtype, enter mysql, oracle, or postgres if you want to use an already-installed DB;

for testing, use derby, Hive's embedded DB.

 

HIVE config

cp conf/hive-default.xml.template conf/hive-site.xml

Copy this template to hive-site.

 

Editing hive-site.xml - when using a PostgreSQL DB

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:postgresql://mypostgresql.testabcd1111.us-west-2.rds.amazonaws.com:5432/mypgdb</value>
    <description>PostgreSQL JDBC driver connection URL</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.postgresql.Driver</value>
    <description>PostgreSQL metastore driver class name</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>database_username</value>
    <description>the username for the DB instance</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>database_password</value>
    <description>the password for the DB instance</description>
  </property>