
Creating a Hive Multi-Source Data Catalog in Doris

2023-05-29 07:29:00

HDFS and Hive Parameter Configuration

The initial catalog configuration is as follows:

CREATE CATALOG hive PROPERTIES (
'type'='hms',
'hive.metastore.uris' = 'thrift://172.21.0.1:7004',
'hadoop.username' = 'hive',
'dfs.nameservices'='your-nameservice',
'dfs.ha.namenodes.your-nameservice'='nn1,nn2',
'dfs.namenode.rpc-address.your-nameservice.nn1'='172.21.0.2:4007',
'dfs.namenode.rpc-address.your-nameservice.nn2'='172.21.0.3:4007',
'dfs.client.failover.proxy.provider.your-nameservice'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
);
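As a convenience, the statement above can be generated from a plain dictionary of properties, so the placeholder values are swapped in one place. This is a minimal Python sketch for illustration, not part of the original article; the property names and placeholder values mirror the example above.

```python
# Minimal sketch: render a Doris CREATE CATALOG statement from a dict
# of properties. The values below are the same placeholders used in
# the example above; substitute your real cluster addresses.

def build_create_catalog(name, props):
    """Render CREATE CATALOG <name> PROPERTIES ('k' = 'v', ...);"""
    body = ",\n".join(f"'{k}' = '{v}'" for k, v in props.items())
    return f"CREATE CATALOG {name} PROPERTIES (\n{body}\n);"

props = {
    "type": "hms",
    "hive.metastore.uris": "thrift://172.21.0.1:7004",
    "hadoop.username": "hive",
    "dfs.nameservices": "your-nameservice",
    "dfs.ha.namenodes.your-nameservice": "nn1,nn2",
}
print(build_create_catalog("hive", props))
```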

Log in to a NameNode of the HDFS cluster and replace the dfs.* parameters above with the values from /usr/local/hadoop3/etc/hadoop/hdfs-site.xml.

Log in to the MetaStore node of the Hive cluster and replace the hive.* parameters above with the values from /usr/local/hive/conf/hive-site.xml.
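The copy-from-hdfs-site.xml step can be partially automated. The sketch below is an illustration (not from the article): it uses Python's standard library to pull every dfs.* name/value pair out of an hdfs-site.xml-style document and print it in catalog-property form. The inline XML stands in for the real file on the NameNode.

```python
# Sketch: extract dfs.* properties from a Hadoop *-site.xml document
# so they can be pasted into the catalog PROPERTIES clause.
import xml.etree.ElementTree as ET

# Inline stand-in for /usr/local/hadoop3/etc/hadoop/hdfs-site.xml.
HDFS_SITE = """<configuration>
  <property><name>dfs.nameservices</name><value>your-nameservice</value></property>
  <property><name>dfs.ha.namenodes.your-nameservice</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.your-nameservice.nn1</name><value>172.21.0.2:4007</value></property>
</configuration>"""

def dfs_properties(xml_text):
    """Return all dfs.* name/value pairs from a Hadoop *-site.xml document."""
    root = ET.fromstring(xml_text)
    out = {}
    for prop in root.iter("property"):
        name, value = prop.findtext("name"), prop.findtext("value")
        if name and name.startswith("dfs."):
            out[name] = value
    return out

for k, v in dfs_properties(HDFS_SITE).items():
    print(f"'{k}'='{v}',")
```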

After substituting the real values, the final configuration is as follows:

CREATE CATALOG hive PROPERTIES (
'type'='hms',
'hive.metastore.uris' = 'thrift://nm-bigdata-030017237.ctc.local:9083,thrift://nm-bigdata-030017238.ctc.local:9083',
'hadoop.username' = 'hive',
'dfs.nameservices'='ctyunns',
'dfs.ha.namenodes.ctyunns'='nn1,nn2',
'dfs.namenode.rpc-address.ctyunns.nn1'='nm-bigdata-030017237.ctc.local:54310',
'dfs.namenode.rpc-address.ctyunns.nn2'='nm-bigdata-030017238.ctc.local:54310',
'dfs.client.failover.proxy.provider.ctyunns'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
);

Kerberos Parameter Configuration

If Kerberos is enabled, remove the following property:

'hadoop.username' = 'hive',

and add the following properties:

'hive.metastore.sasl.enabled' = 'true',
'hadoop.security.authentication' = 'kerberos',
'hadoop.kerberos.keytab' = '/your-keytab-filepath/your.keytab',
'hadoop.kerberos.principal' = 'your-principal@YOUR.COM',
'hive.metastore.kerberos.principal' = 'hive/_HOST@DEV1228.COM'

Where:

hive.metastore.kerberos.principal: taken from /usr/local/hive/conf/hive-site.xml on the MetaStore node of the Hive cluster;

hadoop.kerberos.keytab: set to the actual path of the keytab file, which must be placed on every Doris FE and BE node;

hadoop.kerberos.principal: set to the principal that the keytab belongs to; the corresponding realm must also be configured in /etc/krb5.conf on every Doris FE and BE node.
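The last two notes can be turned into a small pre-flight check before creating the catalog. The sketch below is a hypothetical helper, not from the article: it extracts the realm from the principal and confirms that realm is declared in krb5.conf-style text; the krb5.conf content is inlined for the example.

```python
import re

def realm_of(principal):
    """Return the realm of a Kerberos principal, e.g. user@REALM -> REALM."""
    return principal.rsplit("@", 1)[-1]

def realm_declared(krb5_conf_text, realm):
    """True if the realm appears as an entry in the krb5.conf text."""
    return re.search(rf"^\s*{re.escape(realm)}\s*=", krb5_conf_text, re.M) is not None

# Inline stand-in for /etc/krb5.conf on a Doris FE/BE node.
KRB5_CONF = """[realms]
  YOUR.COM = {
    kdc = kdc.your.com
  }
"""

assert realm_declared(KRB5_CONF, realm_of("your-principal@YOUR.COM"))
```

On a real node you would read /etc/krb5.conf instead of the inline text, and also check that the hadoop.kerberos.keytab path points at an existing file on every FE and BE.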

After substituting the parameters, the final configuration is as follows:

CREATE CATALOG hive PROPERTIES (
'type'='hms',
'hive.metastore.uris' = 'thrift://nm-bigdata-030017237.ctc.local:9083,thrift://nm-bigdata-030017238.ctc.local:9083',
'dfs.nameservices'='ctyunns',
'dfs.ha.namenodes.ctyunns'='nn1,nn2',
'dfs.namenode.rpc-address.ctyunns.nn1'='nm-bigdata-030017237.ctc.local:54310',
'dfs.namenode.rpc-address.ctyunns.nn2'='nm-bigdata-030017238.ctc.local:54310',
'dfs.client.failover.proxy.provider.ctyunns'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
'hive.metastore.sasl.enabled' = 'true',
'hadoop.security.authentication' = 'kerberos',
'hadoop.kerberos.keytab' = '/etc/security/keytabs/hdfs_export.keytab',
'hadoop.kerberos.principal' = 'hdfs@BIGDATA.CHINATELECOM.CN',
'hive.metastore.kerberos.principal' = 'hive/_HOST@BIGDATA.CHINATELECOM.CN'
);