Hudi Integration with Hive

Environment:
hudi 0.10.1
spark 2.4.5
hive 2.3.7
hadoop 2.7.5

Copy the compiled Hudi jar into Hive's lib directory:

cp /Users/xxx/cloudera/lib/hudi/packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.11.0-SNAPSHOT.jar  ~/cloudera/cdh5.7/hive/lib/
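Alternatively, if you would rather not touch Hive's lib directory, the bundle can be registered per session with Hive's ADD JAR command (same path as above; note that with HiveServer2 the path must be readable by the server process):

ADD JAR /Users/xxx/cloudera/lib/hudi/packaging/hudi-hadoop-mr-bundle/target/hudi-hadoop-mr-bundle-0.11.0-SNAPSHOT.jar;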

1. Create the table and insert data

Hudi will create the table automatically via Hive sync, but you can also create it in advance:

CREATE EXTERNAL TABLE member_rt(
  _hoodie_commit_time string,
  _hoodie_commit_seqno string,
  _hoodie_record_key string,
  _hoodie_partition_path string,
  _hoodie_file_name string,
  uid int,
  ad_id int,
  fullname string,
  iconurl string,
  ts string,
  hudipartition string)
PARTITIONED BY (
  dt string,
  dn string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'hdfs://localhost:8020/user/zyh/hudi/hivetest';
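For comparison, the read-optimized companion table (which Hive sync would create as member_ro for a MERGE_ON_READ table) uses the same schema and location but the plain Hudi input format; a sketch, assuming the same layout:

CREATE EXTERNAL TABLE member_ro(
  _hoodie_commit_time string,
  _hoodie_commit_seqno string,
  _hoodie_record_key string,
  _hoodie_partition_path string,
  _hoodie_file_name string,
  uid int,
  ad_id int,
  fullname string,
  iconurl string,
  ts string,
  hudipartition string)
PARTITIONED BY (
  dt string,
  dn string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT
  'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
  'hdfs://localhost:8020/user/zyh/hudi/hivetest';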

Test writing data:
import com.google.gson.Gson
import com.zyh.bean.DwsMember
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieIndexConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import org.apache.hudi.index.HoodieIndex
import org.apache.spark.sql.{SaveMode, SparkSession}

object HudiTestHive {

  def main(args: Array[String]): Unit = {

    val sparkSession = SparkSession
      .builder()
      .appName("dwd_member_import")
      .master("local[*]")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .enableHiveSupport()
      .getOrCreate()

    val ssc = sparkSession.sparkContext
    ssc.hadoopConfiguration.set("fs.defaultFS", "hdfs://localhost:8020")
    ssc.hadoopConfiguration.set("dfs.nameservices", "localhost")

    import org.apache.spark.sql.functions._
    import sparkSession.implicits._
    val commitTime = System.currentTimeMillis().toString

    val df = sparkSession.read.text("/user/test/ods/member.log")
      .mapPartitions(partitions => {
        val gson = new Gson
        partitions.map(item => {
          gson.fromJson(item.getString(0), classOf[DwsMember])
        })
      })
      .withColumn("ts", lit(commitTime))
      .withColumn("hudipartition", concat_ws("/", col("dt"), col("dn")))

    // Load the Hive JDBC driver; Hudi's Hive sync connects to HiveServer2 over JDBC
    Class.forName("org.apache.hive.jdbc.HiveDriver")

    df.write.format("org.apache.hudi")
      .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL)
      .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "uid")
      .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "ts")
      .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "hudipartition")
      .option("hoodie.table.name", "member")
      .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY, "jdbc:hive2://localhost:10000")
      .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, "hudi_test")
      .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, "member")
      .option(DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY, "dt,dn")
      .option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[MultiPartKeysValueExtractor].getName)
      .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
      .option(HoodieIndexConfig.BLOOM_INDEX_UPDATE_PARTITION_PATH, "true")
      .option(HoodieIndexConfig.INDEX_TYPE_PROP, HoodieIndex.IndexType.GLOBAL_BLOOM.name())
      .option("hoodie.insert.shuffle.parallelism", "12")
      .option("hoodie.upsert.shuffle.parallelism", "12")
      .mode(SaveMode.Append)
      .save("/user/zyh/hudi/hivetest")

  }
}

Query the table data:
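For example, from beeline (a sketch; hudi_test.member_rt follows from the sync options above). For MERGE_ON_READ real-time tables, the Hudi docs suggest switching Hive to HiveInputFormat first, since the default CombineHiveInputFormat cannot handle the realtime format:

set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
select uid, fullname, ts from hudi_test.member_rt limit 10;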

Hudi's integration with Hive means Hudi syncs its data into Hive so that it can be queried from Hive.

2. Comparing table types

To compare copy_on_write and merge_on_read tables, generate the data twice: read the same member log data and write it out as each of the two table types.

import com.google.gson.Gson
import com.zyh.bean.DwsMember
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieIndexConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor
import org.apache.hudi.index.HoodieIndex
import org.apache.spark.sql.{SaveMode, SparkSession}

object HudiTestHiveForTableType {

  def main(args: Array[String]): Unit = {
    System.setProperty("HADOOP_USER_NAME", "zhangyunhao")

    val sparkSession = SparkSession
      .builder()
      .appName("dwd_member_import")
      .master("local[*]")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .enableHiveSupport()
      .getOrCreate()

    val ssc = sparkSession.sparkContext
    ssc.hadoopConfiguration.set("fs.defaultFS", "hdfs://localhost:8020")
    ssc.hadoopConfiguration.set("dfs.nameservices", "localhost")

    generateData(sparkSession)

  }

  def generateData(sparkSession: SparkSession) = {

    import org.apache.spark.sql.functions._
    import sparkSession.implicits._

    val commitTime = System.currentTimeMillis().toString
    val df = sparkSession.read.text("/user/test/ods/member.log")
      .mapPartitions(partitions => {
        val gson = new Gson
        partitions.map(item => {
          gson.fromJson(item.getString(0), classOf[DwsMember])
        })
      }).withColumn("ts", lit(commitTime))
      .withColumn("hudipartition", concat_ws("/", col("dt"), col("dn")))

    df.write.format("org.apache.hudi")
      .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL)
      .option("hoodie.insert.shuffle.parallelism", 12)
      .option("hoodie.upsert.shuffle.parallelism", 12)
      .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "uid")
      .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "ts")
      .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "hudipartition")
      .option("hoodie.table.name", "hudimembertest1")
      .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY, "jdbc:hive2://localhost:10000")
      .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, "hudi_test")
      .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, "member1")
      .option(DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY, "dt,dn")
      .option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[MultiPartKeysValueExtractor].getName)
      .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
      .option(HoodieIndexConfig.BLOOM_INDEX_UPDATE_PARTITION_PATH, "true")
      .option(HoodieIndexConfig.INDEX_TYPE_PROP, HoodieIndex.IndexType.GLOBAL_BLOOM.name())
      .option("hoodie.insert.shuffle.parallelism", "12")
      .option("hoodie.upsert.shuffle.parallelism", "12")
      .mode(SaveMode.Overwrite)
      .save("/user/zyh/hudi/hudimembertest1")

    df.write.format("org.apache.hudi")
      .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL)
      .option("hoodie.insert.shuffle.parallelism", 12)
      .option("hoodie.upsert.shuffle.parallelism", 12)
      .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "uid")
      .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "ts")
      .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "hudipartition")
      .option("hoodie.table.name", "hudimembertest2")
      .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY, "jdbc:hive2://localhost:10000")
      .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, "hudi_test")
      .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, "member2")
      .option(DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY, "dt,dn")
      .option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[MultiPartKeysValueExtractor].getName)
      .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
      .option(HoodieIndexConfig.BLOOM_INDEX_UPDATE_PARTITION_PATH, "true")
      .option(HoodieIndexConfig.INDEX_TYPE_PROP, HoodieIndex.IndexType.GLOBAL_BLOOM.name())
      .option("hoodie.insert.shuffle.parallelism", "12")
      .option("hoodie.upsert.shuffle.parallelism", "12")
      .mode(SaveMode.Overwrite)
      .save("/user/zyh/hudi/hudimembertest2")
  }
}

Check Hive: three tables have been created automatically. Inspect their structures with the commands below.
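A sketch of what to run and roughly what to expect, given the sync options above:

use hudi_test;
show tables;
-- expect: member1_ro and member1_rt for the MERGE_ON_READ table, plus member2 for COPY_ON_WRITE
show create table member1_rt;   -- INPUTFORMAT: org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat
show create table member1_ro;   -- INPUTFORMAT: org.apache.hudi.hadoop.HoodieParquetInputFormat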

You can see two input formats, HoodieParquetInputFormat and HoodieParquetRealtimeInputFormat; in Hudi these correspond to the read-optimized view and the real-time view.

A MERGE_ON_READ table automatically gets both views when synced, while a COPY_ON_WRITE table only gets the read-optimized view.
Querying the tables shows the data has been synced in both.

3. Updating data

With the data in place, run an update against both tables: modify only the 10 records with uid 0-9, changing fullname to testName for all of them.

def updateData(sparkSession: SparkSession) = {
    import org.apache.spark.sql.functions._
    import sparkSession.implicits._
    val commitTime = System.currentTimeMillis().toString
    val df = sparkSession.read.text("/user/test/ods/member2.log")
      .mapPartitions(partitions => {
        val gson = new Gson
        partitions.map(item => {
          gson.fromJson(item.getString(0), classOf[DwsMember])
        })
      }).where("uid>=0 and uid)

    val result = df.map(item => {
      item.fullname = "testName"
      item
    }).withColumn("ts", lit(commitTime))
      .withColumn("hudipartition", concat_ws("/", col("dt"), col("dn")))

    result.write.format("org.apache.hudi")
      .option("hoodie.insert.shuffle.parallelism", 12)
      .option("hoodie.upsert.shuffle.parallelism", 12)
      .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "uid")
      .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "ts")
      .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "hudipartition")
      .option("hoodie.table.name", "hudimembertest1")
      .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY, "jdbc:hive2://localhost:10000")
      .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, "hudi_test")
      .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, "member1")
      .option(DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY, "dt,dn")
      .option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[MultiPartKeysValueExtractor].getName)
      .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
      .option(HoodieIndexConfig.BLOOM_INDEX_UPDATE_PARTITION_PATH, "true")
      .option(HoodieIndexConfig.INDEX_TYPE_PROP, HoodieIndex.IndexType.GLOBAL_BLOOM.name())
      .option("hoodie.insert.shuffle.parallelism", "12")
      .option("hoodie.upsert.shuffle.parallelism", "12")
      .mode(SaveMode.Append)
      .save("/user/zyh/hudi/hudimembertest1")

    result.write.format("org.apache.hudi")
      .option("hoodie.insert.shuffle.parallelism", 12)
      .option("hoodie.upsert.shuffle.parallelism", 12)
      .option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "uid")
      .option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "ts")
      .option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "hudipartition")
      .option("hoodie.table.name", "hudimembertest2")
      .option(DataSourceWriteOptions.HIVE_URL_OPT_KEY, "jdbc:hive2://localhost:10000")
      .option(DataSourceWriteOptions.HIVE_DATABASE_OPT_KEY, "hudi_test")
      .option(DataSourceWriteOptions.HIVE_TABLE_OPT_KEY, "member2")
      .option(DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY, "dt,dn")
      .option(DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY, classOf[MultiPartKeysValueExtractor].getName)
      .option(DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY, "true")
      .option(HoodieIndexConfig.BLOOM_INDEX_UPDATE_PARTITION_PATH, "true")
      .option(HoodieIndexConfig.INDEX_TYPE_PROP, HoodieIndex.IndexType.GLOBAL_BLOOM.name())
      .option("hoodie.insert.shuffle.parallelism", "12")
      .option("hoodie.upsert.shuffle.parallelism", "12")
      .mode(SaveMode.Append)
      .save("/user/zyh/hudi/hudimembertest2")
  }

After the update completes, check how the files changed under the corresponding HDFS paths.

You can see that for the changed records, Merge_On_Read (the merge-on-read table) applies the modification as incremental log files, while Copy_On_Write (the copy-on-write table) performs a full rewrite, producing a brand-new data file.

Comparing the two: COPY_ON_WRITE pays the merge cost at write time by rewriting base files, which keeps reads fast, while MERGE_ON_READ defers that cost to read (and compaction) time, which keeps writes fast.

After the updates are complete, query the corresponding Hive tables.
Notice that the read-optimized view (the table ending in _ro) has not changed. Now query the _rt table:
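For instance, a sketch against the tables synced above:

select uid, fullname from hudi_test.member1_rt where uid between 0 and 9;
-- fullname should now read 'testName' for these 10 rows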
The _rt table's data has changed: the real-time view is a merged view over the base data and the log data. Finally, query member2's real-time view; since member2 is COPY_ON_WRITE, its single synced table already reflects the update.

4. Summary

From this we can understand that a MERGE_ON_READ table records its data incrementally: every modification is persisted as log files. A COPY_ON_WRITE table, by contrast, stores data by full rewrite: each modification is merged with the historical data to produce a brand-new data file, and the historical files are not deleted.
As for the two table views, the HoodieParquetInputFormat format only supports querying the table's base data, while the HoodieParquetRealtimeInputFormat format supports querying the merged view of the table and its logs, serving the latest data.
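That contrast can be checked directly from Hive; a sketch:

select fullname from hudi_test.member1_ro where uid = 1;  -- base file only: still the old name
select fullname from hudi_test.member1_rt where uid = 1;  -- base + logs merged: 'testName'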

Original: https://blog.csdn.net/qq_36039236/article/details/123331111
Author: 雾岛与鲸
Title: Hudi 集成 Hive
