Start the ZooKeeper cluster
Environment variable configuration for the one-key start scripts
#set onekey env
export OK_HOME=/export/servers/oneKey
export PATH=${OK_HOME}/zk:$PATH
export PATH=${OK_HOME}/storm:$PATH
export PATH=${OK_HOME}/kafka:$PATH
About the "black hole" (/dev/null)
Directory layout of the one-key scripts
-rw-r--r--. 1 root root  21 Nov 11 03:46 slave
-rwxr-xr-x. 1 root root 160 Nov 11 03:46 startzk.sh
-rwxr-xr-x. 1 root root 172 Nov 11 03:47 stopzk.sh
/export/servers/oneKey/zk
The startzk.sh file
cat /export/servers/oneKey/zk/slave | while read line
do
{
  echo $line
  ssh $line "source /etc/profile;nohup zkServer.sh start >/dev/null 2>&1 &"
}&
wait
done
The stopzk.sh stop script
cat /export/servers/oneKey/zk/slave | while read line
do
{
  echo $line
  ssh $line "source /etc/profile;jps |grep QuorumPeerMain |cut -c 1-4 |xargs kill -s 9"
}&
wait
done
Running a command on another server over SSH
ssh hostname "command"
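For example (a minimal sketch; zk01 is just one of the hostnames used elsewhere in these notes), checking the Java processes on a remote node looks like:
ssh zk01 "source /etc/profile; jps"
This is exactly the pattern the start/stop scripts above rely on.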
Start the Kafka cluster
Environment variables
#set onekey env
export OK_HOME=/export/servers/oneKey
export PATH=${OK_HOME}/zk:$PATH
export PATH=${OK_HOME}/storm:$PATH
export PATH=${OK_HOME}/kafka:$PATH
Kafka environment variable configuration
#set kafka env
export KAFKA_HOME=/export/servers/kafka
export PATH=${KAFKA_HOME}/bin:$PATH
Start script
cat /export/servers/oneKey/kafka/slave | while read line
do
{
  echo $line
  ssh $line "source /etc/profile;nohup kafka-server-start.sh /export/servers/kafka/config/server.properties >/dev/null 2>&1 &"
}&
wait
done
Stop script
cat /export/servers/oneKey/kafka/slave | while read line
do
{
  echo $line
  ssh $line "source /etc/profile;jps |grep Kafka |cut -c 1-4 |xargs kill -s 9"
}&
wait
done
1. Basic cluster deployment workflow
Download the package, extract it, edit the configuration files, distribute the package to all nodes, and start the cluster.
2. Prepare the base environment for cluster deployment
Pre-installation preparation (the ZooKeeper cluster is already deployed)
Turn off the firewall
chkconfig iptables off && service iptables stop
3. Extract the package
tar -zxvf kafka_2.11-1.0.0.tgz -C /export/servers/
cd /export/servers/
mv kafka_2.11-1.0.0 kafka
4. Edit the configuration file
Change into the kafka directory (cd /export/servers/kafka), then:
cp config/server.properties config/server.properties.bak
vi config/server.properties
Set the following content:
# Globally unique broker id; must not be repeated
broker.id=0
# Port the broker listens on; producers and consumers connect on this port
port=9092
# Number of threads handling network requests
num.network.threads=3
# Number of threads handling disk I/O
num.io.threads=8
# Send buffer size of the socket
socket.send.buffer.bytes=102400
# Receive buffer size of the socket
socket.receive.buffer.bytes=102400
# Maximum size of a request accepted by the socket
socket.request.max.bytes=104857600
# Path where Kafka stores its data (log) files
log.dirs=/export/data/kafka/
# Default number of partitions per topic on this broker
num.partitions=2
# Number of threads per data directory used for log recovery and cleanup
num.recovery.threads.per.data.dir=1
# Maximum time a segment file is retained; expired segments are deleted
log.retention.hours=1
# Maximum time before a new segment file is rolled
log.roll.hours=1
# Size of each segment file in the log; the default is 1 GB
log.segment.bytes=1073741824
# Interval at which segment sizes are checked against the retention policy
log.retention.check.interval.ms=300000
# Whether log cleaning is enabled
log.cleaner.enable=true
# ZooKeeper connection string; the broker stores its metadata in ZooKeeper
zookeeper.connect=zk01:2181,zk02:2181,zk03:2181
# ZooKeeper connection timeout
zookeeper.connection.timeout.ms=6000
# Flush to disk once the number of messages in the partition buffer reaches this threshold
log.flush.interval.messages=10000
# Flush to disk once messages have been buffered for this long
log.flush.interval.ms=3000
# Deleting a topic requires delete.topic.enable=true in server.properties; otherwise the topic is only marked for deletion
delete.topic.enable=true
# host.name must be this machine's own hostname/IP (important); if it is not changed, clients will throw: Producer connection to localhost:9092 unsuccessful
host.name=kafka01
advertised.host.name=192.168.175.211    (note: set this to each machine's own IP)
5. Distribute the package
scp -r /export/servers/kafka kafka02:/export/servers
scp -r /export/servers/kafka kafka03:/export/servers
6. Edit the configuration files again (important)
On each server, change broker.id in turn to 0, 1, 2; the values must not repeat. On every server, create the log.dirs directory /export/data/kafka/.
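A minimal sketch of this step, run from the first broker (kafka01). The hostnames and paths follow the scp commands above; the sed one-liners are illustrative and not part of the original notes. Remember that host.name / advertised.host.name must also be set to each machine's own hostname/IP, as noted in the configuration above.

# kafka02: set broker.id=1 and create the data directory
ssh kafka02 "sed -i 's/^broker.id=.*/broker.id=1/' /export/servers/kafka/config/server.properties; mkdir -p /export/data/kafka/"
# kafka03: set broker.id=2 and create the data directory
ssh kafka03 "sed -i 's/^broker.id=.*/broker.id=2/' /export/servers/kafka/config/server.properties; mkdir -p /export/data/kafka/"
# the local node keeps broker.id=0; it only needs the data directory
mkdir -p /export/data/kafka/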
7. Start the cluster
Start Kafka on each node in turn
nohup /export/servers/kafka/bin/kafka-server-start.sh /export/servers/kafka/config/server.properties >/dev/null 2>&1 &
Note: the output is redirected to the "black hole" (/dev/null)
command >/dev/null 2>&1 &
Common Kafka commands
List all topics on the current server
bin/kafka-topics.sh --list --zookeeper zk01:2181
Create a topic
bin/kafka-topics.sh --create --zookeeper zk01:2181 --replication-factor 1 --partitions 1 --topic test
Delete a topic
bin/kafka-topics.sh --delete --zookeeper zk01:2181 --topic test
This requires delete.topic.enable=true in server.properties; otherwise the topic is only marked for deletion (or the brokers have to be restarted).
Send messages from a shell
bin/kafka-console-producer.sh --broker-list kafka01:9092 --topic test
Consume messages from a shell
bin/kafka-console-consumer.sh --zookeeper zk01:2181 --from-beginning --topic test
Check the consumer offsets
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper zk01:2181 --group testGroup
Describe a topic
bin/kafka-topics.sh --topic test --describe --zookeeper zk01:2181
Change the number of partitions
bin/kafka-topics.sh --zookeeper zk01:2181 --alter --partitions 2 --topic test
1. Install the JDK beforehand
Not covered here.
2. Download and extract the ZooKeeper package
tar -zxvf zookeeper-3.4.5.tar.gz
mv zookeeper-3.4.5 zookeeper
3. Set the environment variables (note: all 3 ZooKeeper machines need this)
vi /etc/profile
export ZOOKEEPER_HOME=/root/apps/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source /etc/profile
4. Edit the ZooKeeper configuration file
cd zookeeper/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
Add the following:
dataDir=/root/apps/zookeeper/zkdata
server.1=mini1:2888:3888   ## (heartbeat port, election port)
server.2=mini2:2888:3888
server.3=mini3:2888:3888
Create the data folder:
cd /root/apps/zookeeper/
mkdir zkdata
Create a myid file in the data folder; its content is:
cd zkdata
echo 1 > myid    (note: the myid on the three machines must be 1, 2 and 3 respectively; change it on each machine)
Distribute the package to the other machines
scp -r /root/apps root@mini2:/root/
scp -r /root/apps root@mini3:/root/
Adjust the configuration on the other machines
Edit the myid file
On mini2: change myid to 2
On mini3: change myid to 3
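The myid files on mini2 and mini3 can also be written remotely in one step (a sketch, assuming the dataDir path configured in zoo.cfg above):

ssh mini2 "echo 2 > /root/apps/zookeeper/zkdata/myid"
ssh mini3 "echo 3 > /root/apps/zookeeper/zkdata/myid"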
5. Start (on every machine)
zkServer.sh start
Or write a script to start all machines in one go:
for host in mini1 mini2 mini3
do
  ssh $host "source /etc/profile;/root/apps/zookeeper/bin/zkServer.sh start"
done
6. Check the cluster status
jps    (check the processes)
zkServer.sh status    (check the cluster status and leader/follower roles)
If startup fails, check the zookeeper.out log for the error messages.
Explanation of the configuration file parameters:
tickTime is the interval used for heartbeats between ZooKeeper servers, or between clients and servers; a heartbeat is sent every tickTime.
initLimit is the maximum number of tickTime intervals that ZooKeeper tolerates while a "client" establishes its initial connection (the "client" here is not a user client connecting to the ZooKeeper server, but a follower server in the cluster connecting to the leader).
If the ZooKeeper server has still received nothing from that client after more than 10 heartbeat intervals (i.e. tickTimes), the connection is considered failed; the total time is 10*2000 = 20 seconds.
syncLimit bounds the time allowed for a request/response exchange between the leader and a follower, measured in tickTimes; the total time is 5*2000 = 10 seconds.
dataDir is, as the name suggests, the directory where ZooKeeper stores its data; by default the transaction log files are also written to this directory.
clientPort is the port clients use to connect to the ZooKeeper server; ZooKeeper listens on this port for client requests.
In server.A=B:C:D, A is a number identifying which server this is, B is the server's IP address (or hostname), C is the port used to exchange information with the cluster leader, and D is the port used to elect a new leader when the leader goes down.
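Putting these parameters together, a complete zoo.cfg for this cluster would look roughly like the sketch below (tickTime, initLimit and syncLimit use the sample defaults that the 10*2000 and 5*2000 calculations above assume):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/apps/zookeeper/zkdata
clientPort=2181
server.1=mini1:2888:3888
server.2=mini2:2888:3888
server.3=mini3:2888:3888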
Hive analytic window functions (1): SUM, AVG, MIN, MAX
Data preparation
Create table:
create table itcast_t1(
  cookieid string,
  createtime string,   --day
  pv int
) row format delimited fields terminated by ',';

Load data:
load data local inpath '/root/hivedata/itcast_t1.dat' into table itcast_t1;

cookie1,2018-04-10,1
cookie1,2018-04-11,5
cookie1,2018-04-12,7
cookie1,2018-04-13,3
cookie1,2018-04-14,2
cookie1,2018-04-15,4
cookie1,2018-04-16,4

Enable smart local mode:
SET hive.exec.mode.local.auto=true;
SUM (the result depends on ORDER BY; the default sort order is ascending)
select cookieid,createtime,pv,
  sum(pv) over(partition by cookieid order by createtime) as pv1
from itcast_t1;

select cookieid,createtime,pv,
  sum(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
from itcast_t1;

select cookieid,createtime,pv,
  sum(pv) over(partition by cookieid) as pv3
from itcast_t1;

select cookieid,createtime,pv,
  sum(pv) over(partition by cookieid order by createtime rows between 3 preceding and current row) as pv4
from itcast_t1;

select cookieid,createtime,pv,
  sum(pv) over(partition by cookieid order by createtime rows between 3 preceding and 1 following) as pv5
from itcast_t1;

select cookieid,createtime,pv,
  sum(pv) over(partition by cookieid order by createtime rows between current row and unbounded following) as pv6
from itcast_t1;

pv1: running total of pv within the group, from the first row to the current row; e.g. pv1 on the 11th = pv on the 10th + pv on the 11th; the 12th = 10th + 11th + 12th
pv2: same as pv1
pv3: sum of all pv within the group (cookie1)
pv4: current row plus the previous 3 rows; e.g. the 11th = 10th + 11th; the 12th = 10th + 11th + 12th; the 13th = 10th + 11th + 12th + 13th; the 14th = 11th + 12th + 13th + 14th
pv5: current row plus the previous 3 rows plus the following 1 row; e.g. the 14th = 11th + 12th + 13th + 14th + 15th = 5+7+3+2+4 = 21
pv6: current row plus all following rows; e.g. the 13th = 13th + 14th + 15th + 16th = 3+2+4+4 = 13; the 14th = 14th + 15th + 16th = 2+4+4 = 10
- If rows between is not specified, the default frame runs from the start of the partition to the current row;
- If order by is not specified, all values in the partition are aggregated;
- The key is to understand rows between, also called the window clause (see the short example after this list):
- preceding: rows before the current row
- following: rows after the current row
- current row: the current row
- unbounded: no bound
- unbounded preceding: from the start of the partition
- unbounded following: to the end of the partition
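One more illustration of the window clause, as referenced in the list above (a sketch over the itcast_t1 table; the centered 3-row frame is chosen only for demonstration):

-- 3-row moving sum: previous row + current row + next row
select cookieid, createtime, pv,
  sum(pv) over(partition by cookieid order by createtime
               rows between 1 preceding and 1 following) as pv_moving3
from itcast_t1;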
AVG, MIN and MAX are used the same way as SUM
select cookieid,createtime,pv,
  avg(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
from itcast_t1;

select cookieid,createtime,pv,
  max(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
from itcast_t1;

select cookieid,createtime,pv,
  min(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
from itcast_t1;
Hive analytic window functions (2): NTILE, ROW_NUMBER, RANK, DENSE_RANK
Data preparation
cookie1,2018-04-10,1
cookie1,2018-04-11,5
cookie1,2018-04-12,7
cookie1,2018-04-13,3
cookie1,2018-04-14,2
cookie1,2018-04-15,4
cookie1,2018-04-16,4
cookie2,2018-04-10,2
cookie2,2018-04-11,3
cookie2,2018-04-12,5
cookie2,2018-04-13,6
cookie2,2018-04-14,3
cookie2,2018-04-15,9
cookie2,2018-04-16,7

CREATE TABLE itcast_t2 (
  cookieid string,
  createtime string,   --day
  pv INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile;

Load data:
load data local inpath '/root/hivedata/itcast_t2.dat' into table itcast_t2;
NTILE
Background:
Sometimes the requirement looks like this: the data is sorted and split into three parts, and the business only cares about one of them; how do we pull out, say, the middle third? The NTILE function covers this.
NTILE can be thought of as distributing an ordered data set evenly into the specified number (num) of buckets and assigning the bucket number to each row.
If the rows cannot be distributed evenly, the lower-numbered buckets are filled first, and the bucket sizes differ by at most 1.
The syntax is: ntile(num) over ([partition_clause] order_by_clause) as xxx
You can then use the bucket number to select the first or last n-th fraction of the data.
All rows are returned; they are only labeled with their bucket number. To actually take a given fraction, nest the query and filter on that label.
NTILE does not support ROWS BETWEEN, e.g. NTILE(2) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) is not allowed.
SELECT cookieid, createtime, pv,
  NTILE(2) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn1,
  NTILE(3) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn2,
  NTILE(4) OVER(ORDER BY createtime) AS rn3
FROM itcast_t2
ORDER BY cookieid,createtime;
For example, for each cookie, find the days in the top 1/3 by pv:
SELECT cookieid, createtime, pv,
  NTILE(3) OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn
FROM itcast_t2;

The rows where rn = 1 are the result we want.
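As noted above, to actually keep only that top third, nest the query and filter on the bucket label (a sketch):

SELECT cookieid, createtime, pv
FROM (
  SELECT cookieid, createtime, pv,
         NTILE(3) OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn
  FROM itcast_t2
) t
WHERE t.rn = 1;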
ROW_NUMBER
ROW_NUMBER() generates the sequence number of each row within its group, starting from 1 and following the sort order.
SELECT cookieid, createtime, pv,
  ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn
FROM itcast_t2;
RANK and DENSE_RANK
RANK() produces the rank of each row within its group; ties leave gaps in the ranking.
DENSE_RANK() produces the rank of each row within its group; ties do not leave gaps in the ranking.

SELECT cookieid, createtime, pv,
  RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn1,
  DENSE_RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn2,
  ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn3
FROM itcast_t2
WHERE cookieid = 'cookie1';
Hive analytic window functions (3): CUME_DIST, PERCENT_RANK
These two sequence analysis functions are not very commonly used. Note: sequence functions do not support the WINDOW clause.
Data preparation
d1,user1,1000
d1,user2,2000
d1,user3,3000
d2,user4,4000
d2,user5,5000

CREATE EXTERNAL TABLE itcast_t3 (
  dept STRING,
  userid string,
  sal INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile;

Load data:
load data local inpath '/root/hivedata/itcast_t3.dat' into table itcast_t3;
CUME_DIST depends on the ORDER BY sort order.
CUME_DIST = (number of rows with a value less than or equal to the current value) / (total rows in the group); the default ORDER BY direction is ascending.
For example, compute the proportion of people whose salary is less than or equal to the current salary:

SELECT dept, userid, sal,
  CUME_DIST() OVER(ORDER BY sal) AS rn1,
  CUME_DIST() OVER(PARTITION BY dept ORDER BY sal) AS rn2
FROM itcast_t3;

rn1: no partition, so all rows form one group with 5 rows in total;
  row 1: 1 row has sal <= 1000, so 1/5 = 0.2
  row 3: 3 rows have sal <= 3000, so 3/5 = 0.6
rn2: grouped by dept; dept=d1 has 3 rows;
  row 2: 2 rows have sal <= 2000, so 2/3 = 0.6666666666666666
PERCENT_RANK
PERCENT_RANK = (RANK of the current row within the group - 1) / (total rows in the group - 1)
The practical significance of this function is not obvious; it needs further investigation.
SELECT dept, userid, sal,
  PERCENT_RANK() OVER(ORDER BY sal) AS rn1,              --within the group
  RANK() OVER(ORDER BY sal) AS rn11,                     --RANK within the group
  SUM(1) OVER(PARTITION BY NULL) AS rn12,                --total rows in the group
  PERCENT_RANK() OVER(PARTITION BY dept ORDER BY sal) AS rn2
FROM itcast_t3;

rn1: rn1 = (rn11-1) / (rn12-1)
  row 1: (1-1)/(5-1) = 0/4 = 0
  row 2: (2-1)/(5-1) = 1/4 = 0.25
  row 4: (4-1)/(5-1) = 3/4 = 0.75
rn2: grouped by dept; dept=d1 has 3 rows in total
  row 1: (1-1)/(3-1) = 0
  row 3: (3-1)/(3-1) = 1
Hive analytic window functions (4): LAG, LEAD, FIRST_VALUE, LAST_VALUE
Note: these functions do not support the WINDOW clause.
Data preparation
cookie1,2018-04-10 10:00:02,url2
cookie1,2018-04-10 10:00:00,url1
cookie1,2018-04-10 10:03:04,1url3
cookie1,2018-04-10 10:50:05,url6
cookie1,2018-04-10 11:00:00,url7
cookie1,2018-04-10 10:10:00,url4
cookie1,2018-04-10 10:50:01,url5
cookie2,2018-04-10 10:00:02,url22
cookie2,2018-04-10 10:00:00,url11
cookie2,2018-04-10 10:03:04,1url33
cookie2,2018-04-10 10:50:05,url66
cookie2,2018-04-10 11:00:00,url77
cookie2,2018-04-10 10:10:00,url44
cookie2,2018-04-10 10:50:01,url55

CREATE TABLE itcast_t4 (
  cookieid string,
  createtime string,   --page visit time
  url STRING           --visited page
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile;

Load data:
load data local inpath '/root/hivedata/itcast_t4.dat' into table itcast_t4;
LAG
LAG(col, n, DEFAULT) returns the value of the n-th row above (before) the current row within the window.
The first argument is the column name, the second is the offset n of the row above (optional, default 1), and the third is the default value (used when the n-th row above is NULL; if not specified, NULL is returned).

SELECT cookieid, createtime, url,
  ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
  LAG(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS last_1_time,
  LAG(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS last_2_time
FROM itcast_t4;

last_1_time: the value 1 row above, with default '1970-01-01 00:00:00'
  cookie1 row 1: the row above is NULL, so the default 1970-01-01 00:00:00 is used
  cookie1 row 3: the row above is row 2, 2018-04-10 10:00:02
  cookie1 row 6: the row above is row 5, 2018-04-10 10:50:01
last_2_time: the value 2 rows above, with no default specified
  cookie1 row 1: 2 rows above is NULL
  cookie1 row 2: 2 rows above is NULL
  cookie1 row 4: 2 rows above is row 2, 2018-04-10 10:00:02
  cookie1 row 7: 2 rows above is row 5, 2018-04-10 10:50:01
LEAD
The opposite of LAG.
LEAD(col, n, DEFAULT) returns the value of the n-th row below (after) the current row within the window.
The first argument is the column name, the second is the offset n of the row below (optional, default 1), and the third is the default value (used when the n-th row below is NULL; if not specified, NULL is returned).

SELECT cookieid, createtime, url,
  ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
  LEAD(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS next_1_time,
  LEAD(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS next_2_time
FROM itcast_t4;
FIRST_VALUE
Returns the first value within the group after sorting, up to the current row.
SELECT cookieid, createtime, url,
  ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
  FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS first1
FROM itcast_t4;
LAST_VALUE
Returns the last value within the group after sorting, up to the current row.
SELECT cookieid, createtime, url,
  ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
  LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1
FROM itcast_t4;
To get the last value of the whole group after sorting, a workaround is needed:
SELECT cookieid, createtime, url,
  ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
  LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1,
  FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime DESC) AS last2
FROM itcast_t4
ORDER BY cookieid,createtime;
Pay special attention to ORDER BY.
If ORDER BY is not specified, the ordering is undefined and the results can be wrong.
SELECT cookieid,
createtime,
url,
FIRST_VALUE(url) OVER(PARTITION BY cookieid) AS first2
FROM itcast_t4;
Hive analytic window functions (5): GROUPING SETS, GROUPING__ID, CUBE, ROLLUP
These analytic functions are typically used in OLAP scenarios where the metrics cannot simply be accumulated and have to be computed while drilling up and down across different dimensions, e.g. UV counts by hour, day and month.
Data preparation
2018-03,2018-03-10,cookie1
2018-03,2018-03-10,cookie5
2018-03,2018-03-12,cookie7
2018-04,2018-04-12,cookie3
2018-04,2018-04-13,cookie2
2018-04,2018-04-13,cookie4
2018-04,2018-04-16,cookie4
2018-03,2018-03-10,cookie2
2018-03,2018-03-10,cookie3
2018-04,2018-04-12,cookie5
2018-04,2018-04-13,cookie6
2018-04,2018-04-15,cookie3
2018-04,2018-04-15,cookie2
2018-04,2018-04-16,cookie1

CREATE TABLE itcast_t5 (
  month STRING,
  day STRING,
  cookieid STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
stored as textfile;

Load data:
load data local inpath '/root/hivedata/itcast_t5.dat' into table itcast_t5;
GROUPING SETS
GROUPING SETS is a convenient way of writing several GROUP BY clauses in a single SQL statement.
It is equivalent to a UNION ALL of the GROUP BY result sets of the different dimensions.
GROUPING__ID indicates which grouping set a result row belongs to.
SELECT month, day,
  COUNT(DISTINCT cookieid) AS uv,
  GROUPING__ID
FROM itcast_t5
GROUP BY month,day
GROUPING SETS (month,day)
ORDER BY GROUPING__ID;

GROUPING__ID indicates which grouping set a row of results belongs to; with the grouping conditions month and day, 1 stands for month and 2 stands for day.

Equivalent to:
SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM itcast_t5 GROUP BY month
UNION ALL
SELECT NULL as month,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM itcast_t5 GROUP BY day;
Another example:
SELECT month, day,
  COUNT(DISTINCT cookieid) AS uv,
  GROUPING__ID
FROM itcast_t5
GROUP BY month,day
GROUPING SETS (month,day,(month,day))
ORDER BY GROUPING__ID;

Equivalent to:
SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM itcast_t5 GROUP BY month
UNION ALL
SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM itcast_t5 GROUP BY day
UNION ALL
SELECT month,day,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM itcast_t5 GROUP BY month,day;
CUBE
Aggregates over all combinations of the GROUP BY dimensions.
SELECT month, day,
  COUNT(DISTINCT cookieid) AS uv,
  GROUPING__ID
FROM itcast_t5
GROUP BY month,day
WITH CUBE
ORDER BY GROUPING__ID;

Equivalent to:
SELECT NULL,NULL,COUNT(DISTINCT cookieid) AS uv,0 AS GROUPING__ID FROM itcast_t5
UNION ALL
SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM itcast_t5 GROUP BY month
UNION ALL
SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM itcast_t5 GROUP BY day
UNION ALL
SELECT month,day,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM itcast_t5 GROUP BY month,day;
ROLLUP
ROLLUP is a subset of CUBE: it aggregates hierarchically, starting from the leftmost dimension.
For example, hierarchical aggregation by the month dimension:
SELECT month, day,
  COUNT(DISTINCT cookieid) AS uv,
  GROUPING__ID
FROM itcast_t5
GROUP BY month,day
WITH ROLLUP
ORDER BY GROUPING__ID;

--Swapping month and day gives hierarchical aggregation by the day dimension:
SELECT day, month,
  COUNT(DISTINCT cookieid) AS uv,
  GROUPING__ID
FROM itcast_t5
GROUP BY day,month
WITH ROLLUP
ORDER BY GROUPING__ID;

(Here, aggregating by day and month gives the same result as aggregating by day alone, because day and month have a parent-child relationship; with other dimension combinations the results would differ.)
Background:
explode and lateral view would not normally appear in a relational database, because they operate on data that violates the first normal form (every attribute should be atomic) and thus go against classic database design principles, whether in operational systems or in data warehouses. In analysis-oriented databases and data warehouses, however, this has changed.
The explode function expands an array or a map:
explode(array) produces one row in the result for each element of the array;
explode(map) produces one row for each key/value pair in the map, with the key in one column and the value in another (a short example of the map form follows below).
Usually it can be used directly, or combined with lateral view where needed.
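A quick illustration of the map form (a sketch; the literal map() values here are invented for demonstration, and older Hive versions require a FROM clause):

select explode(map('k1', 1, 'k2', 2)) as (k, v);
-- returns two rows: (k1, 1) and (k2, 2)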
Using explode
001,allen,usa|china|japan,1|3|7 002,kobe,usa|england|japan,2|3|5
create table test_message(
  id int,
  name string,
  location array<string>,
  city array<int>
) row format delimited fields terminated by ","
collection items terminated by '|';

load data local inpath "/root/hivedata/test_message.txt" into table test_message;
Access array elements
Use an index, similar to array access in other programming languages:
select location[1] from test_message;
Use explode
select explode(location) from test_message;

select name,explode(location) from test_message;   -- this fails
When a UDTF is used, Hive only allows the exploded field(s) to be selected.
lateral view
A lateral view exists to be used together with a UDTF: it splits one row into multiple rows. Without a lateral view, a UDTF can only explode a single field and cannot join the result back to the original table; with a lateral view, the exploded values are joined back to the rest of the original row.
When using a lateral view, you need to specify a view alias and aliases for the generated columns.
tableA lateral view UDTF(xxx) view_alias as a,b,c

select subview.* from test_message lateral view explode(location) subview as lc;
  -- subview is the view alias, lc is the alias of the generated column

select name,subview.* from test_message lateral view explode(location) subview as lc;

lateral view explode behaves like a virtual table obtained by exploding the location field, which is then joined back to the original table.
json_tuple() is also a UDTF, because one JSON string is parsed into n fields. A lateral view is needed to join the parsed fields back to the original table.
select id, v.tag_id, v.tag_type
from some_table   -- placeholder table with a JSON string column named property
lateral view json_tuple(property, 'tag_id', 'tag_type') v as tag_id, tag_type;