Hive Analytic Window Functions (Part 1): SUM, AVG, MIN, MAX
Data Preparation
Create table:
create table itcast_t1(
cookieid string,
createtime string,   --day
pv int
) row format delimited
fields terminated by ',';

Load data:
load data local inpath '/root/hivedata/itcast_t1.dat' into table itcast_t1;

cookie1,2018-04-10,1
cookie1,2018-04-11,5
cookie1,2018-04-12,7
cookie1,2018-04-13,3
cookie1,2018-04-14,2
cookie1,2018-04-15,4
cookie1,2018-04-16,4

Enable automatic local mode:
SET hive.exec.mode.local.auto=true;
SUM (the result depends on ORDER BY; the default sort order is ascending)
select cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime) as pv1
from itcast_t1;

select cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
from itcast_t1;

select cookieid,createtime,pv,
sum(pv) over(partition by cookieid) as pv3
from itcast_t1;

select cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between 3 preceding and current row) as pv4
from itcast_t1;

select cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between 3 preceding and 1 following) as pv5
from itcast_t1;

select cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between current row and unbounded following) as pv6
from itcast_t1;

pv1: cumulative pv from the start of the partition to the current row, e.g. pv1 on the 11th = pv of the 10th + pv of the 11th; the 12th = 10th + 11th + 12th
pv2: same as pv1
pv3: sum of all pv within the partition (cookie1)
pv4: current row plus the 3 preceding rows within the partition, e.g. the 11th = 10th + 11th; the 12th = 10th + 11th + 12th; the 13th = 10th + 11th + 12th + 13th; the 14th = 11th + 12th + 13th + 14th
pv5: current row plus the 3 preceding rows plus 1 following row, e.g. the 14th = 11th + 12th + 13th + 14th + 15th = 5+7+3+2+4 = 21
pv6: current row plus all following rows, e.g. the 13th = 13th + 14th + 15th + 16th = 3+2+4+4 = 13; the 14th = 14th + 15th + 16th = 2+4+4 = 10
- If rows between is not specified, the default frame runs from the start of the partition to the current row;
- If order by is not specified, all values within the partition are summed;
- The key is understanding rows between, also called the window clause (see the sketch after this list):
- preceding: rows before the current row
- following: rows after the current row
- current row: the current row
- unbounded: no boundary
- unbounded preceding: from the start of the partition
- unbounded following: to the end of the partition
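Putting these keywords together, here is a minimal sketch (the pv_centered alias is ours, not part of the original examples) that sums each row with its immediate neighbor on either side:

select cookieid,createtime,pv,
sum(pv) over(partition by cookieid order by createtime rows between 1 preceding and 1 following) as pv_centered
from itcast_t1;

For the 11th this gives 1 + 5 + 7 = 13 (the pv of the 10th, 11th, and 12th).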
AVG, MIN, and MAX are used the same way as SUM
select cookieid,createtime,pv,
avg(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
from itcast_t1;

select cookieid,createtime,pv,
max(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
from itcast_t1;

select cookieid,createtime,pv,
min(pv) over(partition by cookieid order by createtime rows between unbounded preceding and current row) as pv2
from itcast_t1;
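A common application of AVG with a window frame is a moving average. As a hedged sketch (the avg_3day alias is ours), a 3-day moving average of pv per cookie:

select cookieid,createtime,pv,
avg(pv) over(partition by cookieid order by createtime rows between 2 preceding and current row) as avg_3day
from itcast_t1;

For the 12th this averages the pv of the 10th through the 12th: (1 + 5 + 7) / 3 ≈ 4.33.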
Hive Analytic Window Functions (Part 2): NTILE, ROW_NUMBER, RANK, DENSE_RANK
Data Preparation
cookie1,2018-04-10,1
cookie1,2018-04-11,5
cookie1,2018-04-12,7
cookie1,2018-04-13,3
cookie1,2018-04-14,2
cookie1,2018-04-15,4
cookie1,2018-04-16,4
cookie2,2018-04-10,2
cookie2,2018-04-11,3
cookie2,2018-04-12,5
cookie2,2018-04-13,6
cookie2,2018-04-14,3
cookie2,2018-04-15,9
cookie2,2018-04-16,7

CREATE TABLE itcast_t2 (
cookieid string,
createtime string,   --day
pv INT
) ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
stored as textfile;

Load data:
load data local inpath '/root/hivedata/itcast_t2.dat' into table itcast_t2;
NTILE
Background:
Sometimes there is a requirement like this: the sorted data is split into three parts and the business only cares about one of them. How do we pull out, say, the middle third? The NTILE function handles exactly this.
NTILE can be thought of as distributing the ordered rows of a partition as evenly as possible into the specified number (num) of buckets, assigning each row its bucket number. If the rows cannot be divided evenly, lower-numbered buckets are filled first, and the bucket sizes differ by at most 1.
Syntax: ntile (num) over ([partition_clause] order_by_clause) as xxx
You can then use the bucket number to select the first or last fraction of the data. All rows are returned; NTILE only labels each row, so to actually extract the desired fraction you need to wrap the query in another level and filter on the label.
NTILE does not support ROWS BETWEEN; for example, NTILE(2) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) is not allowed.
SELECT cookieid,
createtime,
pv,
NTILE(2) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn1,
NTILE(3) OVER(PARTITION BY cookieid ORDER BY createtime) AS rn2,
NTILE(4) OVER(ORDER BY createtime) AS rn3
FROM itcast_t2
ORDER BY cookieid,createtime;
For example, find the top 1/3 of days by pv for each cookie:
SELECT cookieid,
createtime,
pv,
NTILE(3) OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn
FROM itcast_t2;

The rows with rn = 1 are the result we want.
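As noted above, actually extracting those rows takes one more level of nesting. A minimal sketch (the subquery alias t is ours):

SELECT cookieid,
createtime,
pv
FROM (
SELECT cookieid,
createtime,
pv,
NTILE(3) OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn
FROM itcast_t2
) t
WHERE t.rn = 1;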
ROW_NUMBER
ROW_NUMBER() assigns a sequence number to each row within its partition, starting from 1, following the given order.
SELECT cookieid,
createtime,
pv,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn
FROM itcast_t2;
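A typical use of ROW_NUMBER is top-N per group. As a hedged sketch (the subquery alias t is ours), the top 3 days by pv for each cookie, with ties broken arbitrarily by ROW_NUMBER:

SELECT cookieid,
createtime,
pv
FROM (
SELECT cookieid,
createtime,
pv,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn
FROM itcast_t2
) t
WHERE t.rn <= 3;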
RANK and DENSE_RANK
RANK() generates the rank of each row within its partition; tied values receive the same rank and leave gaps in the ranking.
DENSE_RANK() generates the rank of each row within its partition; tied values receive the same rank but leave no gaps in the ranking.

SELECT cookieid,
createtime,
pv,
RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn1,
DENSE_RANK() OVER(PARTITION BY cookieid ORDER BY pv desc) AS rn2,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY pv DESC) AS rn3
FROM itcast_t2
WHERE cookieid = 'cookie1';
Hive Analytic Window Functions (Part 3): CUME_DIST, PERCENT_RANK
These two sequence analytic functions are not used very often. Note: sequence functions do not support the WINDOW clause.
Data Preparation
d1,user1,1000
d1,user2,2000
d1,user3,3000
d2,user4,4000
d2,user5,5000

CREATE EXTERNAL TABLE itcast_t3 (
dept STRING,
userid string,
sal INT
) ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
stored as textfile;

Load data:
load data local inpath '/root/hivedata/itcast_t3.dat' into table itcast_t3;
The result of CUME_DIST depends on the ORDER BY sort order.
CUME_DIST: the number of rows less than or equal to the current value, divided by the total number of rows in the partition. The default ORDER BY direction is ascending.
For example, compute the fraction of people whose salary is less than or equal to the current salary:

SELECT dept,
userid,
sal,
CUME_DIST() OVER(ORDER BY sal) AS rn1,
CUME_DIST() OVER(PARTITION BY dept ORDER BY sal) AS rn2
FROM itcast_t3;

rn1: with no partition, all rows form a single group with 5 rows in total.
  Row 1: 1 row has sal <= 1000, so 1/5 = 0.2
  Row 3: 3 rows have sal <= 3000, so 3/5 = 0.6
rn2: partitioned by dept; dept=d1 has 3 rows.
  Row 2: 2 rows have sal <= 2000, so 2/3 = 0.6666666666666666
PERCENT_RANK
PERCENT_RANK: (RANK of the current row within the partition - 1) / (total rows in the partition - 1)
In practice the real-world use of this function is not very clear and deserves further investigation.
SELECT dept,
userid,
sal,
PERCENT_RANK() OVER(ORDER BY sal) AS rn1,   --over the whole group
RANK() OVER(ORDER BY sal) AS rn11,          --RANK value within the group
SUM(1) OVER(PARTITION BY NULL) AS rn12,     --total rows in the group
PERCENT_RANK() OVER(PARTITION BY dept ORDER BY sal) AS rn2
FROM itcast_t3;

rn1: rn1 = (rn11 - 1) / (rn12 - 1)
  Row 1: (1-1)/(5-1) = 0/4 = 0
  Row 2: (2-1)/(5-1) = 1/4 = 0.25
  Row 4: (4-1)/(5-1) = 3/4 = 0.75
rn2: partitioned by dept; dept=d1 has 3 rows in total.
  Row 1: (1-1)/(3-1) = 0
  Row 3: (3-1)/(3-1) = 1
Hive Analytic Window Functions (Part 4): LAG, LEAD, FIRST_VALUE, LAST_VALUE
Note: LAG and LEAD do not support the WINDOW (frame) clause.
Data Preparation
cookie1,2018-04-10 10:00:02,url2
cookie1,2018-04-10 10:00:00,url1
cookie1,2018-04-10 10:03:04,1url3
cookie1,2018-04-10 10:50:05,url6
cookie1,2018-04-10 11:00:00,url7
cookie1,2018-04-10 10:10:00,url4
cookie1,2018-04-10 10:50:01,url5
cookie2,2018-04-10 10:00:02,url22
cookie2,2018-04-10 10:00:00,url11
cookie2,2018-04-10 10:03:04,1url33
cookie2,2018-04-10 10:50:05,url66
cookie2,2018-04-10 11:00:00,url77
cookie2,2018-04-10 10:10:00,url44
cookie2,2018-04-10 10:50:01,url55

CREATE TABLE itcast_t4 (
cookieid string,
createtime string,   --page visit time
url STRING           --visited page
) ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
stored as textfile;

Load data:
load data local inpath '/root/hivedata/itcast_t4.dat' into table itcast_t4;
LAG
LAG(col, n, DEFAULT) returns the value of col from the nth row before the current row within the window.
The first argument is the column name, the second is the offset n (optional, default 1), and the third is the default value (returned when the nth preceding row does not exist; if omitted, NULL is returned).

SELECT cookieid,
createtime,
url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
LAG(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS last_1_time,
LAG(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS last_2_time
FROM itcast_t4;

last_1_time: the value 1 row back, with default '1970-01-01 00:00:00'
  cookie1 row 1: there is no previous row, so the default 1970-01-01 00:00:00 is used
  cookie1 row 3: the value 1 row back is row 2's value, 2018-04-10 10:00:02
  cookie1 row 6: the value 1 row back is row 5's value, 2018-04-10 10:50:01
last_2_time: the value 2 rows back, with no default specified
  cookie1 row 1: 2 rows back does not exist, so NULL
  cookie1 row 2: 2 rows back does not exist, so NULL
  cookie1 row 4: 2 rows back is row 2's value, 2018-04-10 10:00:02
  cookie1 row 7: 2 rows back is row 5's value, 2018-04-10 10:50:01
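A typical use of LAG is computing the interval between consecutive events. A minimal sketch, assuming the timestamps are in the default 'yyyy-MM-dd HH:mm:ss' format accepted by unix_timestamp (the prev_time and gap_seconds aliases are ours):

SELECT cookieid,
createtime,
prev_time,
unix_timestamp(createtime) - unix_timestamp(prev_time) AS gap_seconds
FROM (
SELECT cookieid,
createtime,
LAG(createtime,1) OVER(PARTITION BY cookieid ORDER BY createtime) AS prev_time
FROM itcast_t4
) t;

The first row of each cookie has no previous visit, so prev_time and gap_seconds are NULL there.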
LEAD
The opposite of LAG.
LEAD(col, n, DEFAULT) returns the value of col from the nth row after the current row within the window.
The first argument is the column name, the second is the offset n (optional, default 1), and the third is the default value (returned when the nth following row does not exist; if omitted, NULL is returned).

SELECT cookieid,
createtime,
url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
LEAD(createtime,1,'1970-01-01 00:00:00') OVER(PARTITION BY cookieid ORDER BY createtime) AS next_1_time,
LEAD(createtime,2) OVER(PARTITION BY cookieid ORDER BY createtime) AS next_2_time
FROM itcast_t4;
FIRST_VALUE
Returns the first value in the partition, after sorting, up to the current row.
SELECT cookieid,
createtime,
url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS first1
FROM itcast_t4;
LAST_VALUE
Returns the last value in the partition, after sorting, up to the current row.
SELECT cookieid,
createtime,
url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1
FROM itcast_t4;
If you want the last value of the whole sorted partition, a workaround is needed:
SELECT cookieid,
createtime,
url,
ROW_NUMBER() OVER(PARTITION BY cookieid ORDER BY createtime) AS rn,
LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime) AS last1,
FIRST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime DESC) AS last2
FROM itcast_t4
ORDER BY cookieid,createtime;
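An alternative sketch, assuming your Hive version accepts a window frame with LAST_VALUE (the last3 alias is ours): extend the frame to the whole partition instead of reversing the sort order.

SELECT cookieid,
createtime,
url,
LAST_VALUE(url) OVER(PARTITION BY cookieid ORDER BY createtime ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS last3
FROM itcast_t4;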
Pay special attention to ORDER BY
If ORDER BY is not specified, the row order within the partition is undefined and the result can be wrong, for example:
SELECT cookieid,
createtime,
url,
FIRST_VALUE(url) OVER(PARTITION BY cookieid) AS first2
FROM itcast_t4;
Hive Analytic Window Functions (Part 5): GROUPING SETS, GROUPING__ID, CUBE, ROLLUP
These features are typically used in OLAP scenarios where metrics cannot simply be accumulated and must be computed by rolling up and drilling down across different dimensions, e.g. UV counted by hour, by day, and by month.
Data Preparation
2018-03,2018-03-10,cookie1
2018-03,2018-03-10,cookie5
2018-03,2018-03-12,cookie7
2018-04,2018-04-12,cookie3
2018-04,2018-04-13,cookie2
2018-04,2018-04-13,cookie4
2018-04,2018-04-16,cookie4
2018-03,2018-03-10,cookie2
2018-03,2018-03-10,cookie3
2018-04,2018-04-12,cookie5
2018-04,2018-04-13,cookie6
2018-04,2018-04-15,cookie3
2018-04,2018-04-15,cookie2
2018-04,2018-04-16,cookie1

CREATE TABLE itcast_t5 (
month STRING,
day STRING,
cookieid STRING
) ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
stored as textfile;

Load data:
load data local inpath '/root/hivedata/itcast_t5.dat' into table itcast_t5;
GROUPING SETS
GROUPING SETS is a convenient way to express several GROUP BY groupings in a single SQL statement.
It is equivalent to a UNION ALL of the GROUP BY result sets over the different dimensions.
GROUPING__ID indicates which grouping set a result row belongs to.
SELECT month,
day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM itcast_t5
GROUP BY month,day
GROUPING SETS (month,day)
ORDER BY GROUPING__ID;

GROUPING__ID shows which grouping set a row comes from; given the grouping sets month and day, 1 stands for month and 2 stands for day.

Equivalent to:

SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM itcast_t5 GROUP BY month
UNION ALL
SELECT NULL as month,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM itcast_t5 GROUP BY day;
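To keep only one of the grouping sets, wrap the query and filter on GROUPING__ID. A hedged sketch, assuming the encoding shown above (1 = month; the numbering differs in newer Hive versions) and with the subquery alias t being ours:

SELECT month,
uv
FROM (
SELECT month,
day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID AS gid
FROM itcast_t5
GROUP BY month,day
GROUPING SETS (month,day)
) t
WHERE t.gid = 1;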
Another example:
SELECT month,
day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM itcast_t5
GROUP BY month,day
GROUPING SETS (month,day,(month,day))
ORDER BY GROUPING__ID;

Equivalent to:

SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM itcast_t5 GROUP BY month
UNION ALL
SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM itcast_t5 GROUP BY day
UNION ALL
SELECT month,day,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM itcast_t5 GROUP BY month,day;
CUBE
Aggregates over all combinations of the GROUP BY dimensions.
SELECT month,
day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM itcast_t5
GROUP BY month,day
WITH CUBE
ORDER BY GROUPING__ID;

Equivalent to:

SELECT NULL,NULL,COUNT(DISTINCT cookieid) AS uv,0 AS GROUPING__ID FROM itcast_t5
UNION ALL
SELECT month,NULL,COUNT(DISTINCT cookieid) AS uv,1 AS GROUPING__ID FROM itcast_t5 GROUP BY month
UNION ALL
SELECT NULL,day,COUNT(DISTINCT cookieid) AS uv,2 AS GROUPING__ID FROM itcast_t5 GROUP BY day
UNION ALL
SELECT month,day,COUNT(DISTINCT cookieid) AS uv,3 AS GROUPING__ID FROM itcast_t5 GROUP BY month,day;
ROLLUP
ROLLUP is a subset of CUBE: it aggregates hierarchically, starting from the leftmost dimension.
For example, hierarchical aggregation by the month dimension:

SELECT month,
day,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM itcast_t5
GROUP BY month,day
WITH ROLLUP
ORDER BY GROUPING__ID;

--Swapping month and day gives hierarchical aggregation by the day dimension:

SELECT day,
month,
COUNT(DISTINCT cookieid) AS uv,
GROUPING__ID
FROM itcast_t5
GROUP BY day,month
WITH ROLLUP
ORDER BY GROUPING__ID;

(Here, aggregating by day and month gives the same result as aggregating by day alone, because day and month have a parent-child relationship; with other dimension combinations the results would differ.)