

Commonly used methods in pyspark.sql.functions (1)


spark.range

Spark's range() function generates a sequence of consecutive integers over a given range.
Specifically, range(start, end, step) takes three parameters:
start: the first value of the sequence.
end: the end of the sequence (exclusive).
step: the interval between consecutive values.

spark.range(0, 3).show()
+---+
| id|
+---+
|  0|
|  1|
|  2|
+---+
spark.range(0,3,2).show()
+---+
| id|
+---+
|  0|
|  2|
+---+
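range() also accepts an optional fourth argument, numPartitions (used later in the rand example as spark.range(0, 2, 1, 1)); a quick sketch, assuming the same spark session:

spark.range(0, 6, 1, 2).rdd.getNumPartitions()  # -> 2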

col (alias: column)

from pyspark.sql.functions import col

Here are some common usage examples of the col() function:

  1. Select a column: df.select(col("column_name"))
  2. Filter on a condition: df.filter(col("column_name") > 5)
  3. Create a new column: df.withColumn("new_column", col("column1") + col("column2"))
  4. Nest inside another function: df.withColumn("new_column", sqrt(col("column1")))

By using the col() function, you can apply all kinds of transformations and operations to DataFrame columns, such as selecting, filtering, and computing. It provides a convenient way to handle column-level operations while keeping the code readable and maintainable.
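As a quick, self-contained sketch of these patterns (the DataFrame and its column names column1/column2 are made up for illustration):

from pyspark.sql.functions import col, sqrt

df = spark.createDataFrame([(1, 4), (9, 16)], ["column1", "column2"])
df.filter(col("column1") > 5) \
  .withColumn("new_column", sqrt(col("column1"))) \
  .show()
+-------+-------+----------+
|column1|column2|new_column|
+-------+-------+----------+
|      9|     16|       3.0|
+-------+-------+----------+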

lit: create a constant column

from pyspark.sql.functions import lit
from pyspark.sql import functions as sf
df = spark.range(1)
df.select(lit(5).alias('height'), df.id).show()
+------+---+
|height| id|
+------+---+
|     5|  0|
+------+---+

df = spark.createDataFrame([[12,13],[12,13]],['age1','age2'])
df.show()
+----+----+
|age1|age2|
+----+----+
|  12|  13|
|  12|  13|
+----+----+

df.withColumn('age3',lit('11')).show()
+----+----+----+
|age1|age2|age3|
+----+----+----+
|  12|  13|  11|
|  12|  13|  11|
+----+----+----+

broadcast: broadcast a table

Used to broadcast a small table in a Spark SQL job in order to optimize join operations. When performing a join, if one of the tables is very small, this function can broadcast it to every node, so the join no longer needs to shuffle large amounts of data between nodes, which can significantly improve performance.

from pyspark.sql import types
from pyspark.sql.functions import broadcast
# column name is "value"
df = spark.createDataFrame([1, 2, 3, 3, 4], types.IntegerType())
# column name is "id"
df_small = spark.range(3)
# broadcast the small DataFrame
df_b = broadcast(df_small)
df.join(df_b, df.value == df_small.id).show()
+-----+---+
|value| id|
+-----+---+
|    1|  1|
|    2|  2|
+-----+---+
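To confirm the hint took effect, you can inspect the physical plan, which should contain a BroadcastHashJoin node; a minimal sketch reusing df and df_small from above (the exact plan text depends on the Spark version):

df.join(broadcast(df_small), df.value == df_small.id).explain()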

coalesce: merge columns (null)

It merges the given columns and returns the first non-null value (so the order of the arguments matters). If all of the columns are null, the result is null.

from pyspark.sql.functions import coalesce
cDf = spark.createDataFrame([(None, None), (1, None), (None, 2)], ("a", "b"))
cDf.show()
+----+----+
|   a|   b|
+----+----+
|NULL|NULL|
|   1|NULL|
|NULL|   2|
+----+----+

# merge a and b; when both are null, only null can be returned
cDf.select(coalesce(cDf["a"], cDf["b"])).show()
+--------------+
|coalesce(a, b)|
+--------------+
|          NULL|
|             1|
|             2|
+--------------+

cDf = spark.createDataFrame([(None, None), (1, None), (3, 2)], ("a", "b"))
cDf.show()
+----+----+
|   a|   b|
+----+----+
|null|null|
|   1|null|
|   3|   2|
+----+----+

# the argument order matters as well
cDf.select(coalesce(cDf["b"], cDf["a"])).show()
+--------------+
|coalesce(b, a)|
+--------------+
|          null|
|             1|
|             2|
+--------------+
# with the order changed, the first non-null value returned changes
cDf.select(coalesce(cDf["a"], cDf["b"])).show()
+--------------+
|coalesce(a, b)|
+--------------+
|          null|
|             1|
|             3|
+--------------+
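coalesce also combines naturally with lit to supply a constant fallback; a minimal sketch using the cDf currently in scope (rows where a is null get 0.0):

cDf.select('*', coalesce(cDf["a"], lit(0.0))).show()
# the coalesce column is 0.0, 1.0, 3.0 for the three rows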

isnan: check for NaN values

from pyspark.sql.functions import isnan
df = spark.createDataFrame([(1.0, float('nan')), (float('nan'), 2.0)], ("a", "b"))
df.show()
+---+---+
|  a|  b|
+---+---+
|1.0|NaN|
|NaN|2.0|
+---+---+
df.select("a", "b", isnan("a").alias("r1"), isnan(df.b).alias("r2")).show()
+---+---+-----+-----+
|  a|  b|   r1|   r2|
+---+---+-----+-----+
|1.0|NaN|false| true|
|NaN|2.0| true|false|
+---+---+-----+-----+

isnull: check for null (None) values

from pyspark.sql.functions import isnull
df = spark.createDataFrame([(1, None), (None, 2)], ("a", "b"))
df.show()
+----+----+
|   a|   b|
+----+----+
|   1|NULL|
|NULL|   2|
+----+----+
df.select("a", "b", isnull("a").alias("r1"), isnull(df.b).alias("r2")).show()
+----+----+-----+-----+
|   a|   b|   r1|   r2|
+----+----+-----+-----+
|   1|NULL|false| true|
|NULL|   2| true|false|
+----+----+-----+-----+
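isnull (like isnan) can also be used directly inside filter to keep or drop incomplete rows; a minimal sketch with the df above:

df.filter(isnull("a")).show()  # keeps only the row where a is null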

nanvl: merge columns (NaN)

Returns col1 if it is not NaN, otherwise returns col2.
Both inputs should be floating-point columns (DoubleType or FloatType).

df = spark.createDataFrame([(1.0, float('nan')), (float('nan'), 2.0)], ("a", "b"))
df.show()
+---+---+
|  a|  b|
+---+---+
|1.0|NaN|
|NaN|2.0|
+---+---+
from pyspark.sql.functions import nanvl
df.select(nanvl("a", "b").alias("r1"), nanvl(df.a, df.b).alias("r2")).show()
+---+---+
| r1| r2|
+---+---+
|1.0|1.0|
|2.0|2.0|
+---+---+

udf: user-defined functions

udf is the PySpark tool for defining user-defined functions (UDFs). A UDF lets you process DataFrame data with an ordinary Python function. UDFs usually perform worse than the built-in Spark functions because they add Python interpreter overhead, so use them only when no built-in function will do.

from pyspark.sql.functions import col, udf
from pyspark.sql.types import IntegerType
# example input: a name column plus a "height,weight" string column
df1 = spark.createDataFrame(
    [("赵四", "165,70"), ("刘能", "163,75"), ("广坤", "167,65"), ("浩哥", "176,60")],
    ["name", "height_weight"])
# define the UDF
get_weight = udf(lambda s: int(s.split(",")[1]), IntegerType())
df2 = df1.withColumn("weight", get_weight(col("height_weight")))
df2.show()
+----+-------------+------+
|name|height_weight|weight|
+----+-------------+------+
|赵四|       165,70|    70|
|刘能|       163,75|    75|
|广坤|       167,65|    65|
|浩哥|       176,60|    60|
+----+-------------+------+
# define the UDF with the decorator syntax
@udf
def ldsx(x):
    return int(x.split(",")[1])

df1.withColumn("weight", ldsx(col("height_weight"))).show()
+----+-------------+------+
|name|height_weight|weight|
+----+-------------+------+
|赵四|       165,70|    70|
|刘能|       163,75|    75|
|广坤|       167,65|    65|
|浩哥|       176,60|    60|
+----+-------------+------+
df1.withColumn("weight", ldsx(col("height_weight"))).printSchema()
root
 |-- name: string (nullable = true)
 |-- height_weight: string (nullable = true)
 |-- weight: string (nullable = true)
# decorator with a returnType argument
@udf(returnType=IntegerType())
def ldsx(x):
    return int(x.split(",")[1])
df1.withColumn("weight", ldsx(f.col("height_weight"))).printSchema()
root
 |-- name: string (nullable = true)
 |-- height_weight: string (nullable = true)
 |-- weight: integer (nullable = true)

Example:

df = spark.createDataFrame([(1, "John Doe", 21)], ("id", "name", "age"))
df.show()
+---+--------+---+
| id|    name|age|
+---+--------+---+
|  1|John Doe| 21|
+---+--------+---+

slen = udf(lambda s: len(s), IntegerType())
@udf
def to_upper(s):
    if s is not None:
        return s.upper()

@udf(returnType=IntegerType())
def add_one(x):
    if x is not None:
        return x + 1

df.select(slen("name").alias("slen(name)"), to_upper("name"), add_one("age")).show()
+----------+--------------+------------+
|slen(name)|to_upper(name)|add_one(age)|
+----------+--------------+------------+
|         8|      JOHN DOE|          22|
+----------+--------------+------------+
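UDFs can also be registered on the SparkSession so they are callable from SQL; a minimal sketch (the name udf_len is just an illustrative choice):

from pyspark.sql.types import IntegerType

spark.udf.register("udf_len", lambda s: len(s), IntegerType())
spark.sql("SELECT udf_len('John Doe') AS n").show()  # n = 8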

rand: random column; randn: random column from the standard normal distribution

Parameters: seed (int), the seed of the random generator; with a fixed seed the generated values are reproducible. rand returns values uniformly distributed in [0.0, 1.0).

Returns: Column

# fixed seed
spark.range(0, 2, 1, 1).withColumn('rand', sf.rand(seed=42) * 3).show()
+---+------------------+
| id|              rand|
+---+------------------+
|  0|1.8575681106759028|
|  1|1.5288056527339444|
+---+------------------+

# without a seed, the values differ on every run
spark.range(0, 2, 1, 1).withColumn('rand', sf.rand() *3).show()
+---+-------------------+
| id|               rand|
+---+-------------------+
|  0|0.13415045833565098|
|  1| 1.0979329499109334|
+---+-------------------+
spark.range(0, 2, 1, 1).withColumn('rand', sf.rand() *3).show()
+---+-------------------+
| id|               rand|
+---+-------------------+
|  0| 0.2658713153576896|
|  1|0.15252418082890773|
+---+-------------------+
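The heading also mentions randn, which samples from the standard normal distribution (mean 0, standard deviation 1) instead of the uniform distribution; a minimal sketch (values differ per run unless the seed is fixed):

spark.range(0, 2, 1, 1).withColumn('randn', sf.randn(seed=42)).show()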

spark_partition_id: get the partition id

df.repartition(50).withColumn('pid',sf.spark_partition_id()).show()
+---+---+
| id|pid|
+---+---+
|  5|  0|
| 69|  0|
| 37|  1|
| 77|  1|
| 25|  2|
| 59|  2|
| 31|  3|
| 81|  3|
| 30|  4|
| 95|  4|
| 23|  5|
| 67|  5|
| 32|  6|
| 90|  6|
|  6|  7|
| 52|  7|
| 42|  8|
| 61|  8|
| 16|  9|
| 66|  9|
+---+---+
only showing top 20 rows
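A common use of spark_partition_id is checking how evenly the rows are spread across partitions; a minimal sketch reusing the same df:

df.repartition(50).withColumn('pid', sf.spark_partition_id()) \
  .groupBy('pid').count().orderBy('pid').show()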

when: used together with otherwise

If Column.otherwise() is not invoked, None is returned for unmatched conditions.

from pyspark.sql.functions import when
df = spark.createDataFrame(
     [(2, "Alice"), (5, "Bob")], ["age", "name"])
df.show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  5|  Bob|
+---+-----+

# filter with a condition; when when() is not paired with otherwise, null is used for unmatched rows
df.select(df.name, when(df.age > 3, 1)).show()
+-----+------------------------------+
| name|CASE WHEN (age > 3) THEN 1 END|
+-----+------------------------------+
|Alice|                          null|
|  Bob|                             1|
+-----+------------------------------+

# use otherwise to provide a value in place of null
df.select(df.name, when(df.age > 3, 1).otherwise(0)).show()
+-----+-------------------------------------+
| name|CASE WHEN (age > 3) THEN 1 ELSE 0 END|
+-----+-------------------------------------+
|Alice|                                    0|
|  Bob|                                    1|
+-----+-------------------------------------+
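when calls can also be chained on the returned Column to express several branches before the final otherwise; a minimal sketch:

df.select(df.name,
          when(df.age > 3, 'old').when(df.age > 1, 'young').otherwise('baby').alias('tag')
         ).show()
# Alice (age 2) -> young, Bob (age 5) -> old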

expr: convert a string into an expression (equivalent to df.selectExpr)

df = spark.createDataFrame([["Alice"], ["Bob"]], ["name"])
df.select("name", expr("length(name)")).show()
+-----+------------+
| name|length(name)|
+-----+------------+
|Alice|           5|
|  Bob|           3|
+-----+------------+

df.selectExpr('length(name) as name','name as new_name').show()
+----+--------+
|name|new_name|
+----+--------+
|   5|   Alice|
|   3|     Bob|
+----+--------+
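Because expr returns a Column, it can be used anywhere a Column is expected, for example inside filter; a minimal sketch:

df.filter(expr("length(name) > 3")).show()  # keeps only Alice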

greatest: get the maximum value across multiple columns

Returns the greatest value of the given columns, skipping null values. This function takes at least 2 columns. It returns null only if all parameters are null.

df = spark.createDataFrame([(1, 4, 3)], ['a', 'b', 'c'])
df.show()
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  1|  4|  3|
+---+---+---+
# compare three columns
df.select(sf.greatest(df.a, df.b, df.c).alias("greatest")).show()
+--------+
|greatest|
+--------+
|       4|
+--------+
# compare two columns
df.select(sf.greatest(df.a, df.c).alias("greatest")).show()
+--------+
|greatest|
+--------+
|       3|
+--------+
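To see the null-skipping behaviour mentioned above, you can mix in a null literal (the cast is only there to give it a concrete type); a minimal sketch:

df.select(sf.greatest(df.a, sf.lit(None).cast('bigint'), df.c).alias('greatest')).show()
# -> 3, the null argument is skipped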

least: get the minimum value across multiple columns

df = spark.createDataFrame([(1, 4, 3)], ['a', 'b', 'c'])
df.select(sf.least(df.a, df.b, df.c).alias("least")).show()

+-----+
|least|
+-----+
|    1|
+-----+

abs: absolute value

Parameter: a column

df = spark.range(1)
df.select(sf.abs(sf.lit(-1))).show()
+-------+
|abs(-1)|
+-------+
|      1|
+-------+

sqrt: compute the square root of the given floating-point value

Parameter: a column

df = spark.range(1)
df.select(sf.sqrt(sf.lit(4))).show()
+-------+
|SQRT(4)|
+-------+
|    2.0|
+-------+

bin: return the binary value of the given column as a string

df = spark.createDataFrame([(5,)], ['v2'])
df.select(sf.bin(df.v2).alias('c')).show()
+---+
|  c|
+---+
|101|
+---+

power: the first column is the base, the second is the exponent; returns the result

df.select(sf.pow(sf.lit(3), sf.lit(2))).show()
+-----------+
|POWER(3, 2)|
+-----------+
|        9.0|
+-----------+

This article is reposted from: https://blog.csdn.net/weixin_43322583/article/details/142869653
Copyright belongs to the original author 百流. If there is any infringement, please contact us and it will be removed.
