Problem description
df1 = a large dataset
df2 = df1.sample(tiny_fraction)
df1 is written to disk as Parquet with Snappy compression (~75 GB)
df2 is written to disk as Parquet with Snappy compression (~90 GB)
df3 = read back df2's saved Parquet
# the SQL context is set to use Snappy compression
df3.repartition(1).save() -> ~100 MB
df4 = read back df1's saved Parquet
df4.sample(tiny_fraction).repartition(1).save() -> ~100 MB
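For reference, the steps above can be reproduced with a minimal PySpark sketch like the one below. The paths, the value of tiny_fraction, and the source of df1 are illustrative assumptions, not the asker's actual code; the reported sizes are taken from the question.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("parquet-sample-size")
    # Snappy is already Spark's default Parquet codec; set explicitly as in the question
    .config("spark.sql.parquet.compression.codec", "snappy")
    .getOrCreate()
)

tiny_fraction = 0.001                        # placeholder sampling fraction
df1 = spark.read.parquet("/data/df1_src")    # hypothetical source of the large dataset
df2 = df1.sample(fraction=tiny_fraction)

df1.write.mode("overwrite").parquet("/data/df1.parquet")  # ~75 GB in the question
df2.write.mode("overwrite").parquet("/data/df2.parquet")  # ~90 GB in the question

df3 = spark.read.parquet("/data/df2.parquet")
df3.repartition(1).write.mode("overwrite").parquet("/data/df3_single.parquet")  # ~100 MB

df4 = spark.read.parquet("/data/df1.parquet")
(df4.sample(fraction=tiny_fraction)
    .repartition(1)
    .write.mode("overwrite")
    .parquet("/data/df4_single.parquet"))    # ~100 MB
```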
The sampled DataFrame (when not repartitioned) ends up larger on disk than the original dataset. Does anyone know why this happens?
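One way to quantify the observation is to compare the number and size of the Parquet part files produced by each write. The sketch below assumes the output directories from the snippet above sit on a local or mounted filesystem; on HDFS or S3 the equivalent CLI (e.g. `hdfs dfs -du -h <path>`) would be used instead.

```python
from pathlib import Path

def parquet_footprint(path: str) -> None:
    """Print per-file sizes and the total for a Parquet output directory."""
    files = sorted(Path(path).glob("part-*"))
    total = 0
    for f in files:
        size = f.stat().st_size
        total += size
        print(f"{f.name:60s} {size / 1e6:10.1f} MB")
    print(f"{len(files)} part files, {total / 1e9:.2f} GB total")

parquet_footprint("/data/df2.parquet")         # the ~90 GB sampled output
parquet_footprint("/data/df3_single.parquet")  # the ~100 MB single-partition output
```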
Solution
No effective solution to this problem has been found yet.