pandas: putting timestamps into TimeGrouper frequency buckets

Problem description

I have a pandas DataFrame with a DateTime index. After grouping it with pd.Grouper(freq='360Min'), how do I join the result back onto the original timestamps? An equi-join on timestamp == bucket won't work, because the bucket labels are not equal to the raw timestamps. Is there a convenience function for this? Should I use an asof join? Or do I have to extract the hour by hand and then try to match on it?

Example:

Source to create the example data:
import pandas as pd

df = pd.DataFrame(
    {
        "Publish date": [
            pd.Timestamp("2000-01-02"),
            pd.Timestamp("2000-01-02"),
            pd.Timestamp("2000-01-09"),
            pd.Timestamp("2000-01-16"),
        ],
        "ID": [0, 1, 2, 3],
        "Price": [10, 20, 30, 40],
    }
)

which gives:

  Publish date  ID  Price
0   2000-01-02   0     10
1   2000-01-02   1     20
2   2000-01-09   2     30
3   2000-01-16   3     40

I want to aggregate at an arbitrary frequency (not just month, day, or hour), for example 1 month:

agg_result = (
    df.groupby(pd.Grouper(key="Publish date", freq="1M"))
      .agg([pd.Series.mean, pd.Series.median])
      .reset_index()
)
agg_result.columns = ['_'.join(col).strip() for col in agg_result.columns.values]
agg_result.columns = ['Publish date month', 'ID_mean', 'ID_median', 'Price_mean', 'Price_median']
print(agg_result)
Publish date month  ID_mean  ID_median  Price_mean  Price_median
0         2000-01-31      1.5        1.5          25            25

How do I get an equi-join to work again, i.e. map the original timestamps onto their buckets at the same arbitrary frequency?

In other words, in terms of the example code, how do I get:

df['Publish date month'] = df['Publish date'].apply(<magic transform to the same frequency bucket>)
df.merge(agg_result, on=['Publish date month'])

to work? I.e. how do I define the transform onto the correct bucket?
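
For the monthly case specifically I can imagine something like the sketch below (assuming the '1M' buckets are labelled by their month end, as the output above suggests), but I don't see how to generalise it to an arbitrary frequency:

# Roll each timestamp forward to the month end, which matches the '1M' bucket label
df['Publish date month'] = df['Publish date'] + pd.offsets.MonthEnd(0)
df.merge(agg_result, on=['Publish date month'])

# For an intraday frequency such as '360Min' the buckets are labelled by their
# left edge (with the default Grouper settings), so flooring would be the analogue:
# df['Publish date bucket'] = df['Publish date'].dt.floor('360min')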

Solution

Edit:

The easiest way to identify the original rows corresponding to each group should be:

gb = df.groupby(pd.Grouper(key="Publish date", freq="1M"))
dict(list(gb['Publish date']))  # {bucket label -> original timestamps falling into that bucket}

You can then use this to join any information back onto the original table.
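
For example, a minimal sketch of that join, inverting the mapping above into timestamp -> bucket (reusing agg_result and the 'Publish date month' column name from the question):

# Invert {bucket label -> timestamps in that bucket} into {timestamp -> bucket label}
bucket_of = {
    ts: bucket
    for bucket, dates in dict(list(gb['Publish date'])).items()
    for ts in dates
}

# Tag every original row with its bucket label, then join the aggregates back
df['Publish date month'] = df['Publish date'].map(bucket_of)
df.merge(agg_result, on='Publish date month')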


Could you just join on two intermediate columns?

df['Publish date'].dt.month

df.groupby(pd.Grouper(key="Publish date",freq="1M")).agg([pd.Series.mean,pd.Series.median]).index.month

Like this:

results = df.groupby(pd.Grouper(key="Publish date", freq="1M")).agg([pd.Series.mean, pd.Series.median])

results.columns = ['-'.join(col[::-1]).strip() for col in results.columns]

df['month'] = df['Publish date'].dt.month

results['month'] = results.index.month
results.merge(df)

I would use the GroupBy.transform method:

import pandas as pd

df = pd.DataFrame(
    {
        "Publish date": [
            pd.Timestamp("2000-01-02"),
            pd.Timestamp("2000-01-02"),
            pd.Timestamp("2000-01-09"),
            pd.Timestamp("2000-01-16"),
        ],
        "ID": [0, 1, 2, 3],
        "Price": [10, 20, 30, 40],
    }
)

g = df.groupby(pd.Grouper(key="Publish date", freq="1M"))

(
    df.join(g.transform('mean'), rsuffix='_mean')
      .join(g.transform('median'), rsuffix='_median')
)

which returns:

  Publish date  ID  Price  ID_mean  Price_mean  ID_median  Price_median
0   2000-01-02   0     10      1.5          25        1.5            25
1   2000-01-02   1     20      1.5          25        1.5            25
2   2000-01-09   2     30      1.5          25        1.5            25
3   2000-01-16   3     40      1.5          25        1.5            25

You can also use pandas.concat instead of DataFrame.join:

methods = ['mean', 'median', 'std', 'min', 'max']

pd.concat(
    [df, *[g.transform(m).add_suffix(f'_{m}') for m in methods]],
    axis='columns',
)

which gives you:

  Publish date  ID  Price  ID_mean  Price_mean  ID_median  Price_median    ID_std  Price_std  ID_min  Price_min  ID_max  Price_max
0   2000-01-02   0     10      1.5          25        1.5            25  1.290994  12.909944       0         10       3         40
1   2000-01-02   1     20      1.5          25        1.5            25  1.290994  12.909944       0         10       3         40
2   2000-01-09   2     30      1.5          25        1.5            25  1.290994  12.909944       0         10       3         40
3   2000-01-16   3     40      1.5          25        1.5            25  1.290994  12.909944       0         10       3         40