Problem description
I have defined the following "clipping" transform:
from pymc3.distributions.transforms import ElemwiseTransform
import aesara.tensor as at
import numpy as np

class MvClippingTransform(ElemwiseTransform):
    name = "MvClippingTransform"

    def __init__(self, lower=None, upper=None):
        if lower is None:
            lower = float("-inf")
        if upper is None:
            upper = float("inf")
        self.lower = lower
        self.upper = upper

    def backward(self, x):
        return x

    def forward(self, x):
        return at.clip(x, self.lower, self.upper)

    def forward_val(self, x, point=None):
        return np.clip(x, self.lower, self.upper)

    def jacobian_det(self, x):
        # The backward transformation of clipping as I've defined it is the
        # identity function (perhaps that will change). The Jacobian
        # determinant of the identity is 1, so log(abs(1)) -> 0.
        return at.zeros(x.shape)
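As a quick sanity check of the intended per-element behavior, here is a pure-NumPy sketch (independent of PyMC3; the bounds and input values are illustrative):

```python
import numpy as np

# Mimic forward_val: pin each coordinate into [lower, upper]
lower, upper = float("-inf"), 1.0
x = np.array([-2.0, 0.5, 3.0])
clipped = np.clip(x, lower, upper)
print(clipped)  # -> [-2.   0.5  1. ]  (only values above 1.0 are clipped)

# The backward map is the identity, whose Jacobian determinant is 1,
# so the elementwise log-det contribution is an array of zeros.
log_jac_det = np.zeros(x.shape)
print(log_jac_det)  # -> [0. 0. 0.]
```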
I have applied it to an MvNormal with an LKJ Cholesky prior, as shown below:
import importlib, clipping; importlib.reload(clipping)
import pymc3 as pm

with pm.Model() as m:
    # Taken from https://docs.pymc.io/pymc-examples/examples/case_studies/LKJ.html
    chol, corr, stds = pm.LKJCholeskyCov(
        # compute_corr=True also unpacks the Cholesky matrix in the returns
        # (otherwise we'd have to unpack it ourselves)
        "chol", n=3, eta=2.0, sd_dist=pm.Exponential.dist(1.0), compute_corr=True
    )
    cov = pm.Deterministic("cov", chol.dot(chol.T))
    μ = pm.Uniform("μ", -10, 10, shape=3, testval=samples.mean(axis=0))
    # Renamed from `clipping` to avoid shadowing the module of the same name
    clip_transform = clipping.MvClippingTransform(lower=None, upper=upper_truncation)
    mv = pm.MvNormal("mv", mu=μ, chol=chol, shape=3,
                     transform=clip_transform, observed=samples)
    trace = pm.sample(random_seed=44, init="adapt_diag",
                      return_inferencedata=True, target_accept=0.9)
    ppc = pm.sample_posterior_predictive(
        trace, var_names=["mv"], random_seed=42
    )
(upper_truncation is a numpy array.)
I generate the simulated data by defining a covariance matrix for the multivariate normal and applying the clipping to the draws:
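The data-generation code itself is not shown in the post; a minimal sketch of what it could look like follows (mu_true, cov_true, and the upper_truncation values here are illustrative, not the originals):

```python
import numpy as np

rng = np.random.default_rng(44)

mu_true = np.zeros(3)
cov_true = np.array([[1.0, 0.5, 0.2],
                     [0.5, 1.0, 0.3],
                     [0.2, 0.3, 1.0]])
upper_truncation = np.array([0.5, 0.5, 0.5])  # illustrative upper bounds

# Draw from the multivariate normal, then clip each coordinate at its bound
raw = rng.multivariate_normal(mu_true, cov_true, size=1000)
samples = np.clip(raw, None, upper_truncation)
```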
But when I sample from the PPC, I get this:
Even if I define the clipping bounds as [0, 0], it still has no effect.
Why doesn't the PPC (or the parameter sampling, for that matter) reflect the clipping transform?