Problem description
I am trying to re-implement the example from this site, where they have the following code:
class DKLModel(gpytorch.Module):
    def __init__(self, feature_extractor, num_dim, grid_bounds=(-10., 10.)):
        super(DKLModel, self).__init__()
        self.feature_extractor = feature_extractor
        self.gp_layer = GaussianProcessLayer(num_dim=num_dim, grid_bounds=grid_bounds)
        self.grid_bounds = grid_bounds
        self.num_dim = num_dim

    def forward(self, x):
        features = self.feature_extractor(x)
        features = gpytorch.utils.grid.scale_to_bounds(features, self.grid_bounds[0], self.grid_bounds[1])
        # This next line makes it so that we learn a GP for each feature
        features = features.transpose(-1, -2).unsqueeze(-1)
        res = self.gp_layer(features)
        return res

model = DKLModel(feature_extractor, num_dim=num_features)
likelihood = gpytorch.likelihoods.SoftmaxLikelihood(num_features=model.num_dim, num_classes=num_classes)

if torch.cuda.is_available():
    model = model.cuda()
    likelihood = likelihood.cuda()
They mention that they learn a GP for each feature, but I want all features to have the same Gaussian distribution. Does anyone know how to do that?
Solution
No confirmed solution to this problem has been posted yet.