Problem description
I am trying to prune my deep-learning model with global pruning. The original, unpruned model is about 77.5 MB. After pruning, however, when I save the model its size is exactly the same as the original. Can someone help me with this?
Below is the pruning code:
import torch
import torch.nn.utils.prune as prune
parameters_to_prune = (
    (model.encoder[0], 'weight'),
    (model.up_conv1[0], 'weight'),
    (model.up_conv2[0], 'weight'),
    (model.up_conv3[0], 'weight'),
)
print(parameters_to_prune)
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.2,
)
print(
    "Sparsity in Encoder.weight: {:.2f}%".format(
        100. * float(torch.sum(model.encoder[0].weight == 0))
        / float(model.encoder[0].weight.nelement())
    )
)
print(
    "Sparsity in up_conv1.weight: {:.2f}%".format(
        100. * float(torch.sum(model.up_conv1[0].weight == 0))
        / float(model.up_conv1[0].weight.nelement())
    )
)
print(
    "Sparsity in up_conv2.weight: {:.2f}%".format(
        100. * float(torch.sum(model.up_conv2[0].weight == 0))
        / float(model.up_conv2[0].weight.nelement())
    )
)
print(
    "Sparsity in up_conv3.weight: {:.2f}%".format(
        100. * float(torch.sum(model.up_conv3[0].weight == 0))
        / float(model.up_conv3[0].weight.nelement())
    )
)
print(
    "Global sparsity: {:.2f}%".format(
        100. * float(
            torch.sum(model.encoder[0].weight == 0)
            + torch.sum(model.up_conv1[0].weight == 0)
            + torch.sum(model.up_conv2[0].weight == 0)
            + torch.sum(model.up_conv3[0].weight == 0)
        )
        / float(
            model.encoder[0].weight.nelement()
            + model.up_conv1[0].weight.nelement()
            + model.up_conv2[0].weight.nelement()
            + model.up_conv3[0].weight.nelement()
        )
    )
)
**Setting Pruning to Permanent**
prune.remove(model.encoder[0], "weight")
prune.remove(model.up_conv1[0], "weight")
prune.remove(model.up_conv2[0], "weight")
prune.remove(model.up_conv3[0], "weight")
**Saving the model**
PATH = r"C:\PrunedNet.pt"
torch.save(model.state_dict(), PATH)
Solution
Pruning applied this way does not change the model size. If you have a tensor, say:
[1., 2., 3., 4., 5., 6., 7., 8.]
and you prune 50% of the data, you get, for example:
[1., 0., 3., 0., 5., 0., 7., 0.]
You will still have 8 float values, and their size on disk will be the same.
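A minimal sketch of this point, using a hypothetical standalone `nn.Linear` layer rather than the model from the question: unstructured pruning just zeroes entries, so the element count (and thus the saved size) is unchanged.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Small standalone layer for illustration
layer = nn.Linear(8, 8)
n_elements_before = layer.weight.nelement()

# Zero out 50% of the weights by L1 magnitude, then make it permanent
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")

# Half the entries are now exactly zero...
sparsity = float(torch.sum(layer.weight == 0)) / layer.weight.nelement()

# ...but the tensor still stores the same number of float values
print(f"sparsity: {sparsity:.0%}, elements: {layer.weight.nelement()}")
```

The weight tensor is still a dense 8x8 block of floats, so `torch.save` writes the same number of bytes as before pruning.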
When does pruning reduce model size?
- When we save the weights in a sparse format, which only pays off at high sparsity (e.g. only 10% of the elements non-zero)
- When we actually remove something (e.g. a kernel in Conv2d can be deleted if its weights are zero or negligible)
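The first bullet can be sketched as follows; this is an illustrative comparison, not part of the original answer, using PyTorch's COO sparse format via `Tensor.to_sparse`:

```python
import io
import torch

# A 1000x1000 tensor where only 10% of the rows are non-zero
dense = torch.zeros(1000, 1000)
dense[:100, :] = torch.randn(100, 1000)

def saved_size(t):
    """Serialize a tensor with torch.save and return the byte count."""
    buf = io.BytesIO()
    torch.save(t, buf)
    return buf.tell()

dense_bytes = saved_size(dense)               # stores all 1M floats
sparse_bytes = saved_size(dense.to_sparse())  # stores only indices + non-zero values
print(dense_bytes, sparse_bytes)
```

Note that the COO format stores 64-bit indices alongside each non-zero value, so the sparsity has to be substantial before the sparse file is actually smaller than the dense one.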
Otherwise it will not help. Have a look at related projects that let you do this without writing the code yourself, e.g. Torch-Pruning.
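The second bullet, physically deleting a zeroed kernel, can be sketched like this. This is a hand-rolled illustration (real structured-pruning tools such as Torch-Pruning also fix up the downstream layers, which this sketch does not):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)
with torch.no_grad():
    # Pretend output kernel 2 was pruned to zero
    conv.weight[2].zero_()
    conv.bias[2].zero_()

# Keep only the kernels that are not entirely zero
keep = [i for i in range(conv.out_channels)
        if conv.weight[i].abs().sum() > 0]

# Build a genuinely smaller layer and copy the surviving kernels into it
smaller = nn.Conv2d(3, len(keep), kernel_size=3)
with torch.no_grad():
    smaller.weight.copy_(conv.weight[keep])
    smaller.bias.copy_(conv.bias[keep])

print(conv.weight.nelement(), smaller.weight.nelement())  # the new layer has fewer parameters
```

Unlike masking, this actually shrinks the parameter tensors, so the saved state dict is smaller; any layer consuming this output would need its input channels reduced to match.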