Cannot import name 'toimage' from 'scipy.misc' / how to switch to Image.fromarray

Problem description

I am an amateur astrophotographer porting StarNet++ (https://github.com/nekitmm/starnet) from TensorFlow 1 to TensorFlow 2. So far tf_upgrade_v2 has done a good job and its output is self-explanatory. Now I have hit a problem when saving the resulting array as an image: the old code uses toimage from scipy.misc, which has since been removed from SciPy.

Any help with integrating Image.fromarray into this code would be greatly appreciated. Thanks!

import numpy as np
import tensorflow as tf
from PIL import Image as img
import matplotlib.pyplot as plt
from scipy.misc import toimage
import matplotlib
import sys
import time
import model
import starnet_utils

WINDOW_SIZE = 256                      # Size of the image fed to net. Do not change until you know what you are doing! Default is 256
                                       # and changing this will force you to train the net anew.

def transform(image,stride):
    
    # placeholders for tensorflow
    X = tf.compat.v1.placeholder(tf.float32,shape = [None,WINDOW_SIZE,WINDOW_SIZE,3],name = "X")
    Y = tf.compat.v1.placeholder(tf.float32,name = "Y")

    # create model
    train,avers,outputs = model.model(X,Y)
    
    #initialize variables
    init = tf.compat.v1.global_variables_initializer()
    
    # create saver instance to load model parameters
    saver = tf.compat.v1.train.Saver()

    with tf.compat.v1.Session() as sess:        
        # initialize all variables and start training
        sess.run(init)
        
        # restore current state of the model
        print("Restoring prevIoUs state of the model...")
        saver.restore(sess,"./model.ckpt")
        print("Done!")
        
        # read input image
        print("opening input image...")
        input = np.array(img.open(image),dtype = np.float32)
        print("Done!")
        
        # rescale to [-1,1]
        input /= 255
        # backup to use for mask
        backup = np.copy(input)
        input = input * 2 - 1
        
        
        # Now some tricky magic
        # image size is unlikely to be multiple of stride and hence we need to pad the image and
        # also we need some additional padding to allow offsets on sides of the image
        offset = int((WINDOW_SIZE - stride) / 2)
        
        # get size of the image and calculate numbers of iterations needed to transform it
        # given stride and taking into account that we will pad it a bit later (+1 comes from that)
        h,w,_ = input.shape
        ith = int(h / stride) + 1
        itw = int(w / stride) + 1
        
        # calculate how much we need to add to make image sizes multiples of stride
        dh = ith * stride - h
        dw = itw * stride - w
        
        # pad image using parts of the image itself and values calculated above
        input = np.concatenate((input,input[(h - dh) :,:,:]),axis = 0)
        input = np.concatenate((input,input[:,(w - dw) :,:]),axis = 1)
        
        # get image size again and pad to allow offsets on all four sides of the image
        h,w,_ = input.shape
        input = np.concatenate((input,input[(h - offset) :,:,:]),axis = 0)
        input = np.concatenate((input[: offset,:,:],input),axis = 0)
        input = np.concatenate((input,input[:,(w - offset) :,:]),axis = 1)
        input = np.concatenate((input[:,: offset,:],input),axis = 1)
        
        # copy input image to output
        output = np.copy(input)
        
        # helper array just to add fourth dimension to net input
        tmp = np.zeros((1,WINDOW_SIZE,WINDOW_SIZE,3),dtype = np.float32)
        
        # here goes
        for i in range(ith):
            for j in range(itw):
                print('Transforming input image... %d%%\r' % int((itw * i + j + 1) * 100 / (ith * itw)))
                
                x = stride * i
                y = stride * j
                
                # write piece of input image to tmp array
                tmp[0] = input[x : x + WINDOW_SIZE,y : y + WINDOW_SIZE,:]
                
                # transform
                result = sess.run(outputs,feed_dict = {X:tmp})
                
                # write transformed array to output
                output[x + offset : x + stride + offset,y + offset : y + stride + offset,:] = result[0,offset : stride + offset,offset : stride + offset,:]
        print("Transforming input image... Done!")
        
        # rescale back to [0,1]
        output = (output + 1) / 2
        
        # leave only necessary part,without pads added earlier
        output = output[offset : - (offset + dh),offset : - (offset + dw),:]
        
        print("Saving output image...")
        toimage(output * 255,cmin = 0,cmax = 255).save('./' + image + '_starless.tif')
        print("Done!")
        
        print("Saving mask...")
        # mask showing areas that were changed significantly
        mask = (((backup * 255).astype(np.int_) - (output * 255).astype(np.int_)) > 25).astype(np.int_)
        mask = mask.max(axis = 2,keepdims = True)
        mask = np.concatenate((mask,mask,mask),axis = 2)
        toimage(mask * 255,cmax = 255).save('./' + image + '_mask.tif')
        print("Done!")

Solution

No confirmed fix for this question has been posted yet.
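As a starting point, here is a minimal, untested sketch of how the two toimage(...) calls at the end of transform() might be replaced with PIL's Image.fromarray. scipy.misc.toimage(arr, cmin = 0, cmax = 255) mapped the array into the 0-255 range and returned a PIL image, so the closest drop-in replacement is to clip, cast to uint8, and build the image directly. The helper name save_uint8_image below is made up for illustration and is not part of the original script:

# hypothetical helper, assuming numpy as np and PIL Image imported as img (as at the top of the script)
def save_uint8_image(arr, path):
    # clip to [0, 255] and cast to 8-bit, mirroring what toimage(..., cmin = 0, cmax = 255) did here
    data = np.clip(arr, 0, 255).astype(np.uint8)
    img.fromarray(data).save(path)

# the two save calls in transform() would then become:
save_uint8_image(output * 255, './' + image + '_starless.tif')
save_uint8_image(mask * 255, './' + image + '_mask.tif')

This reuses the imports already present in the script and keeps the 8-bit TIFF output of the original toimage calls; nothing else in transform() should need to change for the save step.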
