How to solve a .MPS file using the latest IBM Watson Studio API

Problem description

I am trying to migrate a utility that is currently broken due to breaking changes; it solves .mps files using IBM's API.
The original code uses an empty model.tar.gz file, creates a deployment, and passes the .mps file to a new job.
The (Python) code looks like this:

import tarfile
tar = tarfile.open("model.tar.gz","w:gz")
tar.close()

test_Metadata = {
    client.repository.ModelMetaNames.NAME: "Test",
    client.repository.ModelMetaNames.DESCRIPTION: "Model for Test",
    client.repository.ModelMetaNames.TYPE: "do-cplex_12.9",
    client.repository.ModelMetaNames.RUNTIME_UID: "do_12.9"
}

model_details = client.repository.store_model(model='model.tar.gz', meta_props=test_Metadata)
model_uid = client.repository.get_model_uid(model_details)

n_nodes = 1
meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: "Test Deployment " + str(n_nodes),
    client.deployments.ConfigurationMetaNames.DESCRIPTION: "Test Deployment",
    client.deployments.ConfigurationMetaNames.BATCH: {},
    client.deployments.ConfigurationMetaNames.COMPUTE: {'name': 'S', 'nodes': n_nodes}
}

deployment_details = client.deployments.create(model_uid, meta_props=meta_props)
deployment_uid = client.deployments.get_uid(deployment_details)

solve_payload = {
    client.deployments.DecisionOptimizationMetaNames.SOLVE_PARAMETERS: {
        'oaas.logAttachmentName': 'log.txt',
        'oaas.logTailEnabled': 'true',
        'oaas.resultsFormat': 'JSON'
    },
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA_REFERENCES: [
        {
            'id': 'test.mps',
            'type': 's3',
            'connection': {
                'endpoint_url': COS_ENDPOINT,
                'access_key_id': cos_credentials['cos_hmac_keys']["access_key_id"],
                'secret_access_key': cos_credentials['cos_hmac_keys']["secret_access_key"]
            },
            'location': {
                'bucket': COS_BUCKET,
                'path': 'test.mps'
            }
        }
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA_REFERENCES: [
        {
            'id': 'solution.json',
            'type': 's3',
            'connection': {
                'endpoint_url': COS_ENDPOINT,
                'access_key_id': cos_credentials['cos_hmac_keys']["access_key_id"],
                'secret_access_key': cos_credentials['cos_hmac_keys']["secret_access_key"]
            },
            'location': {
                'bucket': COS_BUCKET,
                'path': 'solution.json'
            }
        },
        {
            'id': 'log.txt',
            'type': 's3',
            'connection': {
                'endpoint_url': COS_ENDPOINT,
                'access_key_id': cos_credentials['cos_hmac_keys']["access_key_id"],
                'secret_access_key': cos_credentials['cos_hmac_keys']["secret_access_key"]
            },
            'location': {
                'bucket': COS_BUCKET,
                'path': 'log.txt'
            }
        }
    ]
}


job_details = client.deployments.create_job(deployment_uid,solve_payload)

The closest I have managed to get (almost exactly what I need) is by using most of the code from this sample:
https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/deployments/decision_optimization/Use%20Decision%20Optimization%20to%20plan%20your%20diet.ipynb

Here is a complete working example.

from ibm_watson_machine_learning import APIClient
import os
import wget
import json
import pandas as pd
import time

COS_ENDPOINT = "https://s3.ams03.cloud-object-storage.appdomain.cloud"
model_path = 'do-model.tar.gz'
api_key = 'XXXXX'
access_key_id = "XXXX"
secret_access_key = "XXXX"

location = 'eu-gb'
space_id = 'XXXX'
softwareSpecificationName = "do_12.9"
modelType = "do-docplex_12.9"

wml_credentials = {
    "apikey": api_key,
    "url": 'https://' + location + '.ml.cloud.ibm.com'
}

client = APIClient(wml_credentials)
client.set.default_space(space_id)

if not os.path.isfile(model_path):
    wget.download("https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/decision_optimization/do-model.tar.gz")

software_spec_uid = client.software_specifications.get_uid_by_name(softwareSpecificationName)

model_meta_props = {
    client.repository.ModelMetaNames.NAME: "LOCALLY created DO model",
    client.repository.ModelMetaNames.TYPE: modelType,
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
published_model = client.repository.store_model(model=model_path, meta_props=model_meta_props)
time.sleep(5)  # So that the model is available on the API
published_model_uid = client.repository.get_model_uid(published_model)
client.repository.list_models()

meta_data = {
    client.deployments.ConfigurationMetaNames.NAME: "deployment_DO",
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"name": "S", "num_nodes": 1}
}
deployment_details = client.deployments.create(published_model_uid, meta_props=meta_data)
time.sleep(5)  # So that the deployment is available on the API
deployment_uid = client.deployments.get_uid(deployment_details)
client.deployments.list()


job_payload_ref = {
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA_REFERENCES: [
        {
            'id': 'diet_food.csv',
            'type': 's3',
            'connection': {
                'endpoint_url': COS_ENDPOINT,
                'access_key_id': access_key_id,
                'secret_access_key': secret_access_key
            },
            'location': {
                'bucket': "gvbucketname0api",
                'path': "diet_food.csv"
            }
        },
        {
            'id': 'diet_food_nutrients.csv',
            'type': 's3',
            'connection': {
                'endpoint_url': COS_ENDPOINT,
                'access_key_id': access_key_id,
                'secret_access_key': secret_access_key
            },
            'location': {
                'bucket': "gvbucketname0api",
                'path': "diet_food_nutrients.csv"
            }
        },
        {
            'id': 'diet_nutrients.csv',
            'type': 's3',
            'connection': {
                'endpoint_url': COS_ENDPOINT,
                'access_key_id': access_key_id,
                'secret_access_key': secret_access_key
            },
            'location': {
                'bucket': "gvbucketname0api",
                'path': "diet_nutrients.csv"
            }
        }
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA_REFERENCES: [
        {
            'id': '.*',
            'type': 's3',
            'connection': {
                'endpoint_url': COS_ENDPOINT,
                'access_key_id': access_key_id,
                'secret_access_key': secret_access_key
            },
            'location': {
                'bucket': "gvbucketname0api",
                'path': "${job_id}/${attachment_name}"
            }
        }
    ]
}

job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref)
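Note that create_job is asynchronous, so you typically poll the job until it reaches a terminal state before reading the output. A minimal generic polling sketch follows; the stand-in get_state callable would in practice be something like lambda: client.deployments.get_job_status(job_uid)['state'] (the exact method and response shape depend on your SDK version, so treat that as an assumption):

```python
import time

def wait_for_job(get_state, poll_seconds=5, timeout_seconds=600):
    """Poll get_state() until the job reaches a terminal state."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        state = get_state()
        if state in ('completed', 'failed', 'canceled'):
            return state
        time.sleep(poll_seconds)
    raise TimeoutError('job did not finish in time')

# Example with a stand-in state source; with the WML client you would pass a
# lambda that reads the current job state from the API instead.
states = iter(['queued', 'running', 'completed'])
print(wait_for_job(lambda: next(states), poll_seconds=0))  # → completed
```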

The example above uses a model and some CSV files as input. When I change INPUT_DATA_REFERENCES to use a .mps file (and an empty model), I get the error:

"errors": [
    {
        "code": "invalid_model_archive_in_deployment",
        "message": "Invalid or unrecognized archive type in deployment `XXX-XXX-XXX`.
                    Supported archive types are `zip` or `tar.gz`"
    }
]

I am no expert, but as far as I understand, an .mps file contains both the input data and the model, so I should not have to provide both.

Solution

Alex Fleischer provided the answer on another forum.

The full example can be found here:
https://medium.com/@AlainChabrier/solve-lp-problems-from-do-experiments-9afd4d53aaf5
The link above (whose code is similar to the code in my question) shows an example with an ".lp" file, but the same applies to ".mps" files. (Note that the model type is do-cplex_12.10, not do-docplex_12.10.)

My problem was that I was using an empty model.tar.gz file.
Once the .lp/.mps file is placed inside the archive, everything works as expected.
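Concretely, instead of creating an empty archive as in the question, package the .mps file inside model.tar.gz before calling store_model. A minimal sketch (the placeholder .mps content here is just an illustration; you would add your real model file):

```python
import tarfile

# Placeholder .mps file standing in for the real model from the question.
with open("test.mps", "w") as f:
    f.write("NAME test\nROWS\n N obj\nCOLUMNS\nRHS\nBOUNDS\nENDATA\n")

# Put the model file inside the archive instead of uploading an empty one;
# arcname keeps it at the archive root.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("test.mps", arcname="test.mps")

# Verify the archive contents.
with tarfile.open("model.tar.gz", "r:gz") as tar:
    print(tar.getnames())  # → ['test.mps']
```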