Error when loading data from Data Fusion into Salesforce

Problem description

I'm hitting the following error when trying to load data from Data Fusion into Salesforce:

java.lang.RuntimeException: There was issue communicating with Salesforce
    at io.cdap.plugin.salesforce.plugin.sink.batch.SalesforceOutputFormat.getRecordWriter(SalesforceOutputFormat.java:53) ~[1599122485492-0/:na]
    at org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.initWriter(SparkHadoopWriter.scala:350) ~[spark-core_2.11-2.3.4.jar:2.3.4]
    at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:120) ~[spark-core_2.11-2.3.4.jar:2.3.4]
    at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83) ~[spark-core_2.11-2.3.4.jar:2.3.4]
    at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78) ~[spark-core_2.11-2.3.4.jar:2.3.4]
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) ~[spark-core_2.11-2.3.4.jar:2.3.4]
    at org.apache.spark.scheduler.Task.run(Task.scala:109) ~[spark-core_2.11-2.3.4.jar:2.3.4]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) ~[spark-core_2.11-2.3.4.jar:2.3.4]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_252]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_252]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_252]
Caused by: com.sforce.async.AsyncApiException: InvalidJob : Invalid job id: null
    at com.sforce.async.BulkConnection.parseAndThrowException(BulkConnection.java:182) ~[na:na]
    at com.sforce.async.BulkConnection.doHttpGet(BulkConnection.java:753) ~[na:na]
    at com.sforce.async.BulkConnection.getJobStatus(BulkConnection.java:769) ~[na:na]
    at com.sforce.async.BulkConnection.getJobStatus(BulkConnection.java:760) ~[na:na]
    at io.cdap.plugin.salesforce.plugin.sink.batch.SalesforceRecordWriter.<init>(SalesforceRecordWriter.java:69) ~[1599122485492-0/:na]
    at io.cdap.plugin.salesforce.plugin.sink.batch.SalesforceOutputFormat.getRecordWriter(SalesforceOutputFormat.java:51) ~[1599122485492-0/:na]
    ... 10 common frames omitted
2020-09-03 08:41:28,595 - WARN  [task-result-getter-0:o.a.s.ThrowableSerializationWrapper@192] - Task exception could not be deserialized
java.lang.ClassNotFoundException: Class not found in all delegated ClassLoaders: com.sforce.async.AsyncApiException
    at io.cdap.cdap.common.lang.CombineClassLoader.findClass(CombineClassLoader.java:96) ~[na:na]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[na:1.8.0_252]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[na:1.8.0_252]
    at io.cdap.cdap.common.lang.WeakReferenceDelegatorClassLoader.findClass(WeakReferenceDelegatorClassLoader.java:58) ~[na:na]

What does this error mean? I've made sure the input fields match the SObject definition.

Solution

I looked at the stack trace, and I think I know what the problem is.

The property mapred.salesforce.job.id is undefined. When the code executes, it looks up the value for this key, and because it is undefined, the job fails (hence the "Invalid job id: null" in the stack trace). I believe you need to set mapred.salesforce.job.id as a runtime property. To do that in Data Fusion:

  1. In Pipeline Studio, navigate to the details page of the pipeline you want to configure.
  2. Click the drop-down menu next to the "Run" button.
  3. Set the cluster properties you need, prefixing every property name with system.profile.properties.. In our case, I believe the name should be system.profile.properties.mapred:mapred.salesforce.job.id, and the value should be the number you want to use as the ID.
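If you prefer scripting over the UI, Data Fusion pipelines can also be started through the underlying CDAP REST API, passing runtime arguments as a JSON body. Below is a minimal sketch of how that payload could be built; the instance URL, namespace, pipeline name, and the job-id value are all placeholders I've made up for illustration, not values from this post.

```python
import json

# Hypothetical placeholders -- replace with your own instance details.
instance_url = "https://my-instance-dot-usw1.datafusion.googleusercontent.com/api"
namespace = "default"
pipeline = "salesforce-sink-pipeline"

# Runtime arguments for the pipeline run. Cluster properties are prefixed
# with "system.profile.properties.", and (for Dataproc profiles) the Hadoop
# property uses the "<file-prefix>:<key>" form, as in step 3 above.
runtime_args = {
    "system.profile.properties.mapred:mapred.salesforce.job.id": "12345",
}

payload = json.dumps(runtime_args)

# The payload would be POSTed (with an OAuth bearer token) to:
# {instance_url}/v3/namespaces/{namespace}/apps/{pipeline}/workflows/DataPipelineWorkflow/start
start_url = (
    f"{instance_url}/v3/namespaces/{namespace}"
    f"/apps/{pipeline}/workflows/DataPipelineWorkflow/start"
)
print(start_url)
print(payload)
```

The actual POST is omitted here since it requires a live instance and credentials; any HTTP client (curl, requests) would work with the URL and body shown.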