Spark NLP Java error when trying to display model results

Problem description

I am trying to display the results of a practice NLP model built with Spark NLP, but I keep getting the error below. Can anyone help me out here? Earlier in the code, the .show() method works fine when I display the input DataFrame; it only fails when I try to display any part of the model's results.

I am running the code in a Jupyter Notebook on a Windows machine, which has PySpark with Spark 3.0.3 and Hadoop 2.7 installed.
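For reference, pretrained Spark NLP models are sensitive to the Spark / Spark NLP version pairing, so it can help to confirm which versions are actually loaded in the session. A minimal check, using only the public sparknlp.version() helper and the SparkSession's version attribute:

import sparknlp

spark = sparknlp.start()
# Print the versions actually in use in this session.
print('Spark NLP version:', sparknlp.version())
print('Apache Spark version:', spark.version)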

Code used

import findspark
findspark.init()
findspark.find()
import pyspark

import sparknlp
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline

spark = sparknlp.start()

data = spark.createDataFrame([['Peter is a godo person living in Germany. Paula is also a good person. She lives in London']]).toDF('text')

data.show(truncate=False)

document = DocumentAssembler().setInputCol('text').setOutputCol('document').setCleanupMode('shrink')

sentence = SentenceDetector().setInputCols('document').setOutputCol('sentence')

sentence.setExplodeSentences(True)

tokenizer = Tokenizer().setInputCols('sentence').setOutputCol('token')

checker = NorvigSweetingModel.pretrained().setInputCols(['token']).setOutputCol('checked')

embeddings = WordEmbeddingsModel.pretrained().setInputCols(['sentence','checked']).setOutputCol('embeddings')

ner = NerDLModel.pretrained().setInputCols(['sentence','checked','embeddings']).setOutputCol('ner')

converter = NerConverter().setInputCols(['sentence','ner']).setOutputCol('chunk')

pipeline = Pipeline().setStages([document,sentence,tokenizer,checker,embeddings,ner,converter])

model = pipeline.fit(data)

result = model.transform(data)

# LINE THAT TRIGGERS ERROR
result.select('chunk.result').show(truncate=False)
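Note that Spark evaluates transformations lazily, so although the select on 'chunk.result' is the line that raises, the underlying failure occurs when the pipeline's annotator UDFs first execute during show(). For completeness, once transform() succeeds, a common way to inspect each chunk together with its entity label is an explode over the annotation array (a sketch using standard pyspark.sql.functions; the 'entity' metadata key is the label NerConverter attaches):

import pyspark.sql.functions as F

# Flatten the array of chunk annotations, then pull out the chunk
# text and the NER label stored in the annotation metadata.
result.select(F.explode('chunk').alias('c')) \
      .select(F.col('c.result').alias('chunk'),
              F.col('c.metadata')['entity'].alias('entity')) \
      .show(truncate=False)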

Error

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-75-4f3ba5a75c4a> in <module>
----> 1 result.select('chunk.result').show(truncate=False)

C:\Spark\python\pyspark\sql\dataframe.py in show(self,n,truncate,vertical)
    440             print(self._jdf.showString(n,20,vertical))
    441         else:
--> 442             print(self._jdf.showString(n,int(truncate),vertical))
    443 
    444     def __repr__(self):

C:\Spark\python\lib\py4j-0.10.9-src.zip\py4j\java_gateway.py in __call__(self,*args)
   1302 
   1303         answer = self.gateway_client.send_command(command)
-> 1304         return_value = get_return_value(
   1305             answer,self.gateway_client,self.target_id,self.name)
   1306 

C:\Spark\python\pyspark\sql\utils.py in deco(*a,**kw)
    126     def deco(*a,**kw):
    127         try:
--> 128             return f(*a,**kw)
    129         except py4j.protocol.Py4JJavaError as e:
    130             converted = convert_exception(e.java_exception)

C:\Spark\python\lib\py4j-0.10.9-src.zip\py4j\protocol.py in get_return_value(answer,gateway_client,target_id,name)
    324             value = OUTPUT_CONVERTER[type](answer[2:],gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id,".",name),value)

Py4JJavaError: An error occurred while calling o1393.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 6 in stage 39.0 failed 1 times, most recent failure: Lost task 6.0 in stage 39.0 (TID 174, DESKTOP-G6LQ7L8, executor driver): org.apache.spark.SparkException: Failed to execute user defined function(HasSimpleAnnotate$$Lambda$2720/1692472191: (array<array<struct<annotatorType:string,begin:int,end:int,result:string,metadata:map<string,string>,embeddings:array<float>>>>) => array<struct<annotatorType:string,embeddings:array<float>>>)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1209)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1215)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:463)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:466)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.Exception: feature Number of words in the dictionary is not set
    at com.johnsnowlabs.nlp.serialization.Feature.$anonfun$getOrDefault$1(Feature.scala:81)
    at scala.Option.getOrElse(Option.scala:189)
    at com.johnsnowlabs.nlp.serialization.Feature.getOrDefault(Feature.scala:81)
    at com.johnsnowlabs.nlp.HasFeatures.$$(HasFeatures.scala:39)
    at com.johnsnowlabs.nlp.HasFeatures.$$$(HasFeatures.scala:39)
    at com.johnsnowlabs.nlp.AnnotatorModel.$$(AnnotatorModel.scala:14)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.allWords$lzycompute(NorvigSweetingModel.scala:125)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.allWords(NorvigSweetingModel.scala:124)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.getSuggestion(NorvigSweetingModel.scala:189)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.getBestSpellingSuggestion(NorvigSweetingModel.scala:170)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.checkSpellWord(NorvigSweetingModel.scala:154)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.$anonfun$annotate$1(NorvigSweetingModel.scala:137)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at scala.collection.TraversableLike.map(TraversableLike.scala:238)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
    at scala.collection.AbstractTraversable.map(Traversable.scala:108)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.annotate(NorvigSweetingModel.scala:136)
    at com.johnsnowlabs.nlp.HasSimpleAnnotate.$anonfun$dfAnnotate$1(HasSimpleAnnotate.scala:24)
    ... 27 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2114)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2135)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2154)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:472)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:425)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:47)
    at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3627)
    at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2697)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:767)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2697)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2904)
    at org.apache.spark.sql.Dataset.getRows(Dataset.scala:300)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:337)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Failed to execute user defined function(HasSimpleAnnotate$$Lambda$2720/1692472191: (array<array<struct<annotatorType:string,embeddings:array<float>>>)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1209)
    at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1215)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
    at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:345)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
    at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:463)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:466)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    ... 1 more
Caused by: java.lang.Exception: feature Number of words in the dictionary is not set
    at com.johnsnowlabs.nlp.serialization.Feature.$anonfun$getOrDefault$1(Feature.scala:81)
    at scala.Option.getOrElse(Option.scala:189)
    at com.johnsnowlabs.nlp.serialization.Feature.getOrDefault(Feature.scala:81)
    at com.johnsnowlabs.nlp.HasFeatures.$$(HasFeatures.scala:39)
    at com.johnsnowlabs.nlp.HasFeatures.$$$(HasFeatures.scala:39)
    at com.johnsnowlabs.nlp.AnnotatorModel.$$(AnnotatorModel.scala:14)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.allWords$lzycompute(NorvigSweetingModel.scala:125)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.allWords(NorvigSweetingModel.scala:124)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.getSuggestion(NorvigSweetingModel.scala:189)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.getBestSpellingSuggestion(NorvigSweetingModel.scala:170)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.checkSpellWord(NorvigSweetingModel.scala:154)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.$anonfun$annotate$1(NorvigSweetingModel.scala:137)
    at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at scala.collection.TraversableLike.map(TraversableLike.scala:238)
    at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
    at scala.collection.AbstractTraversable.map(Traversable.scala:108)
    at com.johnsnowlabs.nlp.annotators.spell.norvig.NorvigSweetingModel.annotate(NorvigSweetingModel.scala:136)
    at com.johnsnowlabs.nlp.HasSimpleAnnotate.$anonfun$dfAnnotate$1(HasSimpleAnnotate.scala:24)
    ... 27 more
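The root cause line, "feature Number of words in the dictionary is not set", is thrown inside NorvigSweetingModel: the pretrained spell checker's word dictionary was never populated. One commonly suggested explanation is an incomplete or corrupted cached download of the pretrained model rather than a problem with the final select. Below is a hedged diagnostic sketch (not a confirmed fix) that runs only the spell-checker stage; if this minimal pipeline reproduces the error, deleting the model's folder under ~/cache_pretrained (the default download location) and calling pretrained() again forces a fresh download:

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, NorvigSweetingModel

# Minimal pipeline: document -> token -> spell checker only.
doc = DocumentAssembler().setInputCol('text').setOutputCol('document')
tok = Tokenizer().setInputCols(['document']).setOutputCol('token')
spell = NorvigSweetingModel.pretrained().setInputCols(['token']).setOutputCol('checked')

mini = Pipeline(stages=[doc, tok, spell]).fit(data)
# If the dictionary error reappears here, the cached pretrained model
# is the suspect, not the NER stages or the final select.
mini.transform(data).select('checked.result').show(truncate=False)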
