
Any code is not running #39

Open · yugam1 opened this issue Jul 12, 2017 · 3 comments

@yugam1 commented Jul 12, 2017

For any code I run on this kernel, nothing happens: the only output is "Intitializing Scala interpreter ..." and the code never completes.
[screenshot: notebook cell stuck at "Intitializing Scala interpreter ..."]

@mariusvniekerk (Collaborator)

Do you have logs from the NotebookApp? It sounds like something is going wrong with how Apache Spark is being started.

@yugam1 (Author) commented Jul 13, 2017

[I 10:38:38.759 NotebookApp] Accepting one-time-token-authenticated connection from ::1
[I 10:39:37.282 NotebookApp] Creating new notebook in
[I 10:39:41.453 NotebookApp] Kernel started: e634dfd9-9c9e-4024-94df-519fbfc874b5
[I 10:39:46.866 NotebookApp] Adapting to protocol v5.1 for kernel e634dfd9-9c9e-4024-94df-519fbfc874b5
[MetaKernelApp] ERROR | Exception in message handler:
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell
    handler(stream, idents, msg)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request
    user_expressions, allow_stdin)
  File "C:\ProgramData\Anaconda3\lib\site-packages\metakernel\_metakernel.py", line 357, in do_execute
    retval = self.do_execute_direct(code)
  File "C:\ProgramData\Anaconda3\lib\site-packages\spylon_kernel\scala_kernel.py", line 141, in do_execute_direct
    res = self._scalamagic.eval(code.strip(), raw=False)
  File "C:\ProgramData\Anaconda3\lib\site-packages\spylon_kernel\scala_magic.py", line 155, in eval
    intp = self.get_scala_interpreter()
  File "C:\ProgramData\Anaconda3\lib\site-packages\spylon_kernel\scala_magic.py", line 46, in get_scala_interpreter
    self.interp = get_scala_interpreter()
  File "C:\ProgramData\Anaconda3\lib\site-packages\spylon_kernel\scala_interpreter.py", line 562, in get_scala_interpreter
    scala_intp = initialize_scala_interpreter()
  File "C:\ProgramData\Anaconda3\lib\site-packages\spylon_kernel\scala_interpreter.py", line 163, in initialize_scala_interpreter
    spark_session, spark_jvm_helpers, spark_jvm_proc = init_spark()
  File "C:\ProgramData\Anaconda3\lib\site-packages\spylon_kernel\scala_interpreter.py", line 78, in init_spark
    import pyspark.java_gateway
  File "C:\ProgramData\spark-2.1.0-bin-hadoop2.7\python\pyspark\__init__.py", line 44, in <module>
    from pyspark.context import SparkContext
  File "C:\ProgramData\spark-2.1.0-bin-hadoop2.7\python\pyspark\context.py", line 40, in <module>
    from pyspark.rdd import RDD, _load_from_socket, ignore_unicode_prefix
  File "C:\ProgramData\spark-2.1.0-bin-hadoop2.7\python\pyspark\rdd.py", line 47, in <module>
    from pyspark.statcounter import StatCounter
  File "C:\ProgramData\spark-2.1.0-bin-hadoop2.7\python\pyspark\statcounter.py", line 24, in <module>
    from numpy import maximum, minimum, sqrt
  File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\__init__.py", line 142, in <module>
    from . import add_newdocs
  File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
    from numpy.lib import add_newdoc
  File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
    from .type_check import *
  File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
    import numpy.core.numeric as nx
  File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\__init__.py", line 72, in <module>
    from numpy.testing.nosetester import _numpy_tester
  File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\testing\__init__.py", line 10, in <module>
    from unittest import TestCase
  File "C:\ProgramData\Anaconda3\lib\unittest\__init__.py", line 58, in <module>
    from .result import TestResult
  File "C:\ProgramData\Anaconda3\lib\unittest\result.py", line 7, in <module>
    from . import util
  File "C:\ProgramData\Anaconda3\lib\unittest\util.py", line 119, in <module>
    _Mismatch = namedtuple('Mismatch', 'actual expected value')
  File "C:\ProgramData\spark-2.1.0-bin-hadoop2.7\python\pyspark\serializers.py", line 393, in namedtuple
    cls = _old_namedtuple(*args, **kwargs)
TypeError: namedtuple() missing 3 required keyword-only arguments: 'verbose', 'rename', and 'module'
[MetaKernelApp] ERROR | No such comm target registered: jupyter.widget.version

@mariusvniekerk (Collaborator)

So this is probably due to an incompatibility between pyspark 2.1.0 and Python 3.6. Try running it with Python 3.5.
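
For context, the bottom of the traceback matches a known pyspark 2.1.0 bug (tracked upstream as SPARK-19019 and fixed in Spark 2.1.1): serializers.py hijacks collections.namedtuple, but the copy it keeps of the original function drops __kwdefaults__, and Python 3.6's namedtuple has three keyword-only parameters (verbose, rename, module) whose defaults live there. A minimal sketch of the failure, assuming Python 3.6; _copy_func below follows the pyspark 2.1.0 helper of the same name:

```python
import collections
import types


def _copy_func(f):
    # pyspark 2.1.0 copies collections.namedtuple like this before replacing
    # it. The copy carries __defaults__ but NOT __kwdefaults__, so on
    # Python 3.6 the keyword-only parameters (verbose, rename, module)
    # lose their default values.
    return types.FunctionType(f.__code__, f.__globals__, f.__name__,
                              f.__defaults__, f.__closure__)


_old_namedtuple = _copy_func(collections.namedtuple)

# Works on Python 3.5 (no keyword-only parameters there); on Python 3.6
# this raises the error from the log above:
# TypeError: namedtuple() missing 3 required keyword-only arguments:
# 'verbose', 'rename', and 'module'
Point = _old_namedtuple('Point', 'x y')
print(Point(1, 2))
```

Any later import that calls namedtuple (here unittest.util, pulled in via numpy) then hits this TypeError, which is why both workarounds help: Python 3.5's namedtuple has no keyword-only parameters, and Spark 2.1.1 fixed the hijack to preserve the keyword-only defaults.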
