apache-zeppelin 0.8.0 installation using Homebrew on macOS

I have installed Java version "1.8.0_181", Scala, apache-spark, and apache-zeppelin using Homebrew, and no errors showed up during these installations.
If I run $ spark-shell in the terminal, it shows a warning but still brings up the Scala prompt.
However, when I open http://localhost:8080/#/ and run sc.version, it shows the error below:
scala.reflect.internal.MissingRequirementError: object java.lang.Object in compiler mirror not found.
at scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:17)
at scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:18)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:53)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:45)
at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:66)
at scala.reflect.internal.Mirrors$RootsBase.getClassByName(Mirrors.scala:102)
at scala.reflect.internal.Mirrors$RootsBase.getRequiredClass(Mirrors.scala:105)
at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass$lzycompute(Definitions.scala:257)
at scala.reflect.internal.Definitions$DefinitionsClass.ObjectClass(Definitions.scala:257)
at scala.reflect.internal.Definitions$DefinitionsClass.init(Definitions.scala:1394)
at scala.tools.nsc.Global$Run.<init>(Global.scala:1215)
at scala.tools.nsc.interpreter.IMain.compileSourcesKeepingRun(IMain.scala:432)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.compileAndSaveRun(IMain.scala:855)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.compile(IMain.scala:813)
at scala.tools.nsc.interpreter.IMain.bind(IMain.scala:675)
at scala.tools.nsc.interpreter.IMain.bind(IMain.scala:712)
at scala.tools.nsc.interpreter.IMain$$anonfun$quietBind$1.apply(IMain.scala:711)
at scala.tools.nsc.interpreter.IMain$$anonfun$quietBind$1.apply(IMain.scala:711)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:214)
at scala.tools.nsc.interpreter.IMain.quietBind(IMain.scala:711)
at scala.tools.nsc.interpreter.ILoop.scala$tools$nsc$interpreter$ILoop$$loopPostInit(ILoop.scala:891)
at jdk.internal.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.callMethod(BaseSparkScalaInterpreter.scala:270)
at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.callMethod(BaseSparkScalaInterpreter.scala:262)
at org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:84)
at org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:102)
at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:617)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)
How can I fix this problem?

I had multiple issues with the Homebrew installation, but running Zeppelin with Docker worked for me (http://zeppelin.apache.org/download.html#using-the-official-docker-image):
"Use this command to launch Apache Zeppelin in a container.
docker run -p 8080:8080 --rm --name zeppelin apache/zeppelin:0.8.0
To persist logs and notebook directories, use the volume option for the docker container.
docker run -p 8080:8080 --rm -v $PWD/logs:/logs -v $PWD/notebook:/notebook -e ZEPPELIN_LOG_DIR='/logs' -e ZEPPELIN_NOTEBOOK_DIR='/notebook' --name zeppelin apache/zeppelin:0.8.0
If you have trouble accessing localhost:8080 in the browser, please clear the browser cache."
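If the container starts but the notebook page does not load, a quick sanity check (a sketch assuming the container name zeppelin from the commands above):
docker ps --filter name=zeppelin   # confirm the container is running and port 8080 is published
docker logs zeppelin               # watch the startup logs for interpreter or port errors
open http://localhost:8080         # macOS shortcut to open the UI; clear the browser cache if the page looks stale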

In my case the problem was having Java 10 installed in addition to Java 8. Apparently, Java versions above 8 are not yet supported by Spark. The problem is fixable by setting the JAVA_HOME environment variable to your Java 8 installation.
If you don't have it:
brew cask install java8
and the default location on macOS should be:
export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_202.jdk/Contents/Home"
If you launch your Zeppelin server via zeppelin-daemon.sh, just add the export above at the beginning of the file.
You need to find the actual location of zeppelin-daemon.sh; if you installed Zeppelin via Homebrew, it should be:
/usr/local/Cellar/apache-zeppelin/0.8.1/libexec/bin/zeppelin-daemon.sh
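macOS also ships the /usr/libexec/java_home helper, so instead of hard-coding the JDK path you can resolve it (a sketch; the exact JDK version installed on your machine may differ):
/usr/libexec/java_home -V                              # list the installed JDKs
export JAVA_HOME="$(/usr/libexec/java_home -v 1.8)"    # resolve the Java 8 home dynamically
echo "$JAVA_HOME"                                      # verify before adding the export to zeppelin-daemon.sh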

Related

sudo apk: command not found on Linux

I'm attempting to install the SQL Server driver on an AWS SageMaker Notebook instance running Amazon Linux 2. I am following the documentation here - Alpine Linux section: https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver16
When I attempt to install after downloading using the following commands, I get an error:
#Install the package(s)
sudo apk add --allow-untrusted msodbcsql18_18.1.1.1-1_amd64.apk
sudo apk add --allow-untrusted mssql-tools18_18.1.1.1-1_amd64.apk
sudo apk: command not found on Linux
Linux version:
5.10.102-99.473.amzn2.x86_64
What do I need to do before running the commands?
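Note that apk is Alpine Linux's package manager, while Amazon Linux 2 is RPM-based, so the Alpine section of that guide does not apply here; the RHEL section of the same Microsoft page uses yum instead. A rough sketch of that route (repo URL and package names taken from the RHEL 7 instructions and not verified on SageMaker):
# register Microsoft's RHEL 7 repo, which is the closest match for Amazon Linux 2
sudo curl -o /etc/yum.repos.d/mssql-release.repo https://packages.microsoft.com/config/rhel/7/prod.repo
# install the driver and tools from that repo instead of the Alpine .apk files
sudo ACCEPT_EULA=Y yum install -y msodbcsql18 mssql-tools18 unixODBC-devel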

Docker image error "Service chromedriver unexpectedly exited. Status code was: 127"

FROM python:3.7
WORKDIR /opt
RUN curl https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb -o /chrome.deb
RUN dpkg -i /chrome.deb || apt-get install -yf
RUN rm /chrome.deb
ENV CHROMEDRIVER_VERSION 89.0.4389.23
ENV CHROMEDRIVER_DIR /chromedriver
RUN mkdir -p $CHROMEDRIVER_DIR
# Download and install Chromedriver
RUN wget -q --continue -P $CHROMEDRIVER_DIR "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip"
RUN unzip $CHROMEDRIVER_DIR/chromedriver* -d $CHROMEDRIVER_DIR
ENV PATH $CHROMEDRIVER_DIR:$PATH
RUN apt-get update
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3","code.py"]
code.py
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
try:
    chrome_options = Options()
    chrome_options.add_argument("--headless")
    driver = webdriver.Chrome(options=chrome_options)
    driver.get('google.com')
    print(f'{driver.page_source}')
except Exception as e:
    print(f'111111111 Exception: {e}')
try:
    options = webdriver.ChromeOptions()
    options.headless = True
    driver = webdriver.Chrome(options=options)
    driver.get('google.com')
    print(f'{driver.page_source}')
except Exception as e:
    print(f'222222 Exception: {e}')
print(f'h')
requirements.txt
selenium
When I build this image it builds fine, but it throws the following error when I run it:
Service chromedriver unexpectedly exited. Status code was: 127
Any idea where I am going wrong in the Dockerfile? I tried the other posts available but nothing worked for me.
My machine's OS is macOS Catalina, and I was trying to configure the code for a remote host, but it is not even working on my local system.
Could the issue be that I am on one OS while configuring the container for another?
I tried this post as well but nothing worked.
I had the same problem and fixed it by installing the Chrome browser in the image.
Please check this answer: How to install Google chrome in a docker container
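Status code 127 from the chromedriver service usually means the binary could not start, often because shared libraries that the Chrome package would pull in are missing. A sketch of installing Chrome from Google's apt repository in a Debian-based image (the approach from the linked answer; exact package names assumed):
RUN apt-get update && apt-get install -y wget gnupg unzip \
 && wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
 && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list \
 && apt-get update && apt-get install -y google-chrome-stable \
 && rm -rf /var/lib/apt/lists/*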

Is it possible to create custom Linux-based Docker image on Azure Windows Server DSVM

I am using an Azure DSVM in a DevTest Lab running Windows Server 2019. I am trying to get Docker installed and working to allow me to run local experiments from Azure ML Service environments.
I want to build a custom Linux container on Docker - which I believe is possible on Windows from reading some other online posts (I can't use a Linux host for various reasons). When I try to create such an image that contains a WORKDIR ... step, I get a "container ***** encountered an error during CreateProcess: failure in a Windows system call" error.
I installed Docker on the DSVM (which is a Standard D2s_v3) by adding the "Docker" artifact at creation and then running the following commands to enable Linux containers:
$> Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
$> [Environment]::SetEnvironmentVariable("LCOW_SUPPORTED", "1", "Machine")
Running a simple Linux container works fine:
$> docker run --rm -it alpine:latest
/ # ls
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
/ #
To build a custom image, I'm using a simple Dockerfile as follows:
FROM alpine:latest
WORKDIR /abm
The image appears to build successfully:
$> docker build --no-cache -t abm-alpine:workdir -f .\abm-alpine.Dockerfile .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM alpine:latest
---> a187dde48cd2
Step 2/2 : WORKDIR /abm
---> 495f8ecb3a0e
Removing intermediate container 219e91296e47
Successfully built 495f8ecb3a0e
Successfully tagged abm-alpine:workdir
When I run the image, I get the following error:
$> docker run --rm -it abm-alpine:workdir
C:\Program Files\Docker\docker.exe: Error response from daemon: container 01fad57c971d672d91238a6c6ec21376e033006ec4c26563e91e7288cfb3bfeb encountered an error during CreateProcess: failure in a Windows system call: The virtual machine or container exited unexpectedly. (0xc0370106) extra info: {"CommandArgs":["/bin/sh"],"WorkingDirectory":"/abm","Environment":{"HOSTNAME":"01fad57c971d","PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","TERM":"xterm"},"EmulateConsole":true,"CreateStdInPipe":true,"CreateStdOutPipe":true,"ConsoleSize":[50,120],"OCISpecification":{"ociVersion":"1.0.0","process":{"terminal":true,"consoleSize":{"height":50,"width":120},"user":{"uid":0,"gid":0},"args":["/bin/sh"],"env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME=01fad57c971d","TERM=xterm"],"cwd":"/abm","capabilities":{"bounding":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"effective":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"inheritable":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"],"permitted":["CAP_CHOWN","CAP_DAC_OVERRIDE","CAP_FSETID","CAP_FOWNER","CAP_MKNOD","CAP_NET_RAW","CAP_SETGID","CAP_SETUID","CAP_SETFCAP","CAP_SETPCAP","CAP_NET_BIND_SERVICE","CAP_SYS_CHROOT","CAP_KILL","CAP_AUDIT_WRITE"]}},"root":{"path":"rootfs"},"hostname":"01fad57c971d","mounts":[{"destination":"/proc","type":"proc","source":"proc","options":["nosuid","noexec","nodev"]},{"destination":"/dev","type":"tmpfs","source":"tmpfs","options":["nosuid","strictatime","mode=755","size=65536k"]},{"destination":"/dev/pts","type":"devpts","source":"devpts","options":["nosuid","noexec","newinstance","ptmxmode=0666","mode=0620","gid=5"]},{"destination":"/sys","type":"sysfs","source":"sysfs","options":["nosuid","noexec","nodev","ro"]},{"destination":"/sys/fs/cgroup","type":"cgroup","source":"cgroup","options":["ro","nosuid","noexec","nodev"]},{"destination":"/dev/mqueue","type":"mqueue","source":"mqueue","options":["nosuid","noexec","nodev"]},{"destination":"/dev/shm","type":"tmpfs","source":"shm","options":["nosuid","noexec","nodev","mode=1777"]}],"linux":{"resources":{"devices":[{"allow":false,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":5,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":3,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":9,"access":"rwm"},{"allow":true,"type":"c","major":1,"minor":8,"access":"rwm"},{"allow":true,"type":"c","major":5,"minor":0,"access":"rwm"},{"allow":true,"type":"c","major":5,"minor":1,"access":"rwm"},{"allow":false,"type":"c","major":10,"minor":229,"access":"rwm"}]},"namespaces":[{"type":"mount"},{"type":"network"},{"type":"uts"},{"type":"pid"},{"type":"ipc"}],"maskedPaths":["/proc/kcore","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug"],"readonlyPaths":["/proc/asound","/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"]},"windows":{"layerFolders":["C:\\ProgramData\\docker\\lcow\\5ba6a7b4fbdf9748ec89898be9bdaa911ee614436a475945638ab296b1155966","C:\\ProgramData\\docker\\lcow\\01fad57c971d672d9123
8a6c6ec21376e033006ec4c26563e91e7288cfb3bfeb"],"hyperv":{},"network":{"endpointList":["D615E3D5-B6AA-401E-A0A0-72581FA47059"],"allowUnqualifiedDNSQuery":true}}}}.
I've tried various logs (e.g. Get-WinEvent -LogName Microsoft-Windows-Hyper-V-Compute-Operational and Get-EventLog -LogName Application -Source Docker) but cannot see any additional information about the error.
Can anyone advise if it is possible to create custom Linux-based images on a Windows DSVM? If it is, can anyone advise what the problem may be or any additional troubleshooting steps I could take?
Thanks!
It is possible to create Linux containers on Windows Server, although this is currently at an experimental stage.
This article might help: https://www.b2-4ac.com/lcow-linux-containers-on-windows-server/
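If the LCOW_SUPPORTED variable alone is not enough, write-ups such as the linked article also switch the Docker daemon into experimental mode. A sketch for a default Windows Server install (path assumed; this overwrites any existing daemon.json, so merge by hand if you already have one):
# enable experimental daemon features, then restart Docker so the setting takes effect
Set-Content -Path 'C:\ProgramData\docker\config\daemon.json' -Value '{ "experimental": true }'
Restart-Service docker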

Changing my project files doesn't change files inside the Docker machine

I'm trying to use Docker to improve my workflow. I installed "Docker Toolbox for Windows" on my Windows 10 Home edition (since Docker for Windows supposedly only works on the Professional edition). I'm using mgexhev's angular-seed, which claims to provide full Docker support. There is a docker-compose.yml file which links to ./.docker/angular-seed.development.dockerfile.
After git cloning the seed project I can start it by running the commands given on the seed project's GitHub page. So I can see the app after running:
$ docker-compose build
$ docker-compose up -d
But when I change code with Visual Studio Code and save, the livereload doesn't work. The only way I can see my changes is by re-running the build and up commands (which re-runs npm install; about 5 minutes).
In Docker's documentation they say to "Mount a host directory as a data volume" in order to be able to "change the source code and see its effect on the application in real time"
docker run -v //c/<path>:/<container path>
But I'm not sure this is right when I'm using docker-compose? I have also tried running:
docker run -d -P --name web -v //c/Users/k/dev/:/home/app/ angular-seed
docker run -p 5555:5555 -v //c/Users/k/dev/:/home/app/ -w "/home/app/" angular-seed
docker run -p 5555:5555 -v $(pwd):/home/app/ -w "/home/app/" angular-seed
and lots of similar commands, but nothing seems to work.
I tried moving my project from C:/dev/project to home because I read somewhere that there might be access-rights issues when not using the "home" directory, but this made no difference.
I'm also a bit confused that the instructions say to visit localhost:5555. I have to go to dockerIP:5555 to see the app (in case this helps anyone understand why my code doesn't update inside my Docker container).
Surely my changes should move into the Docker environment automatically, or Docker is not very useful for development :)
Looking at the docker-compose.yml you've linked to, I don't see any volume entry. Without that, there's no connection possible between the files on your host and the files inside the container. You'll need a docker-compose.yml that includes a volume entry, like:
version: '2'
services:
  angular-seed:
    build:
      context: .
      dockerfile: ./.docker/angular-seed.development.dockerfile
    command: npm start
    container_name: angular-seed-start
    image: angular-seed
    networks:
      - dev-network
    ports:
      - '5555:5555'
    volumes:
      - .:/home/app/angular-seed
networks:
  dev-network:
    driver: bridge
Docker Machine runs Docker inside a VirtualBox VM. By default, I believe C:\Users is shared into the VM, but you'll need to check the VirtualBox settings to confirm this. Any host directories you try to map into the container are mapped from the VM, so if your folder is not shared into that VM, your files won't be included.
As for the IP, localhost works on Linux hosts and newer versions of Docker for Windows/Mac. Older Docker Machine based installs need to use the IP of the VirtualBox VM.
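A quick way to confirm the bind mount is actually wired through once the volumes entry is in place (a sketch assuming the service name angular-seed from the compose file above):
docker-compose up -d --build                                 # rebuild and restart with the new volume entry
docker-compose exec angular-seed ls /home/app/angular-seed   # should list the project files from your host
docker-machine ip default                                     # on Docker Toolbox, browse to this IP instead of localhost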

How to import .bacpac into docker Sqlserver?

I installed SQL Server on my Mac in a Docker container, following the instructions from this article.
I run the container with Kitematic and managed to connect to the server using Navicat Essentials for SQL Server.
The server has four databases and I can create new ones, but, ideally, I would like to import an existing database as a .bacpac.
The instructions from this answer have been of use to me in the past. Can I run something similar within the container? Or, more generally, is there a way to import a database into the container?
Hi all! We finally have a preview ready for sqlpackage that is built on dotnet core and is cross-platform! Below are the links to download from. They are evergreen links, i.e. each day a new build is uploaded. This way any checked in bug fix is available the next day. Included in the .zip file is the preview EULA.
linux
https://go.microsoft.com/fwlink/?linkid=873926
osx
https://go.microsoft.com/fwlink/?linkid=873927
windows
https://go.microsoft.com/fwlink/?linkid=873928
Release notes:
1. The /p:CommandTimeout parameter is hardcoded to 120
2. Build and deployment contributors are not supported
   a. Need to move to .NET Core 2.1 where System.ComponentModel.Composition.dll is supported
   b. Need to handle case-sensitive paths
3. SQL CLR UDT types are not supported.
   a. This includes SQL Server Types SqlGeography, SqlGeometry, & SqlHierarchyId
4. Older .dacpac and .bacpac files that use Json serialization are not supported
5. Referenced .dacpacs (e.g. master.dacpac) may not resolve due to issues with case-sensitive file systems
For lack of a better method, please provide any feedback you have here on this GitHub issue.
Thanks for giving it a try and letting us know how it goes!
https://github.com/Microsoft/mssql-docker/issues/135#issuecomment-389245587
EDIT: I've made you a Docker image for this
https://hub.docker.com/r/samuelmarks/mssql-server-fts-sqlpackage-linux/
Example of setting up a container, creating a database, copying a .bacpac file over, and importing it into aforementioned database:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 --name sqlfts0 samuelmarks/mssql-server-fts-sqlpackage-linux
docker exec -it sqlfts0 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q 'CREATE DATABASE MyDb0'
docker cp ~/Downloads/foo.bacpac sqlfts0:/opt/downloads/foo.bacpac
docker exec -it sqlfts0 dotnet /opt/sqlpackage/sqlpackage.dll /tsn:localhost /tu:SA /tp:'<YourStrong!Passw0rd>' /A:Import /tdn:MyDb0 /sf:foo.bacpac
It looks like Microsoft has implemented support for this in sqlpackage, with documentation!
You will have to add sqlpackage to your container.
You can download it here. (Optionally, direct link to the Linux package here; hopefully it doesn't change.)
The following are instructions for running this from a Windows machine -- obviously it's the bare minimum to get it working. Please change the passwords, and consider putting this in a docker-compose.yml for re-use.
I unzip the above package into a folder 'c:\sqlpackage' (my Windows docker run doesn't allow relative paths), and then mount that into the container along with the bacpac, like so:
docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Asdf1234" -v c:\sqlpackage:/opt/sqlpackage -v c:\yourdb.bacpac:/tmp/yourdb.bacpac -p 1433:1433 --name mssql-server-example microsoft/mssql-server-linux:2017-latest
here is what a *nix user could run alternatively:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Asdf1234' -v ./sqlpackage:/opt/sqlpackage -v ./yourdb.bacpac:/tmp/yourdb.bacpac -p 1433:1433 --name mssql-server-example microsoft/mssql-server-linux:2017-latest
and finally, attach to your container and run:
/opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:targetdbname /tu:sa /tp:Asdf1234 /sf:/tmp/yourdb.bacpac
After this, you should be able to connect to localhost with SSMS, using the username and password you provided above, and see 'targetdbname'! These are mostly notes I wrote for myself, but I'm sure others could use them too.
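If you don't have SSMS handy (e.g. on the Mac from the question), one alternative check is the sqlcmd that ships inside the mssql-server-linux image (a sketch using the container name and SA password from the commands above):
docker exec -it mssql-server-example /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Asdf1234' -Q "SELECT name FROM sys.databases"   # targetdbname should appear in the list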
You can use free Azure Data Studio from Microsoft. Once you have it installed, install the extension "Admin Pack for SQL Server" from Microsoft. Then you can import bacpac files with ease.
This does not appear to be a supported feature in the Linux implementation.
See this link.
