Install JDK on Mac OS

Step 1: Check if JDK has been Pre-Installed

On some older Mac systems (earlier than Mac OS X 10.7 Lion), the JDK comes pre-installed. To check whether the JDK is installed, open a "Terminal" (search for "Terminal"; or Finder ⇒ Go ⇒ Utilities ⇒ Terminal) and issue this command:

javac -version
  • If a JDK version number is returned (e.g., javac 1.x.x), the JDK is already installed. If the version is older than 1.7, proceed to Step 2 to install the latest JDK; otherwise, skip ahead to the verification commands in Step 3.
  • If the message "command not found" appears, the JDK is NOT installed. Proceed to Step 2.
  • If the message "To open javac, you need a Java runtime" appears, select "Install" and follow the instructions to install the JDK. Then verify the installation as described in Step 3.
Step 2: Download JDK
  1. Go to the Java SE download site at http://www.oracle.com/technetwork/java/javase/downloads/index.html. Under "Java Platform, Standard Edition" ⇒ "Java SE 8u{xx}" ⇒ click the "JDK Download" button.
  2. Check "Accept License Agreement"
  3. Choose your operating platform, e.g., "Mac OS X" (jdk-8u{xx}-macosx-x64.dmg). Download the installer.
Step 3: Install JDK/JRE
  1. Double-click the downloaded Disk Image (DMG) file and follow the on-screen instructions to install the JDK/JRE.
  2. Eject the DMG file.
  3. To verify your installation, open a "Terminal" and issue these commands.

     // Display the JDK version
     javac -version
     javac 1.x.x_xx

     // Display the JRE version
     java -version
     java version "1.x.x_xx"
     Java(TM) SE Runtime Environment (build 1.x.x_xx-xxx)
     Java HotSpot(TM) 64-Bit Server VM (build xx.xx-xxx, mixed mode)

     // Display the location of the Java compiler
     which javac
     /usr/bin/javac

     // Display the location of the Java runtime
     which java
     /usr/bin/java
    

Install Scala on Mac

1. Homebrew (third-party package manager):

$ brew install scala     # "brew install sbt" additionally provides the sbt build tool

2. Or download the tarball from the official site:

http://www.scala-lang.org/download/

3. Extract the tarball into the /usr/local/ directory:

sudo tar -zxf ~/Downloads/scala-2.12.1.tgz -C /usr/local/

Then enter that directory with cd /usr/local. The extracted directory is named scala-2.12.1, so for convenience in later configuration, rename it to scala:

sudo mv ./scala-2.12.1 ./scala

Because /usr/local is an administrator-level directory, prefix the commands with sudo if the current user is not an administrator.

4. Configure Scala's environment variables. Add the following to /etc/profile (e.g., sudo vim /etc/profile), then reload the file:

export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin

source /etc/profile
Then type the following at the command line:

scala

and press Enter. If the console prints the Scala welcome banner, Scala is installed and its environment variables are configured correctly.
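For reference, the banner looks roughly like this (exact version strings depend on your installation):

     Welcome to Scala 2.12.1 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_xx).
     Type in expressions for evaluation. Or try :help.

     scala>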

Install Spark

1. As with the Scala installation, use the command:

sudo tar -zxf ~/Downloads/spark-2.0.2-bin-hadoop2.7.tgz -C /usr/local/

This extracts the archive directly into the /usr/local/ directory. Then enter it with cd /usr/local and rename the Spark directory:

sudo mv ./spark-2.0.2-bin-hadoop2.7 ./spark

2. Configure Spark's environment variables.

Run sudo vim /etc/profile, add the following to the file, and reload it afterwards with source /etc/profile:

export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin

3. Go to the conf directory under the Spark installation:

cd /usr/local/spark/conf

and run:

cp spark-env.sh.template spark-env.sh

This makes a copy of spark-env.sh.template named spark-env.sh. Open the copy:

vim spark-env.sh

and add the following lines:

export SCALA_HOME=/usr/local/scala     # Scala installation used by Spark
export SPARK_MASTER_IP=localhost       # bind the standalone master to localhost
export SPARK_WORKER_MEMORY=4g          # memory each worker may use

4. Test Spark. Because the Spark environment variables are configured, you can run the following command from any directory:

spark-shell
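If everything is configured, the shell starts and ends at a Scala prompt; the startup output looks roughly like this (Web UI address, app id, and Java version depend on your setup):

     Spark context Web UI available at http://localhost:4040
     Spark context available as 'sc' (master = local[*], app id = local-xxxxxxxxxxxxx).
     Spark session available as 'spark'.
     Welcome to
           ____              __
          / __/__  ___ _____/ /__
         _\ \/ _ \/ _ `/ __/  '_/
        /___/ .__/\_,_/_/ /_/\_\   version 2.0.2
           /_/

     Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_xx)

     scala>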

5. Verify the installation

Now you can check the results of your work: go to the sbin directory under the Spark installation and run the start-all.sh script:

./start-all.sh

If running start-all.sh on macOS fails with an SSH "Connection refused" error like the following:

% sh start-all.sh
starting org.apache.spark.deploy.master.Master, logging to ...
localhost: ssh: connect to host localhost port 22: Connection refused

you need to enable the "Remote Login" feature on your Mac: System Preferences ⇒ Sharing ⇒ check "Remote Login".
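You can confirm that SSH now works before retrying start-all.sh (the login banner will differ on your machine):

$ ssh localhost
Last login: ...
$ exit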

Check the running processes with the jps command.
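Expected output includes one Master and one Worker process (the PIDs will differ):

$ jps
1234 Master
1256 Worker
1300 Jps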

Check the execution status in a browser:

a. http://localhost:8080/ shows the Spark cluster status. This port frequently conflicts with other services;

   you can pick a different port by adding, for example, export SPARK_MASTER_WEBUI_PORT=8081 to spark-env.sh.

b. http://localhost:4040/jobs/ shows the status of Spark jobs and tasks.

c. http://localhost:50070/ shows the Hadoop cluster status (if Hadoop is installed and running).

Stop Spark
Go to Spark's sbin directory and run:

./stop-all.sh

6. PySpark in Jupyter

Set up environment variables. You can set them from inside Python, for example at the top of a notebook:

import os
import sys

# SPARK_HOME must point at your Spark installation; with this guide's layout
# that is /usr/local/spark (/usr/hdp/current/spark-client on an HDP cluster).
os.environ["SPARK_HOME"] = "/usr/local/spark"
os.environ["PYLIB"] = os.environ["SPARK_HOME"] + "/python/lib"
# The py4j version must match the file actually shipped in $SPARK_HOME/python/lib.
sys.path.insert(0, os.environ["PYLIB"] + "/py4j-0.10.4-src.zip")
sys.path.insert(0, os.environ["PYLIB"] + "/pyspark.zip")
Or export them in the shell before launching Jupyter:

export SPARK_HOME="/usr/local/spark"
export PYTHONPATH=$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$SPARK_HOME/python/lib/pyspark.zip:$PYTHONPATH
export PATH=/usr/local/anaconda/bin:$PATH    # if you use Anaconda's Jupyter

jupyter notebook --no-browser --ip 0.0.0.0 --port 8888

Load the PySpark module:

from pyspark import SparkContext
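A minimal smoke test, assuming the environment variables above are in effect (the app name and sample numbers are arbitrary):

from pyspark import SparkContext

# Run a tiny local job to confirm PySpark works end to end.
sc = SparkContext("local[2]", "pyspark-smoke-test")
rdd = sc.parallelize(range(100))
print(rdd.sum())   # expected: 4950
sc.stop()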

7. TensorFlow on Spark

1. Clone the TensorFlowOnSpark code:

git clone https://github.com/yahoo/TensorFlowOnSpark.git
cd TensorFlowOnSpark
export TFoS_HOME=$(pwd)

2. Install Spark (as described above).

3. Install TensorFlow and TensorFlowOnSpark:

sudo pip install tensorflow
sudo pip install tensorflowonspark
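A quick sanity check, assuming pip installed both packages into the Python you are running (version numbers will vary):

# Both imports should succeed without errors.
import tensorflow as tf
import tensorflowonspark

print(tf.__version__)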

8. Jupyter Notebook with Spark Kernel

Next, if you want to install a Spark kernel, use Apache Toree. Install Toree via pip with pip install toree, then register the Toree kernels with Jupyter:

jupyter toree install --spark_home=/usr/local/spark/ --interpreters=Scala,PySpark

Make sure you fill in the --spark_home argument correctly; it should point to the unzipped Spark directory you downloaded earlier. Also note that if you don't specify PySpark in the --interpreters argument, only the Scala kernel is installed by default. Next, verify that the kernel is included in the following list:

jupyter kernelspec list
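The output should include the Toree kernels, roughly like this (installation paths vary by system):

Available kernels:
  apache_toree_scala       /usr/local/share/jupyter/kernels/apache_toree_scala
  apache_toree_pyspark     /usr/local/share/jupyter/kernels/apache_toree_pyspark
  python3                  /usr/local/share/jupyter/kernels/python3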

Start Jupyter notebook as usual with jupyter notebook, or configure Spark even further with, for example, the following line:

SPARK_OPTS='--master=local[4]' jupyter notebook

which runs Spark locally with 4 threads.

Troubleshooting installation problems

1. SSH is not configured (see the "Remote Login" fix above).

2. Running the scala command prints an error in the console, which means Scala was not installed successfully or the installed Scala is incompatible with the JDK. Note that Scala 2.12.x (as installed above) requires Java 8, while Scala 2.11.8 works with JDK 1.7, so I suggest sticking to one compatible pairing. I have stepped in all of these pits myself, which is how I can summarize the experience for you. (sad face)

3. Other errors. Check each of the following possible causes one by one:

  • 1. JDK: the JDK version is wrong (pick one compatible with your Scala, as noted above), or the version is correct but its environment variables were never configured. I set the environment variables in two places, /etc/profile and ~/.bash_profile, as follows:

    export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/Home
    export PATH=$JAVA_HOME/bin:$PATH
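    The JDK path above varies with the installed version; macOS ships a helper, /usr/libexec/java_home, that prints the active JDK's home directory, so a more robust variant is:

    export JAVA_HOME=$(/usr/libexec/java_home)
    export PATH=$JAVA_HOME/bin:$PATH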
    
  • 2. Scala: the Scala version is wrong (I suggest 2.11.8 with JDK 1.7), or Scala's environment variables are not configured; set them as described earlier in this guide.

  • 3. The Hadoop and Spark versions are incompatible: when downloading Spark from the official site, make sure the Hadoop version in the package type you select matches the Hadoop version already installed on your machine.
