Using Apache Spark to Analyze Large Neuroimaging Datasets

Tags: Apache Spark, data science, Python

This article was written by Sergul Aydore, Ph.D., and Syed Ashrafulla, Ph.D.

Sergul and Syed received their Ph.D.s in Electrical Engineering in 2014 from the University of Southern California, applying signal processing to neuroimaging data. They continue to use machine learning on brain imaging data as a pastime and to share their knowledge with the community. They constantly challenge each other as good buddies and like to call themselves “signal learners.”

The views expressed in this article are those of the authors and not of their employers or Domino Data Lab.

Principal Component Analysis in Neuroimaging Data Using PySpark

In this post we will describe how we used PySpark, through Domino's data science platform, to analyze dominant components in high-dimensional neuroimaging data. We will demonstrate how to perform Principal Components Analysis (PCA) on a dataset large enough that standard single-computer techniques will not work.

PCA is a widely used tool in machine learning, data analysis, and statistics. The goal of PCA is to represent the data in a lower-dimensional space. PCA is useful for denoising, as well as for exploring the dominant linear trends in the data. By construction, the dimensions PCA returns are orthonormal and capture the largest directions of variance in the data.[2]

1 - What is PCA?

To describe PCA, we will start with a data set that has m features and n observations, represented as a matrix \bX with m rows and n columns. Let's also assume we've subtracted a constant from each feature so that the resulting features all have mean 0.

Our goal is to find the d vectors that best capture the variation in the dataset across observations, represented as a matrix \bZ with d uncorrelated rows and n columns. There are two ways to find this summarization of the data: using the sample covariance matrix, and using Singular Value Decomposition (SVD).

1.1 - PCA Using the Covariance Matrix

Specifically, we will use the eigenvalue decomposition of the sample covariance matrix. The sample covariance matrix is \bC_{\bx\bx} = \frac{1}{n}\bX \bX^T, where the entry C_{ij} is the covariance between the i-th and j-th features. Our goal is to find a matrix \bP that projects \bX onto a lower-dimensional subspace

(1)   \begin{equation*} \bZ = \mathbf{P}\bX  \end{equation*}

where the covariance matrix of \bZ, \bC_{\bz\bz}, is diagonal (equivalently, the rows of \bZ are orthogonal to each other):

(2)   \begin{equation*} \bC_{\bz\bz} = \frac{1}{n}\bZ \bZ^T = \bP \bC_{\bx\bx} \bP^T = \bD  \end{equation*}

and D_{ij} = 0 for i \neq j.

Since \bC_{\bx\bx} is a symmetric matrix, its eigenvalue decomposition can be written as:

(3)   \begin{equation*} \bC_{\bx\bx} = \bE \bD \bE^T \end{equation*}

where \bE is an orthogonal matrix, each column an eigenvector of \bC_{\bx\bx}. Plugging into Equation (2), we can see that setting \bP = \bE^T satisfies the conditions on \bD. Hence we get our principal components by setting \bZ =\bE^T \bX.
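The covariance route above can be sketched in a few lines of NumPy. This is a small illustration we added on random data, not part of the original post's code:

```python
import numpy as np

# Illustration on random data: PCA via eigendecomposition of the covariance matrix
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))            # m = 5 features, n = 100 observations
X = X - X.mean(axis=1, keepdims=True)        # subtract each feature's mean

C = X @ X.T / X.shape[1]                     # sample covariance C_xx (m x m)
eigvals, E = np.linalg.eigh(C)               # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]            # reorder to descending variance
eigvals, E = eigvals[order], E[:, order]

Z = E.T @ X                                  # principal components, Z = E^T X
C_zz = Z @ Z.T / Z.shape[1]                  # diagonal by construction, C_zz = D
```

The final line verifies the key property from Equation (2): the covariance of the projected data is diagonal, with the eigenvalues on its diagonal.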

1.2 - PCA Using Singular Value Decomposition

We don't need to use the sample covariance matrix to get the largest principal components. Define \bY = \frac{1}{\sqrt{n}}\bX^T; as a result, each column of \bY has zero mean. From linear algebra we can decompose \bY into three matrices

(4)   \begin{equation*}  \bY = \bU \bS \bV^T  \end{equation*}

where \bU is n \times k, \bS is k \times k, and \bV is m \times k, for some k; k is called the rank of the matrix. If we look carefully at the product \bY^T\bY:

(5)   \begin{align*} \bY^T\bY &= \frac{1}{n} \bX \bX^T = \bC_{\bx\bx} \\ &= \bV \bS^2 \bV^T \end{align*}

we see that the columns of \bV are the eigenvectors of \bY^T\bY, which is exactly \bC_{\bx\bx}. Therefore, \bV^T can serve as the projection matrix used to compute \bZ: \bZ = \bV^T \bX.

Singular value decomposition is often preferred over eigendecomposition of the covariance matrix because explicitly forming the covariance matrix is itself a source of numerical error. With a dataset this large, the SVD route is much more robust to errors arising from the dynamic range of the numbers and from finite-precision arithmetic.
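As a quick sketch (again on random data we made up, not the post's code), the SVD route skips the covariance matrix entirely yet produces the same diagonalization:

```python
import numpy as np

# Illustration on random data: PCA via SVD, without forming the covariance matrix
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 200))            # m = 5 features, n = 200 observations
X = X - X.mean(axis=1, keepdims=True)        # features must have zero mean

Y = X.T / np.sqrt(X.shape[1])                # Y = X^T / sqrt(n)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

Z = Vt @ X                                   # same projection as Z = V^T X
```

Here the squared singular values s**2 play the role of the eigenvalues of \bC_{\bx\bx}, as Equation (5) predicts.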

2 - Toy Example

2.1 - Covariance matrix

Let's start with a toy example \bX = \left[ \begin{array}{ccc} 0 & 2 & 4 \\ 1 & 0 & 0 \\  0 & 3 & 0 \\ 7 & 4 & 6 \\ 0 & 5 & 7 \end{array} \right]
where each column is an observation of a five-dimensional random variable. Let's compute the sample covariance matrix of \bX:

(6)   \begin{align*}  \bC_{\bx\bx} = \frac{1}{3} \left( \bX - \boldsymbol{\mu} \right) \left( \bX -  \boldsymbol{\mu}  \right)^T \end{align*}

where the five-dimensional vector \boldsymbol{\mu} \triangleq \left[ \begin{array}{ccccc} 2 & 0.33 & 1 & 5.66 & 4 \end{array} \right]^T is the mean across all observations. Hence the covariance matrix is:

(7)   \begin{align*} \bC_{\bx\bx} = \left[ \begin{array}{ccccc} 2.66 & -0.66 & 0 & -0.66 & 4.66 \\ -0.66 & 0.22 & -0.33 & 0.44 & -1.33 \\ 0 & -0.33 & 2 & -1.66 & 1 \\ -0.66 & 0.44 & -1.66 & 1.55 & -2 \\ 4.66 & -1.33 & 1 & -2 & 8.66 \end{array} \right] \end{align*}

and its eigenvalue decomposition is:

(8)   \begin{equation*}   \bC_{\bx\bx} = \bE \bD \bE^T  \end{equation*}

where \bE = \left[ \begin{array}{cc} -0.44 & -0.28 \\ 0.13 & -0.05 \\ -0.12 & 0.76 \\ 0.21 & -0.56 \\ -0.84 & -0.11 \end{array} \right] and \bD = \left[ \begin{array}{cc} 12 & 0 \\ 0 & 3.1 \end{array} \right].

There are 2 eigenvectors because the centered data matrix \bX - \boldsymbol{\mu} has rank 2. Each column of \bE is an eigenvector, and the columns are ordered by the magnitude of the eigenvalues on the diagonal of \bD. If we want to represent \bX in 2-dimensional space, all we need to do is multiply \bX by \bE^T:

(9)   \begin{align*} \bZ &= \bP \bX \\ &= \bE^T \bX \\ &= \left[ \begin{array}{ccc} 1.64 & -4.64 & -6.42 \\ -4.01 & -1.11 & -5.33 \end{array} \right]. \end{align*}

If you want your output to have zero mean across columns, you can multiply \bE^T by \bX - \boldsymbol{\mu}:

(10)   \begin{align*} \bZ_{{zeromean}} &= \bE^T \left(\bX - \boldsymbol{\mu} \right) \\ &= \left[ \begin{array}{ccc} 4.79 & -1.5 & -3.28 \\ -0.52 & 2.37 & -1.84 \end{array} \right]. \end{align*}
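These toy-example numbers are easy to check. The following NumPy sketch (an illustration we added, not part of the original post) recomputes the covariance eigenvalues:

```python
import numpy as np

# Toy example from section 2: 5 features, 3 observations
X = np.array([[0, 2, 4],
              [1, 0, 0],
              [0, 3, 0],
              [7, 4, 6],
              [0, 5, 7]], dtype=float)

Xc = X - X.mean(axis=1, keepdims=True)     # subtract the mean vector mu
C = Xc @ Xc.T / X.shape[1]                 # sample covariance, n = 3

eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]
# Two nonzero eigenvalues, approximately 12 and 3.1, matching the diagonal of D;
# the remaining three are numerically zero because the centered data has rank 2.
```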

2.2 - Singular Value Decomposition (SVD)

Following our notation in section 1.2 (which assumes the features have been centered by subtracting \boldsymbol{\mu}), the matrix \bY is:

(11)   \begin{align*} \bY &= \frac{1}{\sqrt{n}} \left( \bX - \boldsymbol{\mu} \right)^T \\ &= \frac{1}{\sqrt{3}} \left[ \begin{array}{ccccc} -2 & 0.66 & -1 & 1.33 & -4 \\ 0 & -0.33 & 2 & -1.66 & 1 \\ 2 & -0.33 & -1 & 0.33 & 3 \end{array} \right]. \end{align*}

Computing the SVD yields:

(12)   \begin{equation*} \bY = \bU \bS \bV^T  \end{equation*}

where \bU = \left[ \begin{array}{cc} 0.79 & -0.17 \\ -0.25 & 0.77 \\ -0.54 & -0.60 \end{array} \right], \bS = \left[ \begin{array}{cc} 3.46 & 0 \\ 0 & 1.76 \end{array} \right] and \bV = \left[ \begin{array}{cc} -0.44 & -0.28 \\ 0.13 & -0.05 \\ -0.12 & 0.76 \\ 0.21 & -0.56 \\ -0.84 & -0.11 \end{array} \right].

Note that matrices \bV and \bE are equivalent, as discussed in section 1.2, and that the squared singular values recover the eigenvalues: 3.46^2 \approx 12 and 1.76^2 \approx 3.1.
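The same check works for the SVD route; a short NumPy sketch (ours, not the post's code) recovers the singular values above:

```python
import numpy as np

# Toy example from section 2, SVD route
X = np.array([[0, 2, 4],
              [1, 0, 0],
              [0, 3, 0],
              [7, 4, 6],
              [0, 5, 7]], dtype=float)

Y = (X - X.mean(axis=1, keepdims=True)).T / np.sqrt(3)   # Y = (X - mu)^T / sqrt(n)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
# s[:2] is approximately [3.46, 1.76]; squaring recovers the eigenvalues of C_xx
```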

3 - Implementing PCA

3.1 - Using scikit-learn

The diagram in Figure 1 demonstrates PCA using scikit-learn. The results are consistent with the theoretical results we showed in section 2, except for the signs of the components. This is not an issue, because flipping a component's sign does not change the direction it captures.

Demonstration of PCA in sklearn

Figure 1: Demonstration of PCA in sklearn.
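Since Figure 1 is an image, here is a minimal sketch of what it demonstrates, using scikit-learn's PCA on the toy matrix from section 2 (our reconstruction, not the post's exact code):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy example from section 2
X = np.array([[0, 2, 4],
              [1, 0, 0],
              [0, 3, 0],
              [7, 4, 6],
              [0, 5, 7]], dtype=float)

# scikit-learn expects observations in rows, so we pass X^T (3 samples, 5 features)
pca = PCA(n_components=2)
Z = pca.fit_transform(X.T)   # corresponds to Z_zeromean in equation (10), up to sign
```

Because fit_transform centers the input before projecting, the result matches \bZ_{zeromean} (equation 10) rather than \bZ (equation 9), up to the sign of each component.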

3.2 - Using PySpark

As discussed before, we are using large datasets. On a single computer, scikit-learn will crash trying to compute PCA on datasets such as these. As a result, we look to PySpark to distribute the computation of PCA.

3.2.1 - Method 1: Spark's ML Package

Figure 2 shows PCA in PySpark using Spark's ML package. The transformed matrix looks different from sklearn's result. This is because sklearn subtracts the mean of the input to make sure that the output has zero mean. The PCA module in PySpark, however, applies the transformation to the original input. In short, sklearn computes \bZ_{zeromean} (equation 10), while PySpark computes \bZ (equation 9).

Figure 2: Demonstration of PCA in PySpark using PCA module in ML

Figure 2: Demonstration of PCA in PySpark using PCA module in ML.

You might have noticed that an identity matrix was necessary to pull out the features. Spark's PCA module exposes limited outputs, so to see the components themselves we had to apply a dummy (identity) transformation to the features.

3.2.2 - Method 2: SVD through Wrapping Java/Scala

In Spark 1.6.0 we can compute the SVD using modules from Spark's mllib package. However, SVD in Spark is implemented only in Scala and Java. We resorted to writing PySpark wrappers for those SVD classes, building on the functions written by Elias Abou Haydar.[1]

from pyspark.mllib.common import callMLlibFunc, JavaModelWrapper
from pyspark.mllib.linalg.distributed import RowMatrix
from pyspark.mllib.linalg import _convert_to_vector, Matrix, DenseMatrix
from pyspark.ml.feature import StandardScaler

class RowMatrix_new(RowMatrix):
    def multiply(self, matrix):
        """
        Multiplies the given RowMatrix with another matrix.
        :param matrix: Matrix to multiply with.
        :returns: RowMatrix
        >>> rm = RowMatrix(sc.parallelize([[0, 1], [2, 3]]))
        >>> rm.multiply(DenseMatrix(2, 2, [0, 2, 1, 3])).rows.collect()
        [DenseVector([2.0, 3.0]), DenseVector([6.0, 11.0])]
        """
        if not isinstance(matrix, DenseMatrix):
            raise ValueError("Only multiplication with DenseMatrix "
                             "is supported.")
        j_model = self._java_matrix_wrapper.call("multiply", matrix)
        return RowMatrix_new(j_model)

class SVD(JavaModelWrapper):
    """Wrapper around the SVD scala case class"""
    @property
    def U(self):
        """Returns a RowMatrix whose columns are the left singular vectors of the SVD if computeU was set to be True."""
        u = self.call("U")
        if u is not None:
            return RowMatrix(u)

    @property
    def s(self):
        """Returns a DenseVector with singular values in descending order."""
        return self.call("s")

    @property
    def V(self):
        """Returns a DenseMatrix whose columns are the right singular vectors of the SVD."""
        return self.call("V")
def computeSVD(row_matrix, k, computeU=False, rCond=1e-9):
    """
    Computes the singular value decomposition of the RowMatrix.
    The given row matrix A of dimension (m X n) is decomposed into U * s * V' where
    * s: DenseVector consisting of square root of the eigenvalues (singular values) in descending order.
    * U: (m X k) (left singular vectors) is a RowMatrix whose columns are the eigenvectors of (A X A')
    * V: (n X k) (right singular vectors) is a Matrix whose columns are the eigenvectors of (A' X A)
    :param k: number of singular values to keep. We might return less than k if there are numerically zero singular values.
    :param computeU: Whether or not to compute U. If set to be True, then U is computed by A * V * sigma^-1
    :param rCond: the reciprocal condition number. All singular values smaller than rCond * sigma(0) are treated as zero, where sigma(0) is the largest singular value.
    :returns: SVD object
    """
    java_model = row_matrix._java_matrix_wrapper.call("computeSVD", int(k), bool(computeU), float(rCond))
    return SVD(java_model)

def computePCA_sergul(input, numReducedDim):
    """
    Computes PCA for a given input using SVD.
    :param input: A matrix of dimension num_of_observation x num_of_features
    :param numReducedDim: Dimension of reduced space
    :return: components, projection and singular values
    """
    df = sqlContext.createDataFrame(input, ["features"])
    standardizer = StandardScaler(withMean=True, withStd=False,
                                  inputCol="features", outputCol="std_features")
    model_normalizer = standardizer.fit(df)
    df_normalized = model_normalizer.transform(df)
    pca_features = df_normalized.select("std_features").rdd.map(lambda row: row[0])
    mat = RowMatrix_new(pca_features)
    svd = computeSVD(mat, numReducedDim, True)
    components = svd.V.toArray()  # dim_of_data x reduced_dim
    projection = RowMatrix_new(pca_features).multiply(svd.V)
    singularValues = svd.s.toArray()
    return components, projection, singularValues

We can apply this framework to the same input from the toy example as shown in Figure 3.

Demonstration of PCA in PySpark using PySpark wrappers for SVD implementation

Figure 3: Demonstration of PCA in PySpark using PySpark wrappers for SVD implementation.

The results are consistent with the theoretical results in section 2.

4 - Application: Functional Magnetic Resonance Imaging

We applied PCA to a neuroimaging data set to explore neuronal signatures in the human brain. We used resting state functional magnetic resonance imaging data (fMRI) for 900 subjects, collected as part of the Human Connectome Project (HCP).[3]

The preprocessing of this data set is beyond the scope of this article, but you can see the dimensions involved in creating the dataset in Figure 4.

The number of time points for each subject is 1,200 and the number of voxels is 13,000. Since there are 900 subjects, we need to compute the SVD of a matrix of size 1,080,000 \times 13,000 (the matrix \bX^T in our notation).

Concatenation of 900 subjects for PCA computation.

Figure 4: Concatenation of 900 subjects for PCA computation.

Computing the SVD of a 1,080,000 \times 13,000 matrix is computationally expensive. Therefore, we backed PySpark with commodity hardware provided by Amazon Web Services and accessed through the Domino data science platform.
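A rough back-of-the-envelope estimate (assuming 8-byte double-precision entries, an assumption on our part) shows why this matrix cannot even be held in memory on a single commodity machine:

```python
# Rough storage estimate for the concatenated fMRI matrix, assuming float64 entries
rows = 900 * 1200                 # 900 subjects x 1,200 time points each
cols = 13000                      # voxels
total_gb = rows * cols * 8 / 1e9  # 8 bytes per float64 entry
print(round(total_gb))            # roughly 112 GB just to store the matrix
```

At roughly 112 GB, the raw matrix alone far exceeds the 30 GB of RAM on the hardware used, before any SVD working memory is counted.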

PCA on the entire dataset took 27 hours using hardware with 16 cores and 30GB RAM. Figure 5 shows the top four components from four different views.

While interpreting such images is hard, two conclusions are of note:
1. The maps are roughly symmetric, in line with what is expected for resting state networks of the brain.
2. The third component has hotspots at the dorsal medial prefrontal cortex and the precuneus, two key components of the default mode network.

Top four principal components from four different views

Figure 5: Top four principal components from four different views.


  1. Elias Abou Haydar. 2015.
  2. Jonathon Shlens. A tutorial on principal component analysis. arXiv preprint, 2014.
  3. Stephen M Smith, Christian F Beckmann, Jesper Andersson, Edward J Auerbach, Janine Bijsterbosch, Gwenaëlle Douaud, Eugene Duff, David A Feinberg, Ludovica Griffanti, Michael P Harms, et al. Resting-state fMRI in the Human Connectome Project. NeuroImage, 80:144–168, 2013.

Banner image titled "DSC_2236" by Edward Blake. Licensed under CC BY 2.0
LaTeX rendering by QuickLaTeX.