FG-86

Parallel Analysis of EEG Data with Hadoop on FutureGrid

Enabling Energy-Efficient Analysis of Massive Neural Signals Using GPGPU

[Chen] Chen, D., L. Wang, S. Wang, M. Xiong, G. von Laszewski, and X. Li, "Enabling Energy-Efficient Analysis of Massive Neural Signals Using GPGPU", Proceedings of the 2010 IEEE/ACM Int'l Conference on Green Computing and Communications & Int'l Conference on Cyber, Physical and Social Computing, Washington, DC, USA, IEEE Computer Society, pp. 147–154, 2010.

Project Details

Project Lead
Lizhe Wang 
Project Manager
Rewati Ovalekar 
Project Members
Lizhe Wang, Gregor von Laszewski, Geoffrey Fox  
Institution
Indiana University, Pervasive Technology Institute  
Discipline
Computer Science (401) 

Abstract

Background - Science: Neural signals are signatures of the neural activity generated by the human brain, a highly complex, nonlinear, and non-stationary system. Analysis of neural signals such as EEG data has long been an active research topic [1], since it is vital to the detection, diagnosis, and treatment of brain disorders and related diseases [2]. Experimental techniques for recording neural activity have advanced rapidly, and it is now possible to implant several hundred electrodes to study the activity of many neurons or neural networks simultaneously. As a result, the density and spatial scale of neural signals have been growing exponentially, driven by the rapidly increasing number of channels and sampling frequencies. In short, analysis of EEG signals is a data-intensive and compute-intensive application [4]. The EEMD procedure has multiple levels of parallelism (see Figure 1; a Hadoop sketch of the trial level follows this list):

1. Epoch level: The EEMD procedure for an epoch of the time series is treated as a whole at this level. Each epoch is fed individually to its own instance of the EEMD procedure, and the output of one instance is never consumed by another.

2. Trial level: A trial (a noise-added epoch) is treated as a whole at this level. Given a number of trials per EEMD instance, the decomposition of each trial is always performed independently of the others.

3. Channel level: It is also possible to parallelize the application at the data-channel level, so that processing a whole channel of data is treated as an individual task. The grain of parallelism at this level is extremely coarse.
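
The trial-level parallelism above maps naturally onto Hadoop MapReduce: each map task can decompose one noise-added trial independently, and a reduce step can average the per-trial results for each epoch (the ensemble step of EEMD). The following is only a minimal sketch under assumptions not taken from the project: the input line layout ("epochId <TAB> trialId <TAB> comma-separated samples"), the class names, and the eemd() helper (a placeholder for a real decomposition routine) are all illustrative, not the project's actual code.

// Minimal sketch of trial-level EEMD parallelism on Hadoop MapReduce.
// Assumptions (not from the project source): one input line per trial,
// formatted as "epochId <TAB> trialId <TAB> comma-separated samples";
// eemd() is a placeholder for the real decomposition of one trial.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class EemdTrialJob {

  public static class TrialMapper
      extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // One input line = one noise-added trial; decompose it independently.
      String[] fields = value.toString().split("\t");
      String epochId = fields[0];
      double[] samples = parseSamples(fields[2]);
      double[] result = eemd(samples);            // placeholder decomposition
      context.write(new Text(epochId), new Text(serialize(result)));
    }
  }

  public static class EpochReducer
      extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text epochId, Iterable<Text> trialResults, Context context)
        throws IOException, InterruptedException {
      // Average the per-trial results for this epoch (ensemble step of EEMD).
      double[] sum = null;
      int n = 0;
      for (Text t : trialResults) {
        double[] r = parseSamples(t.toString());
        if (sum == null) sum = new double[r.length];
        for (int i = 0; i < r.length; i++) sum[i] += r[i];
        n++;
      }
      if (sum == null) return;
      for (int i = 0; i < sum.length; i++) sum[i] /= n;
      context.write(epochId, new Text(serialize(sum)));
    }
  }

  // --- placeholders standing in for the real signal-processing code ---
  static double[] eemd(double[] x) { return x; }  // hypothetical per-trial decomposition
  static double[] parseSamples(String csv) {
    String[] p = csv.split(",");
    double[] out = new double[p.length];
    for (int i = 0; i < p.length; i++) out[i] = Double.parseDouble(p[i]);
    return out;
  }
  static String serialize(double[] x) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < x.length; i++) {
      if (i > 0) sb.append(',');
      sb.append(x[i]);
    }
    return sb.toString();
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "eemd-trials");
    job.setJarByClass(EemdTrialJob.class);
    job.setMapperClass(TrialMapper.class);
    job.setReducerClass(EpochReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Because each trial is decomposed in its own map task and only the final averaging needs to see all trials of an epoch, this layout keeps the map phase embarrassingly parallel, which is the property the trial level of EEMD exposes.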

Intellectual Merit

(Provided Later)

Broader Impacts

This is an independent study project at Indiana University; the goal is to develop code for the EEG analysis using Hadoop.

Scale of Use

5 days a week
