Archive for the ‘brain’ Category

A few recent NIH grants awarded related to NIRS

April 25th, 2017

I received the email below from Stork, an easy-to-use app that alerts me to new scientific publications and NIH grants based on my own keywords. It lists a few recently awarded grants in the NIRS field; each grant appears under the keyword that matched it.

Dear Xu,

Stork has brought you 15 new publications.

David Boas

Awarded Grants
Multifunctional, GBM-activatable nanocarriers for image-guided photochemotherapy by Huang-chiao Huang (2017) NIH Grants Awarded (Amount: $179,035) Duration: 2017-04-01 to 2018-03-31

fmri nirs

Awarded Grants
Quantifying the Fluctuations of Intrinsic Brain Activity in Healthy and Patient Populations by Manish Saggar (2017) NIH Grants Awarded (Amount: $249,000) Duration: 2017-03-20 to 2018-02-28

fmri resting state parent child

Awarded Grants
NEUROIMAGING IN EARLY ONSET DEPRESSION: LONGITUDINAL ASSESSMENT OF BRAIN CHANGES by Deanna M Barch (2017) NIH Grants Awarded (Amount: $768,901) Duration: 2017-04-01 to 2018-03-31

hyperscanning

Awarded Grants
Brain-to-brain dynamical Coupling: A New framework for the communication of social knowledge by Uri Hasson (2017) NIH Grants Awarded (Amount: $524,425) Duration: 2017-04-01 to 2018-03-31

nirs brain

Awarded Grants
The Neurodevelopmental MRI Database by John E Richards (2017) NIH Grants Awarded (Amount: $61,625) Duration: 2017-04-01 to 2018-03-31

nirs breast

Awarded Grants
Longitudinal Assessment of Tumor Hypoxia in vivo Using Near-Infrared Spectroscopy by Bing Yu (2017) NIH Grants Awarded (Amount: $399,062) Duration: 2017-01-01 to 2019-01-31

Russell Poldrack, stanford

Awarded Grants
Elucidate the Mechanisms Underlying Inhibition Induced Devaluation by Patrick Graham Bissett (2017) NIH Grants Awarded (Amount: $59,466) Duration: 2017-04-01 to 2018-03-31

Author: Xu Cui Categories: nirs Tags:

RA and Postdoc position at Stanford

April 19th, 2017

Brain Dynamics Lab (bdl.stanford.edu) is a computational neuropsychiatry lab dedicated to developing computational methods for a better understanding of individual differences in brain functioning in healthy and patient populations.

Current projects include: [1] characterizing spatiotemporal dynamics in brain activity to develop person- and disorder-centric biomarkers; [2] understanding the role of brain dynamics in optimized learning and performance in individual and team settings; and [3] developing methods that use network science (or graph theory), connectomics, machine learning, and signal processing for a better understanding of brain dynamics.

To apply for either position, please email your CV, the names of three references, and a cover letter to saggar@stanford.edu

——RA position——
Applications are currently being invited for a Research Assistant position in the Brain Dynamics Lab @ Stanford, under the direction of Dr. Manish Saggar.

Responsibilities for this position include:
Developing neuroimaging experiments; collecting, processing, and analyzing neuroimaging data. Imaging modalities include functional and structural MRI, EEG, and fNIRS.

Job Qualifications:
[1] Bachelor’s degree in Computational Neuroscience, Cognitive Science, Computer Science, or another related scientific field.
[2] Proficiency in programming in Matlab, Python, and other related computing languages.
[3] Experience with neuroimaging data collection (fMRI and/or fNIRS).
[4] Experience with one or more MRI/EEG/NIRS data analysis packages (e.g., AFNI, FSL, EEGLAB, HOMER etc.) is preferred, but not required.
[5] Ability to work effectively in a very collaborative and multidisciplinary environment.

—— Postdoc position ——
A full-time postdoctoral position is available in the Brain Dynamics Lab @ Stanford, under the direction of Dr. Manish Saggar.

The postdoctoral fellow will lead computational neuroimaging projects involving multimodal neuroimaging data (EEG+fMRI/fNIRS) to understand the role of fluctuations in intrinsic brain activity in healthy and patient populations. The fellow will participate in collecting and analyzing multimodal neuroimaging data, training and supervising students and research assistants, preparing manuscripts for publication, as well as assisting with grant applications. The position provides a unique training opportunity in computational modeling, neuroimaging, network science and machine learning.

Job Qualifications:
[1] PhD (or MD/PhD) or equivalent in computational neuroscience, computer science, psychology, statistics, bioengineering or a related field.
[2] Strong writing skills demonstrated by peer-reviewed publications.
[3] Proficiency in programming in Matlab, Python, and other related computing languages.
[4] Experience with one or more MRI/EEG/NIRS data analysis packages (e.g., AFNI, FSL, EEGLAB, HOMER etc.) is preferred, but not required.
[5] Familiarity with advanced data analysis methods, multivariate statistics, machine learning, data mining and visualization, and cloud computing is a plus.

— — — —

Author: Xu Cui Categories: brain, life Tags:

Updated loadHitachiText.m

March 16th, 2017

Some labs have been using our script readHitachiData.m to load NIRS data from Hitachi ETG machines. We recently found that some exported MES data contain abnormal timestamps. For example, a timestamp should look like

16:49:25.406

But for some rows (although rarely), the time looks like this (note the trailing character):

16:49:25.406E

This will cause our script to choke. We just fixed the issue; to get the fix, replace loadHitachiText.m with the new version, which can be found here.
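
The core of the fix is simply to strip any trailing non-digit characters from a timestamp before parsing it. Here is a minimal sketch of the idea (the actual code in loadHitachiText.m handles more cases than this):

ts = '16:49:25.406E';               % abnormal timestamp from a MES file
ts = regexprep(ts, '\D+$', '');     % strip trailing non-digit characters
t  = datenum(ts, 'HH:MM:SS.FFF');   % now parses cleanly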

Author: Xu Cui Categories: brain, nirs Tags:

Learning deep learning (project 2, image classification)

March 7th, 2017

In this class project, I built a network to classify images in the CIFAR-10 dataset. This dataset is freely available.

The dataset contains 60K color images (32×32 pixels) in 10 classes, with 6K images per class.

Here are the classes in the dataset, as well as 10 random images from each:

airplane
automobile
bird
cat
deer
dog
frog
horse
ship
truck

You can imagine that it is not possible to hand-write all the rules needed to classify these images, so we have to write a program that can learn.

The neural network I created contains two hidden layers. The first is a convolutional layer with max pooling, followed by dropout of 70% of the connections. The second is a fully connected layer with 384 neurons.

# Note: this cell assumes TensorFlow 1.x and the helper functions defined
# earlier in the notebook (conv2d_maxpool, flatten, fully_conn, output).
import tensorflow as tf

def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    model = conv2d_maxpool(x, conv_num_outputs=18, conv_ksize=(4,4), conv_strides=(1,1), pool_ksize=(8,8), pool_strides=(1,1))
    model = tf.nn.dropout(model, keep_prob)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    model = flatten(model)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    model = fully_conn(model,384)

    model = tf.nn.dropout(model, keep_prob)

    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    model = output(model,10)

    # TODO: return output
    return model

Then I trained this network on an Amazon AWS g2.2xlarge instance, which has a GPU that is much faster for deep learning than a CPU. In a simple experiment I found the GPU to be at least 3 times faster:

all layers on GPU: 14 seconds to run 4 epochs
conv layer on CPU, the rest on GPU: 36 seconds to run 4 epochs

This is admittedly a very crude comparison, but the GPU is clearly much faster than the CPU (at least on the g2.2xlarge instance, which costs $0.65/hour).

Eventually I got ~70% accuracy on the test data, much better than random guessing (10%). Training the model took ~30 minutes.

You can find my entire code at:
http://www.alivelearn.net/deeplearning/dlnd_image_classification_submission2.html

Author: Xu Cui Categories: brain, deep learning Tags:

Chin rest (head holder) device for NIRS

January 30th, 2017

When we set up our NIRS lab back in 2008, we needed a device to prevent participants’ head movement during the experiment and during the digitizer measurement. Even though NIRS is tolerant of head motion, we still wanted to minimize it. During the digitizer measurement phase, the probe pokes the participant’s head, resulting in inaccurate probe positions, so we definitely needed something to hold the head still.

In addition, we feared that metal might interfere with the magnetic positioning system (digitizer), so we wanted the device to be all-plastic.

We contacted Ben Krasnow, who has been very helpful in creating MRI-compatible devices (e.g., a keyboard) for the Lucas Center @ Stanford in the past. He suggested we use the University of Houston’s “headspot”.

Headspot

Ben then replaced the metal part with plastics.

We have been using it for almost 10 years! It works great, as expected, and the height is adjustable. I recently checked the price: it is now $500, slightly higher than in 2008 ($415). Ben charged $325 to replace the metal parts; the total (with tax) was $774.

headspot webpage

Author: Xu Cui Categories: brain, nirs Tags:

We contributed to MatLab (wavelet toolbox)

January 25th, 2017

We use MatLab a lot! It’s the major program for brain imaging data analysis in our lab. However, I never thought we could actually contribute to MatLab’s development.

In MatLab 2016, there is a toolbox called the Wavelet Toolbox. If you read its documentation on wavelet coherence (link below), you will find that they used our NIRS data as an example:

https://www.mathworks.com/help/wavelet/examples/compare-time-frequency-content-in-signals-with-wavelet-coherence.html

Back on 2015/4/9, Wayne King from MathWorks contacted us, saying that they were developing the wavelet toolbox and asking if we could share some data as an example. We did. I’m glad it’s part of the package now.

The following section is from the page above:


Find Coherent Oscillations in Brain Activity

In the previous examples, it was natural to view one time series as influencing the other. In these cases, examining the lead-lag relationship between the data is informative. In other cases, it is more natural to examine the coherence alone.

For an example, consider near-infrared spectroscopy (NIRS) data obtained in two human subjects. NIRS measures brain activity by exploiting the different absorption characteristics of oxygenated and deoxygenated hemoglobin. The data is taken from Cui, Bryant, & Reiss (2012) and was kindly provided by the authors for this example. The recording site was the superior frontal cortex for both subjects. The data is sampled at 10 Hz. In the experiment, the subjects alternatively cooperated and competed on a task. The period of the task was approximately 7.5 seconds.

load NIRSData;
figure
plot(tm,NIRSData(:,1))
hold on
plot(tm,NIRSData(:,2),'r')
legend('Subject 1','Subject 2','Location','NorthWest')
xlabel('Seconds')
title('NIRS Data')
grid on;
hold off;

Obtain the wavelet coherence as a function of time and frequency. You can use wcoherence to output the wavelet coherence, cross-spectrum, scale-to-frequency, or scale-to-period conversions, as well as the cone of influence. In this example, the helper function helperPlotCoherence packages some useful commands for plotting the outputs of wcoherence.

[wcoh,~,F,coi] = wcoherence(NIRSData(:,1),NIRSData(:,2),10,'numscales',16);
helperPlotCoherence(wcoh,tm,F,coi,'Seconds','Hz');

In the plot, you see a region of strong coherence throughout the data collection period around 1 Hz. This results from the cardiac rhythms of the two subjects. Additionally, you see regions of strong coherence around 0.13 Hz. This represents coherent oscillations in the subjects’ brains induced by the task. If it is more natural to view the wavelet coherence in terms of periods rather than frequencies, you can use the ‘dt’ option and input the sampling interval. With the ‘dt’ option, wcoherence provides scale-to-period conversions.

[wcoh,~,P,coi] = wcoherence(NIRSData(:,1),NIRSData(:,2),seconds(0.1),...
    'numscales',16);
helperPlotCoherence(wcoh,tm,seconds(P),seconds(coi),...
    'Time (secs)','Periods (Seconds)');

Again, note the coherent oscillations corresponding to the subjects’ cardiac activity occurring throughout the recordings with a period of approximately one second. The task-related activity is also apparent with a period of approximately 8 seconds. Consult Cui, Bryant, & Reiss (2012) for a more detailed wavelet analysis of this data.

Conclusions

In this example you learned how to use wavelet coherence to look for time-localized coherent oscillatory behavior in two time series. For nonstationary signals, it is often more informative if you have a measure of coherence that provides simultaneous time and frequency (period) information. The relative phase information obtained from the wavelet cross-spectrum can be informative when one time series directly affects oscillations in the other.

References

Cui, X., Bryant, D.M., and Reiss, A.L. “NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation”, NeuroImage, 59(3), pp. 2430-2437, 2012.

Grinsted, A., Moore, J.C., and Jevrejeva, S. “Application of the cross wavelet transform and wavelet coherence to geophysical time series”, Nonlin. Processes Geophys., 11, pp. 561-566, 2004.

Maraun, D., Kurths, J. and Holschneider, M. “Nonstationary Gaussian processes in wavelet domain: Synthesis, estimation and significance testing”, Phys. Rev. E 75, pp. 016707(1)-016707(14), 2007.

Torrence, C. and Webster, P. “Interdecadal changes in the ENSO-Monsoon system,” J. Clim., 12, pp. 2679-2690, 1999.

Author: Xu Cui Categories: brain, matlab, nirs, programming Tags:

Communications between two MatLabs (1) over file

October 3rd, 2016

Ref to: Communications between two MatLabs (2): over socket

It’s common for two MatLab programs to need to communicate. For instance, one program collects brain imaging data but does not display it, while the other displays the data. (Another case is at http://www.alivelearn.net/?p=1265.) Sometimes it is not practical to merge the two programs (e.g., to keep the code clean). In this case we can run two MatLabs simultaneously: one keeps saving the data to a file, and the other keeps reading the file.

Here I played with such a setup and found that the two programs communicate well, with a small delay (small enough for hemodynamic responses). Check out the video below:

writeSomething.m

for ii=1:100
    save('data','ii');
    disp(['write ' num2str(ii)])
    pause(1)
end
readSomething.m

last_ii = 0;
while(1)
    try
        % load may fail if the writer happens to be saving at this moment;
        % the bare try/end simply skips this cycle and retries
        load data
        if(ii ~= last_ii)
            disp(['get data. i=' num2str(ii)])
        end
        last_ii = ii;
    end
    pause(0.1)
end

Caveat: writing to and reading from disk is slow, so if your experiment requires real-time communication with very low latency (say <1 ms), this method may not work. The amount of data written and read each time should be very small, and the write frequency should be low as well. The file also needs to be on your local hard drive rather than a network drive.
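
If you want a rough idea of how large the delay is on your machine, you can time the save/load round trip. A minimal sketch, using the same data.mat file as in the example above:

% Rough estimate of the per-cycle save/load cost on your machine:
tic
for ii = 1:100
    save('data','ii');   % write
    load data            % read it back
end
avg_cycle = toc/100      % average seconds per write+read cycle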

———- Comments ———–
Paul Mazaika from Stanford:
Cool piece of code! There may be a way to do this with one umbrella Matlab program that calls both components as subroutines. The potential advantage is that one program would keep the data in memory, not on disk, which can support rapidly updating information. For high speeds, it may be better to update the graphical display only occasionally, as it can otherwise become a processing bottleneck.
-Paul

Aaron Piccirilli from Stanford:
There is, sort of! I think Xu’s little nugget is probably the best choice for many applications, but if speed is an especially big concern, there are a couple of options I’ve come across that maintain some sort of shared memory.

Perhaps the easiest is to use sockets to communicate data, via UDP or TCP/IP, just like you use over the internet, but locally. You write some data to a socket in one program, and read it from that same socket in another program. This keeps all of your data in memory as opposed to writing it to disk, but there is definitely some overhead for housekeeping and to move the data from one program’s memory into the operating system’s memory then back into the other program’s memory. An added bonus here: you can communicate between different languages. If you have a logging function written in Python and a visualization program in MATLAB, they can pretty easily communicate with each other via sockets.

MATLAB doesn’t have explicit parallel computing built-in like many other languages, sadly, but we all have access here to the Parallel Computing Toolbox, which is another option for some more heavy-duty parallel processing where you have a problem you can easily distribute to multiple workers.

Finally, true shared memory might be more trouble than it’s worth for most applications, as you then have to deal with potential race conditions of accessing the same resource at the same time.

Aaron
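
As a concrete illustration of the socket approach Aaron describes, here is a minimal sketch assuming MATLAB’s udp interface from the Instrument Control Toolbox (the port numbers are arbitrary; the follow-up post linked below covers sockets in detail):

% --- sender (first MatLab) ---
u = udp('127.0.0.1', 9090);              % send to local port 9090
fopen(u);
fwrite(u, typecast(3.14, 'uint8'));      % one double sent as 8 raw bytes
fclose(u); delete(u);

% --- receiver (second MatLab) ---
u = udp('127.0.0.1', 9091, 'LocalPort', 9090);  % listen on port 9090
fopen(u);
bytes = fread(u, 8, 'uint8');            % blocks until the 8 bytes arrive
value = typecast(uint8(bytes'), 'double')
fclose(u); delete(u);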

More on this topic: Please continue to read Communications between two MatLabs (2): over socket

Author: Xu Cui Categories: brain, matlab, nirs, programming Tags:

Stork is my best research assistant (2): grant alerts

August 11th, 2016

Stork

1. Does my advisor have funding?
2. I am looking for a postdoc position; does my future boss have enough funding to support me?
3. How much funding has been awarded to my research field (e.g., NIRS)? Who received it? What will they use it for?

Have you ever thought about questions like these? When I was a graduate student, I rarely asked such questions about “money”; money did not seem like something a “real” scientist should care about. And when I learned that my advisor spent a large part of his time applying for grants, I was even more puzzled: shouldn’t he be spending that time running experiments and writing papers?

When I started my postdoctoral research, I found myself likewise spending a lot of time applying for grants. I gradually realized that the success of my future research career would depend in large part on whether I had sufficient, stable funding. I saw colleagues who had to leave research because they lacked funding. I realized that if there were a tool that kept me informed of current grant awards, I could direct my research much more purposefully.

Stork is exactly such a tool.

I entered the following keywords into Stork: “pearl chiu” (the name of a former colleague) and “NIRS brain” (my research field). Below is the email Stork sent me:

Stork notifies me of awarded grants

With the information Stork provided, I learned who in my field received grants and what research they plan to do with the money. In fact, the third grant in the email went to my colleague Manish, for his research using NIRS to study resting-state brain circuits. I also saw that Pearl received a large grant, so I sent her a congratulatory email. Compared with Stork’s other feature, publication alerts, grant alerts let me spot trends in my field much earlier: research that has just been funded usually takes years before the related papers are published.

If you want to be the first in your field to know what new grants have been awarded, give Stork a try; you will find many surprises!

Author: Xu Cui Categories: brain, programming, stork, web, writing Tags:

A mistake in my False discovery rate (FDR) correction script

August 8th, 2016

I posted an FDR script at http://www.alivelearn.net/?p=1840. I noticed that it has a small bug: in rare cases, it classifies the most significant voxel as ‘non-significant’ while less significant voxels are marked ‘significant’.

Consider the following example:

p = [0.8147 0.9058 0.0030 0.9134 0.6324 0.0029 0.2785 0.5469 0.9575 0.9649 0.1576 0.9706 0.9572 0.4854 0.8003 0.1419 0.4218 0.9157];

The previous script would classify p(3) (= 0.0030) as significant but p(6) (= 0.0029) as non-significant, even though p(6) is the smaller p value.

Here is the updated version of the script:

function y = fdr0(p, q)
% y = fdr0(p, q)
%
% to calculate whether a pvalue survive FDR corrected q
%
% p: an array of p values. (e.g. p values for each channel)
% q: desired FDR threshold (typically 0.05)
% y: an array of the same size with p with only two possible values. 0
% means this position (channel) does not survive the threshold, 1 mean it
% survives
%
% Ref:
% Genovese et al. (2002). Thresholding of statistical maps in functional
% neuroimaging using the false discovery rate. Neuroimage, 15:870-878.
%
% Example:
%   y = fdr0(rand(10,1),0.5);
%
% Xu Cui
% 2016/3/14
%

pvalue = p;
y = 0 * p;

[sortedpvalue, sortedposition] = sort(pvalue);
v = length(sortedposition);
for ii=1:v
    if q*ii/v >= sortedpvalue(ii)
        y(sortedposition(1:ii)) = 1;
    end
end

return;
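
To verify the fix, run the updated script on the example p values above with q = 0.05; both p(3) and p(6) now survive the correction:

p = [0.8147 0.9058 0.0030 0.9134 0.6324 0.0029 0.2785 0.5469 0.9575 ...
     0.9649 0.1576 0.9706 0.9572 0.4854 0.8003 0.1419 0.4218 0.9157];
y = fdr0(p, 0.05);
find(y)   % returns 3 and 6
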
Author: Xu Cui Categories: brain, matlab, nirs, programming Tags:

OpenBCI EEG

June 27th, 2016

Manish Saggar in our lab brought something very cool: a helmet-like EEG system. He calls it “dry” EEG because it does not require gel. The design is not polished, but it’s cheap, around $800, and it does not need long wires to transmit the signal to a computer, which is pretty nice. The product is manufactured by OpenBCI, an open-source community.

Manish Saggar

I decided to give it a try and borrowed the device from Manish.

Wearing OpenBCI Biosensing Headset

The headset does not fit my head perfectly, leaving most of the channels in the front not touching my scalp. It’s also not comfortable to wear for a long time; the material is hard and exerts a lot of pressure on my head. But it’s wireless, and we can collect the data and display the signal on screen in real time. That part is cool.

To collect data, we need two pieces of software and a USB dongle.

USB dongle

First, let’s download the dongle driver, which can be found at http://www.ftdichip.com/Drivers/VCP.htm. Since I am using Windows 7, I downloaded the Windows executable setup version. You can download the file (CDM21218_Setup.zip) directly from http://www.ftdichip.com/Drivers/CDM/CDM21218_Setup.zip. Then unzip and install it.

Then plug the dongle into a USB port. Make sure you plug it in the right way; if you do, the dongle will emit a blue light. I did it wrong the first time and had to flip it over and try again.

We also need the OpenBCI software, which collects and visualizes the data in real time. You can download it at http://openbci.com/index.php/downloads. I downloaded the Windows 64-bit version.

Now it is time to play with it. First let’s open OpenBCI software.

OpenBCI

The dongle shows up as the COM3 port on my computer, so I select COM3. I also switch the headset power on, then click “START SYSTEM”.

OpenBCI

Next I click the round “head” icon to bring up the signal panel, then click “Start data stream” to begin collecting data. The figure below shows my brain waves in real time!

OpenBCI data collection

So far it’s all impressive, and the setup is easy. But the question is whether the signal is reliable. What we found is that it is highly motion-sensitive: if I move my head or blink my eyes, the signal changes. At this point I hesitate to draw any conclusions about the quality of the signal.

In the past few years a number of companies have produced consumer brain-signal recording devices; check out this list on Wikipedia. A few NIRS companies (e.g., Hitachi) are also working on consumer NIRS devices. There is no doubt that we will have reliable personal brain sensors in the near future.

Author: Xu Cui Categories: brain Tags: