Archive

Author Archive

Stork NIH (National Institutes of Health) Grant Database (Chinese-English Version)

July 26th, 2017

The US National Institutes of Health (NIH) awards about $30 billion every year for biological and medical research, funding roughly 60,000 projects. Making these data accessible, with real-time translation into Chinese, creates a valuable resource for physicians, professors, students, and other researchers. That is why we developed this NIH grant database (Chinese-English version). With it you can:

  1. Conveniently follow the latest US biomedical research
  2. Find inspiration for your own research and grant applications
  3. Find collaborators and collaboration opportunities in the US
  4. Find opportunities to study or train in the US
  5. Stay current: the database is updated monthly, and Stork will email you new grants that match your keywords

The NIH grant database is very easy to use

Step 1: Enter a keyword (Chinese keywords are supported!) and press Enter; the results appear instantly, with Chinese and English side by side.

Step 2: To read a grant closely, click its title; the abstract is also displayed in both Chinese and English.

How do I get access?

The NIH grant database (Chinese-English version) is a paid premium feature of Stork. To use it, first register a free Stork account by following the registration instructions. Once registered, you will see a description of the premium features; click the NIH Grant icon to see the payment steps.

Try the NIH grant database (Chinese-English version) now and see what it brings you!

Stork website: https://www.storkapp.me/

Author: Xu Cui Categories: programming, stork Tags:

A few recent NIH grants awarded related to NIRS (2017-07-05)

July 5th, 2017
The following email was sent to me by Stork, an easy-to-use app that alerts me to new scientific publications and NIH grants based on my own keywords. Below are a few grants awarded in the NIRS field.
David Boas

Awarded Grants
Enabling widespread use of high resolution imaging of oxygen in the brain by David A Boas (2017) NIH Grants Awarded (Amount: $288,619) Duration: 2017-07-01 to 2018-06-30

fmri nirs

Awarded Grants
Mechanisms of Interpersonal Social Communication: Dual-Brain fNIRS Investigation by Joy Hirsch (2017) NIH Grants Awarded (Amount: $416,250) Duration: 2017-06-01 to 2018-05-31

Multimodal Neuroimaging of Cigarette Smoking by Yunjie Tong (2017) NIH Grants Awarded (Amount: $137,928) Duration: 2017-06-19 to 2017-11-30

nirs brain

Awarded Grants
Neural Mechanisms for Social Interactions and Eye Contact in ASD by Joy Hirsch (2017) NIH Grants Awarded (Amount: $640,560) Duration: 2017-07-01 to 2018-06-30

FreeSurfer Development, Maintenance, and Hardening by Bruce Fischl (2017) NIH Grants Awarded (Amount: $523,203) Duration: 2017-07-01 to 2018-06-30

Coherent hemodynamics spectroscopy for cerebral autoregulation and blood flow by Sergio Fantini (2017) NIH Grants Awarded (Amount: $517,756) Duration: 2017-05-01 to 2018-04-30

Training in Drug Abuse and Brain Imaging by Scott E Lukas (2017) NIH Grants Awarded (Amount: $256,451) Duration: 2017-07-01 to 2018-06-30

Author: Xu Cui Categories: brain, nirs Tags:

Matlab figure disappears in multiple monitor setup

June 17th, 2017

I have multiple monitors attached to my laptop. Whenever I create a new figure in Matlab, the main screen flashes once and the figure is nowhere to be found.

The reason is the DefaultFigurePosition property. If you run

get(0,'DefaultFigurePosition')

you will probably find that the values fall outside your screen, and that is exactly the problem. For example, on my computer the first value (left) is negative and the second value (bottom) is too large.

The solution is to reset it with the following command:

set(0,'defaultfigureposition',[680 278 560 420])

You don't want to run the above command manually every time you start Matlab, so put it in your startup.m file, located at \toolbox\local\startup.m under your Matlab installation directory.
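If your monitor layout changes often, you can compute the position instead of hard-coding it. Below is a minimal startup.m sketch (assuming the standard 560x420 default figure size; this is an illustration, not the only fix) that centers new figures on the primary screen:

% startup.m -- keep new figures on the primary monitor
scr = get(0, 'ScreenSize');        % [left bottom width height] of the primary screen
w = 560; h = 420;                  % default figure width/height in pixels
left = max(1, (scr(3) - w)/2);     % center horizontally
bottom = max(1, (scr(4) - h)/2);   % center vertically
set(0, 'DefaultFigurePosition', [left bottom w h]);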

Author: Xu Cui Categories: matlab Tags:

Hyperscanning experiment file (matlab)

June 8th, 2017

Below is the experiment script (in Matlab) for our hyperscanning project (“NIRS-based hyperscanning reveals increased interpersonal coherence in superior frontal cortex during cooperation”). For detailed information, please refer to http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3254802

Psychtoolbox-3 is required.

hyperscancooperation.m
hyperscancompetition.m
hyperscan1player.m

Author: Xu Cui Categories: brain, matlab, nirs, programming, psychToolbox Tags:

Jobs available @ UCSF

May 30th, 2017

Posted for Fumiko Hoeft, Director of BrainLENS at UCSF:

Join us at the UCSF Hoeft Neuroscience Lab and the Precision Learning Center, a multi-campus science-of-learning initiative consisting of six University of California schools (Berkeley, Davis, Irvine, LA, Merced, SF) and Stanford.

We are expanding and hiring!

(1) 2 RESEARCH SCIENTISTS or POSTDOCS. Experts in signal processing, neuroimaging and big data analytics
(2) 2-3 RESEARCH ASSISTANTS. Interested in neuropsychological assessment (English, Spanish, Cantonese)
(3) VOLUNTEERS

UCSF is situated at the heart of San Francisco, CA, and is a premier biomedical research institution, ranked second in the world for Neuroscience and Behavior by US News.

https://lnkd.in/gKwZr38
https://lnkd.in/gCztfPZ

Author: Xu Cui Categories: life Tags:

Learning deep learning (project 5, generate new celebrity faces)

May 30th, 2017

In this class project, I used a generative adversarial network (GAN) to generate new images of faces, similar to the celebrity faces in the database.

The model is a deep convolutional network, an architecture widely used in image classification.

First, we used the MNIST database (a collection of 60,000 handwritten digits). After training, the model could generate digits similar to those in the training set. We trained it for only two epochs; I believe it would generate more realistic images with longer training.

[Figure: generated handwritten digits]

Then we trained the model on ~200,000 images of celebrity faces. Training takes much longer, but with my Nvidia 1080 Ti it is fast. Early on, after learning from just 20,000 images, the model was already generating face-like patterns. After the full 10-epoch training, it generated very clear faces.

[Figure: generated celebrity faces]

The project can be found at http://www.alivelearn.net/deeplearning/dlnd_face_generation.html
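For readers curious about the architecture, below is a minimal sketch of the generator half of a DCGAN in Keras. It is not the exact model from my project; the layer sizes and the 28x28 output (MNIST-sized) are illustrative assumptions.

from keras.models import Sequential
from keras.layers import Dense, Reshape, Conv2DTranspose, BatchNormalization, LeakyReLU

def build_generator(z_dim=100):
    # Map a random noise vector to a 28x28 grayscale image
    model = Sequential()
    model.add(Dense(7 * 7 * 128, input_dim=z_dim))
    model.add(Reshape((7, 7, 128)))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.2))
    # Upsample 7x7 -> 14x14
    model.add(Conv2DTranspose(64, kernel_size=5, strides=2, padding='same'))
    model.add(BatchNormalization())
    model.add(LeakyReLU(alpha=0.2))
    # Upsample 14x14 -> 28x28; tanh keeps pixel values in [-1, 1]
    model.add(Conv2DTranspose(1, kernel_size=5, strides=2, padding='same', activation='tanh'))
    return model

generator = build_generator()
generator.summary()

A discriminator (a mirror-image convolutional classifier) plus an alternating training loop, in which the two networks compete, completes the GAN.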

Author: Xu Cui Categories: deep learning Tags:

Does Facebook’s “mind reading” project use NIRS?

May 12th, 2017

Facebook just announced that they are experimenting with mind-reading technology based on optical neuroimaging. The technology would let people type with their thoughts at 100 words per minute. Check out the news here.

Wow! This is unbelievable! The “optical neuro-imaging” technology is probably NIRS (near-infrared spectroscopy). As a NIRS researcher myself, I have done some mind-reading experiments and found that the NIRS signal (blood flow) is too slow for rapid mind reading. With machine learning techniques such as SVM, we can decode a signal at the earliest ~2 s after a behavioral event (see our paper). That is still too far from a real-life application.
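To make the idea concrete, here is a toy sketch of this style of decoding with scikit-learn. The data below are synthetic stand-ins, not the signals or the pipeline from our paper.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for NIRS trials: 100 trials x 20 time-point features,
# each labeled with one of two mental states
rng = np.random.RandomState(0)
X = rng.randn(100, 20)
y = rng.randint(0, 2, 100)
X[y == 1] += 0.5  # make the two classes weakly separable

# Linear SVM decoding accuracy, estimated with 5-fold cross-validation
clf = SVC(kernel='linear')
scores = cross_val_score(clf, X, y, cv=5)
print('Decoding accuracy: %.2f' % scores.mean())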

But some researchers have suggested that a subtle “fast signal” may be embedded in the NIRS signal. Back in 2004 (!), Morren et al. published a paper titled “Detection of fast neuronal signals in the motor cortex from functional near infrared spectroscopy measurements using independent component analysis”. They claimed that this fast signal, on the order of milliseconds rather than seconds, can be detected.

Maybe this is what Facebook is using?

Author: Xu Cui Categories: brain, nirs Tags:

Learning deep learning (project 4, language translation)

April 28th, 2017

In this project, I built and trained a sequence-to-sequence neural network for machine translation (English -> French) on a dataset of paired English and French sentences, so that it can translate new sentences from English to French. The model was trained on my own laptop with an Nvidia M1200 GPU and in the end reached ~95% accuracy. Here is an example of the translation:

Input

English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']

Prediction

French Words: ['il', 'a', 'vu', 'un', 'vieux', 'camion', 'jaune', '.', '<EOS>']

As I do not know French, I checked Google Translate, and it looks like the translation is pretty good.
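For the curious, here is a minimal sketch of the encoder-decoder idea in Keras. It is not the project's actual model; the vocabulary sizes and latent dimension are illustrative assumptions.

from keras.models import Model
from keras.layers import Input, LSTM, Dense, Embedding

src_vocab, tgt_vocab, latent_dim = 10000, 10000, 256  # assumed sizes

# Encoder: read the English sentence and keep only the final LSTM states
enc_inputs = Input(shape=(None,))
enc_emb = Embedding(src_vocab, latent_dim)(enc_inputs)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generate French tokens, initialized from the encoder states
dec_inputs = Input(shape=(None,))
dec_emb = Embedding(tgt_vocab, latent_dim)(dec_inputs)
dec_outputs, _, _ = LSTM(latent_dim, return_sequences=True,
                         return_state=True)(dec_emb, initial_state=[state_h, state_c])
dec_outputs = Dense(tgt_vocab, activation='softmax')(dec_outputs)

# Trained with teacher forcing: the decoder input is the French sentence
# shifted by one token relative to the target
model = Model([enc_inputs, dec_inputs], dec_outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()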

The full project with code can be found here:
dlnd_language_translation.html

Author: Xu Cui Categories: deep learning, programming Tags:

A few recent NIH grants awarded related to NIRS

April 25th, 2017

The following email was sent to me by Stork, an easy-to-use app that alerts me to new scientific publications and NIH grants based on my own keywords. Below are a few grants awarded in the NIRS field.

Dear Xu,

Stork has brought you 15 new publications.

David Boas

Awarded Grants
Multifunctional, GBM-activatable nanocarriers for image-guided photochemotherapy by Huang-chiao Huang (2017) NIH Grants Awarded (Amount: $179,035) Duration: 2017-04-01 to 2018-03-31

fmri nirs

Awarded Grants
Quantifying the Fluctuations of Intrinsic Brain Activity in Healthy and Patient Populations by Manish Saggar (2017) NIH Grants Awarded (Amount: $249,000) Duration: 2017-03-20 to 2018-02-28

fmri resting state parent child

Awarded Grants
NEUROIMAGING IN EARLY ONSET DEPRESSION: LONGITUDINAL ASSESSMENT OF BRAIN CHANGES by Deanna M Barch (2017) NIH Grants Awarded (Amount: $768,901) Duration: 2017-04-01 to 2018-03-31

hyperscanning

Awarded Grants
Brain-to-brain dynamical Coupling: A New framework for the communication of social knowledge by Uri Hasson (2017) NIH Grants Awarded (Amount: $524,425) Duration: 2017-04-01 to 2018-03-31

nirs brain

Awarded Grants
The Neurodevelopmental MRI Database by John E Richards (2017) NIH Grants Awarded (Amount: $61,625) Duration: 2017-04-01 to 2018-03-31

nirs breast

Awarded Grants
Longitudinal Assessment of Tumor Hypoxia in vivo Using Near-Infrared Spectroscopy by Bing Yu (2017) NIH Grants Awarded (Amount: $399,062) Duration: 2017-01-01 to 2019-01-31

Russell Poldrack, stanford

Awarded Grants
Elucidate the Mechanisms Underlying Inhibition Induced Devaluation by Patrick Graham Bissett (2017) NIH Grants Awarded (Amount: $59,466) Duration: 2017-04-01 to 2018-03-31

Author: Xu Cui Categories: nirs Tags:

Deep learning speed test, my laptop vs AWS g2.2xlarge vs AWS g2.8xlarge vs AWS p2.xlarge vs Paperspace p5000

April 21st, 2017

Training a deep-learning model efficiently requires a lot of resources, especially GPU power and GPU memory. Here I measured the time it takes to train a model on three computers/servers.

1. My own laptop.
CPU: Intel Core i7-7920HQ (Quad Core 3.10GHz, 4.10GHz Turbo, 8MB, 45W) w/ Intel HD Graphics 630
Memory: 64G
GPU: NVIDIA Quadro M1200 w/4GB GDDR5, 640 CUDA cores

2. AWS g2.2xlarge
CPU: 8 vCPU, High Frequency Intel Xeon E5-2670 (Sandy Bridge) Processors
Memory: 15G
GPU: 1 GPU, High-performance NVIDIA GPUs, each with 1,536 CUDA cores and 4GB of video memory

3. AWS g2.8xlarge
CPU: 32 vCPU, High Frequency Intel Xeon E5-2670 (Sandy Bridge) Processors
Memory: 60G
GPU: 4 GPU, High-performance NVIDIA GPUs, each with 1,536 CUDA cores and 4GB of video memory

The AMI: I used udacity-dl (ami-60f24d76), the official AMI of Udacity's Deep Learning Foundations course, from the community AMIs.

Test script: adapted from https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py, with code added to track the time spent.

import time

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb

'''Trains a LSTM on the IMDB sentiment classification task.
The dataset is actually too small for LSTM to be of any advantage
compared to simpler, much faster methods such as TF-IDF + LogReg.
Notes:
- RNNs are tricky. Choice of batch size is important,
choice of loss and optimizer is critical, etc.
Some configurations won't converge.
- LSTM loss decrease patterns during training can be quite different
from what you see with CNNs/MLPs/etc.
'''
start_time = time.time()

max_features = 20000
maxlen = 80  # cut texts after this number of words (among top max_features most common words)
batch_size = 32

print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')

print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)

print('Build model...')
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))

# try using different optimizers and different optimizer configs
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

print('Train...')
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=5,
          validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test,
                            batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
time_taken = time.time() - start_time
print(time_taken)

Result: the table below shows the number of seconds it took to run the above script with three sets of parameters (batch_size and LSTM size).

batch_size   LSTM size   Laptop   g2.2xlarge   g2.8xlarge
32           128         546      821          878
256          256         155      152          157
1024         256         125      107          110

The result is surprising and confusing to me. I was expecting the g2 servers to be much, much faster than my own laptop, given the capacity of their GPUs. But the results show that my laptop is actually faster at the smaller parameter values, and only slightly slower at the larger ones.

I do not know what is going on … does anybody have a clue?

[update 2017-04-23]

I was thinking that maybe the operating system or some configuration in the AWS images was not optimal. The AMI I used was udacity-dl (ami-60f24d76), the official AMI of Udacity's Deep Learning Foundations course, from the community AMIs, so I tried a different one, a commercial AMI from bitfusion: https://aws.amazon.com/marketplace/pp/B01EYKBEQ0. Maybe it would make a difference? I also tested a new instance type, p2.xlarge, which has 1 NVIDIA K80 GPU (24G GPU memory) and 60G memory.

Time in seconds:

batch_size   LSTM size   Laptop   g2.2xlarge   g2.8xlarge   p2.xlarge
1024         256         125      151          148          101

The result is still disappointing. The AWS g2 instances perform worse than my laptop, and the p2 instance is only ~20% better.

(Of course, the GPU is still much faster than the CPU: on my own laptop, running the above code on the GPU is ~10x faster than on the CPU.)
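If you want to verify which device Keras is actually running on, one quick check with a TensorFlow 1.x backend (what these images shipped with in 2017) is:

from tensorflow.python.client import device_lib

# List every device TensorFlow can see; GPUs appear with device_type 'GPU'
for d in device_lib.list_local_devices():
    print(d.name, d.device_type)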

[update 2017-04-02]

I checked out Paperspace's P5000 machine. It comes with a dedicated P5000 GPU with 2048 cores and 16G of GPU memory. I tested it with the same code and found that training is much faster on the P5000 (ironically, the data-downloading part is slow). The training part is 4x faster than on my laptop.

Time in seconds:

batch_size   LSTM size   Laptop   g2.2xlarge   g2.8xlarge   p2.xlarge   Paperspace P5000
1024         256         125      151          148          101         50

(Note: the above times include the data download, which takes about 25 seconds.)

Paperspace P5000 wins so far!

[update 2017-05-03]

I purchased Nvidia's 1080 Ti and installed it in my desktop. It has 3,584 cores and 11G of GPU memory. It took this GPU 7 seconds to train one epoch of the above script, 3x faster than my laptop.

batch_size   LSTM size   Laptop (1 epoch)   1080 Ti (1 epoch)
1024         256         21                 7

Author: Xu Cui Categories: deep learning, programming Tags: