Stork (文献鸟) displays the CAS (Chinese Academy of Sciences) journal ranking

September 12th, 2018

There are too many new publications every day and not enough time to read them all. What can you do? You need a way to quickly identify the important papers. To help with this, Stork does two things:

  1. It highlights publications from high-impact-factor journals and sorts publications by impact factor
  2. It displays the CAS journal ranking (中科院分区) and marks the different tiers with different colors

For Pro users, Stork also allows filtering by ranking tier. For example, if the maximum tier is set to 2, only publications from tier-1 and tier-2 journals are pushed; the rest are filtered out automatically.
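The filtering rule itself is simple. Below is a hypothetical sketch of the logic in Python; the data structure and the "cas_tier" field are invented for illustration and are not Stork's actual implementation.

# Hypothetical illustration of the tier filter described above.
# The paper records and the "cas_tier" field are made up for this sketch.
papers = [
    {"title": "Paper A", "cas_tier": 1},
    {"title": "Paper B", "cas_tier": 2},
    {"title": "Paper C", "cas_tier": 3},
]

max_tier = 2  # Pro setting: push only tier-1 and tier-2 journals

pushed = [p for p in papers if p["cas_tier"] <= max_tier]
print([p["title"] for p in pushed])  # ['Paper A', 'Paper B']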

Display of the CAS journal ranking

Pro users can set a ranking-tier filter

If you would like to purchase Pro, click this link.

Stork official website: https://www.storkapp.me/

Author: Xu Cui Categories: stork, web, writing Tags:

BOLD5000, A public fMRI dataset of 5000 images

September 11th, 2018

Official website and download
Full text paper link

Good news for brain imaging researchers. There is a new dataset available for you to play with.

BOLD5000 is a large-scale, slow event-related fMRI dataset collected from 4 subjects, each viewing 5,254 image presentations (4,916 unique images, some shown more than once) over 15 scanning sessions. The images are drawn from three computer vision datasets:

  1. 1,000 images from Scene Images (with scene categories based on SUN categories)
  2. 2,000 images from the COCO dataset
  3. 1,916 images from the ImageNet dataset

BOLD5000 image data

Author: Xu Cui Categories: brain, web Tags:

Stork API, a single line becomes a list of new publications

September 8th, 2018

I want to show a list of my own publications on my webpage. Is there an easy way to do so? Yes: with the Stork API, a single line of code displays a list of publications matching a keyword. You only need to put the code on your webpage once; when new publications appear, the list updates itself.

Let’s look at this list:

The above list was generated by the following single line of code:

<iframe style="border: 0;" src="https://www.storkapp.me/api/new.php?apiKey=STORKDEMO&amp;format=html&amp;num=20&amp;k=cui+xu+(stanford+psychiatry+OR+houston)" width="100%" height="600" frameborder="0"></iframe>

What about a list of publications in the fNIRS field? That is easy too. As you can see, all you need to change is the “k” parameter (which stands for keyword).

<iframe style="border: 0;" src="https://www.storkapp.me/api/new.php?apiKey=STORKDEMO&amp;format=html&amp;num=4&amp;k=(nirs+OR+fnirs)+brain" width="100%" height="600" frameborder="0"></iframe>

The line of code above renders as a similar publication list on the page.
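If you prefer to fetch the list server-side rather than embed an iframe, the same endpoint can be requested directly. Below is a minimal sketch in Python using the requests library; it assumes only the parameters that appear in the examples above (apiKey, format, num, k) and the STORKDEMO demo key, and is not an official Stork client.

# Fetch the same HTML fragment that the iframe above embeds.
import requests

url = "https://www.storkapp.me/api/new.php"
params = {
    "apiKey": "STORKDEMO",  # demo key used in the examples above
    "format": "html",       # the examples request an HTML fragment
    "num": 20,              # number of publications to return
    "k": "cui xu (stanford psychiatry OR houston)",  # keyword, same as the first example
}

response = requests.get(url, params=params)
print(response.status_code)
print(response.text[:500])  # the returned fragment can be embedded server-side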

Author: Xu Cui Categories: programming, web, writing Tags:

Google Dataset search, a great tool for fNIRS and fMRI?

September 6th, 2018

Google just launched a new search engine: Google Dataset search. With this app, scientists can search public datasets published in scientific journals (and possibly other sources). According to Google, “Dataset Search enables users to find datasets stored across thousands of repositories on the Web, making these datasets universally accessible and useful.”

I searched ‘fNIRS’ and it returned 30+ results (see figure below). I clicked the first one, fNIRS/EEG/EOG classification, and it showed some meta information (e.g. the source and authors). I then clicked through to the ‘zenodo.org’ website and did find a download link for the MAT file.

google dataset search

I also tried to search ‘fMRI’. The number of datasets for fMRI is much larger than that for fNIRS.

Currently the number of datasets indexed by Google is still limited, but I expect it to grow rapidly and become a very useful tool for scientists and anyone who wants to play with data.

Link: Google Dataset search

Author: Xu Cui Categories: nirs, web Tags:

fNIRS 2018

August 29th, 2018

The fNIRS 2018 conference will be held October 5-8, 2018, in Tokyo, Japan. You can find more information at http://fnirs2018.org/.

The early registration deadline is 2018-09-05.

Author: Xu Cui Categories: nirs Tags:

Temporal resolution of CW fNIRS devices

August 10th, 2018

This is a guest post by Ning Liu from Stanford University.

Temporal resolution describes the time between two successive acquisitions of the same area; it is the reciprocal of the sampling rate (or acquisition rate) of an fNIRS device. For some devices the sampling rate is fixed; for others it depends on the number of sources and detectors used. Why is that? Because the instruments are designed differently. In devices with a variable sampling rate, multiple sources time-share an optical detector through a multiplexing circuit that turns the sources on and off in sequence, so that only one source within a detector's range is on at any given time. The NIRx systems, for instance, use this design. Devices with a fixed sampling rate usually use low-frequency-modulated light sources to provide the excitation light, so that one detector 'sees' only one source.

For instance, the Hitachi ETG4000 system has a sampling rate of 10 Hz (from http://www.hitachi.com/businesses/healthcare/products-support/opt/etg4000/contents2.html), so its temporal resolution is 100 ms. Other devices, such as the NIRScout, have sampling rates from 2.5 to 62.5 Hz (from https://nirx.net/nirscout/), so their temporal resolution ranges from 16 to 400 ms. Why does the sampling rate vary from 2.5 to 62.5 Hz? Because users can choose different numbers of sources and detectors in their configuration: the more sources and detectors used, the lower the sampling rate. The following table is from a review article (Scholkmann et al., 2014) in NeuroImage volume 85 (2014), a special issue on functional near-infrared spectroscopy. It summarizes the specifications of some popular commercially available fNIRS devices, focusing mainly on continuous-wave devices.

Time resolution of NIRS devices (F. Scholkmann et al. / NeuroImage 85 (2014) 6–27)
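As a quick check of the reciprocal relation described above, here is a short Python sketch added for illustration; it uses only the sampling rates quoted in this post.

# Temporal resolution (ms) is the reciprocal of the sampling rate (Hz).
def temporal_resolution_ms(sampling_rate_hz):
    return 1000.0 / sampling_rate_hz

print(temporal_resolution_ms(10))    # Hitachi ETG4000: 10 Hz    -> 100 ms
print(temporal_resolution_ms(62.5))  # NIRScout fastest: 62.5 Hz -> 16 ms
print(temporal_resolution_ms(2.5))   # NIRScout slowest: 2.5 Hz  -> 400 ms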

This post was written by Ning Liu of Stanford University. She offers NIRS training services.

Author: Xu Cui Categories: nirs Tags:

Deep learning training speed with 1080 Ti and M1200

June 19th, 2018

I compared the training speed of an NVIDIA GTX 1080 Ti in a desktop (Intel i5-3470 CPU, 3.2 GHz, 32 GB memory) with an NVIDIA Quadro M1200 (4 GB GDDR5, 640 CUDA cores) in a laptop (Intel Core i7-7920HQ, quad core, 3.10 GHz base / 4.10 GHz turbo, 8 MB cache, 45 W; 64 GB memory).

The code I used is Keras' own example (mnist_cnn.py) to classify the MNIST dataset:

MNIST dataset

'''Trains a simple convnet on the MNIST dataset.

Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
#model.add(Dropout(1))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
          verbose=1, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

The result: the 1080 Ti is about 3 times faster than the M1200.

Quadro M1200: 18 s per epoch
GTX 1080 Ti: 6 s per epoch
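The per-epoch times can be read directly from Keras' progress output (verbose=1). For an explicit measurement, a small timing callback could be used; the sketch below is added here for illustration and was not part of the original benchmark.

import time
import keras

class EpochTimer(keras.callbacks.Callback):
    # Record wall-clock time for each training epoch.
    def on_epoch_begin(self, epoch, logs=None):
        self.start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        print('Epoch %d took %.1f s' % (epoch + 1, time.time() - self.start))

# Pass the callback to fit(), e.g.:
# model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs,
#           verbose=1, validation_data=(x_test, y_test),
#           callbacks=[EpochTimer()])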
Author: Xu Cui Categories: deep learning, programming Tags:

You can start to use Stork in 10s!

May 21st, 2018

Stork is a simple app that helps researchers keep up with scientific publications. It took me only 10 seconds to get started.

https://www.storkapp.me/?ref=alivelearn

Author: Xu Cui Categories: stork, writing Tags:

Find cheapest flight

April 26th, 2018

Last October I booked a direct (non-stop) flight from San Francisco to Beijing for my father. The cost was about $300, which is fairly cheap.

The tool I used was https://matrix.itasoftware.com/. If your departure date is flexible, this website shows prices across a range of days. I then picked the cheapest date and viewed details such as airline names and flight times.

You can’t book tickets on this website directly, but once you know the airline you can go to its own website and book there.

ITA software

Author: Xu Cui Categories: life Tags:

Recommend 3blue1brown

April 24th, 2018

When I was in high school and saw the following equation, my mind was blown!
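The equation, which appears only as an image in the original post, is presumably the Basel problem:

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \frac{\pi^2}{6}$$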

Why is pi here? Isn't it supposed to show up only in circle-related problems? The left-hand side has nothing to do with circles. And it's pi squared!

I never had an intuitive understanding of why this equation is true until I watched a visual explanation from 3blue1brown. The video gives an elegant, visual proof of the equation. Go ahead and watch it:

The link is:
https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw/featured

Author: Xu Cui Categories: life, math Tags: