Shanghai Jiao Tong University, Zhiyuan Mathematics Track, Professional Workshop 3 [Topics in Computational Neuroscience]

NeoNeuron/professional-workshop-3

Professional Workshop (3) Syllabus

Fall 2023 Semester

Course objectives: This course sits at the intersection of mathematics and the life sciences, in particular neuroscience. It focuses on interdisciplinary thinking and quantitative mathematical training: by introducing applications of commonly used mathematical and statistical methods in neuroscience, together with small research projects, it cultivates an interdisciplinary mindset and the basic skills needed for cross-disciplinary research.

Course format: 9 weeks of lectures (with in-class programming exercises) + a final group project.

  • 30 minutes: introduction to background knowledge
  • 1 hour: in-class programming exercises + group discussion

Grading:

  • 70% - Final group project, consisting of a project presentation (40%) and a written project report (30%)
  • 30% - Attendance and a 400-character end-of-term course summary and feedback

Notes:

  • In-class participation counts toward the attendance grade.
  • The final project is completed in groups. Each student must indicate their individual contribution in the written project report; grades are based on both the group's overall completion of the project and each member's contribution.

Prerequisites

Programming background

The course uses Python as its main programming language; programming materials and exercises are provided as Jupyter Notebooks.

If you have never programmed before, or have never done Python exercises, please start getting to know Python now and work through some simple practice problems. We expect everyone to be comfortable with Python variables, lists, and dicts, as well as scientific computing packages such as NumPy, SciPy, and Matplotlib.

For an absolute-beginner introduction to Python, we recommend the Software Carpentry 1-day Python tutorial. In particular, we strongly recommend following its Setup instructions to install the Python 3 version of Anaconda, which makes later Python use and package management easier.

Students with spare capacity, or who are interested in more advanced Python usage, can go further with the Scipy-Lecture-Notes documentation to learn advanced usage of scientific computing libraries such as NumPy, SciPy, Matplotlib, and sklearn.

Please complete W0_PythonWorkshop on your own.
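
As a quick self-check before starting W0_PythonWorkshop, the following short sketch (not part of the course materials; all names and values are made up) touches the basics listed above: variables, lists, dicts, NumPy arrays, and a Matplotlib plot.

```python
import numpy as np
import matplotlib.pyplot as plt

# Basic Python containers
rates = [2.0, 5.5, 8.1]                          # list of firing rates (Hz)
neuron = {"name": "cell_1", "rate": rates[0]}    # dict of neuron properties

# NumPy: vectorized operations on arrays
t = np.linspace(0.0, 1.0, 200)        # 1 s of time, 200 samples
signal = np.sin(2 * np.pi * 5 * t)    # 5 Hz sine wave
print(neuron["name"], "mean signal:", signal.mean())

# Matplotlib: a simple line plot
plt.plot(t, signal)
plt.xlabel("time (s)")
plt.ylabel("amplitude")
plt.show()
```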

Mathematical background

Linear algebra, probability and statistics, calculus

Neuroscience

The course focuses on cutting-edge research questions and methods in computational neuroscience. To get a better grasp of the basic framework of neuroscience, please watch the following videos before the course begins.

Course materials

Course outline (the actual order may be adjusted according to course progress)

Week 0: Python Workshop

Description: Two workshops for absolute Python beginners. Learn essential Python skills for the course tutorials and practice them by coding a neuronal simulation.

Week 1: Model Types

Lecture details:
Intro:
1. Model classifications, and the characteristics and merits of different types of models.
2. Intro to each of the individual projects.
Tutorials: "What"/"How"/"Why" models, illustrated with the example of ISI distributions (see the sketch below).
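
For a taste of the ISI-distribution example, here is a hypothetical sketch (not the tutorial notebook itself): simulate a Poisson spike train, build the ISI histogram as a descriptive "what" model, and overlay the exponential density that a mechanistic "how" model would predict.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
rate = 10.0                      # mean firing rate (Hz)

# Poisson spike train: inter-spike intervals (ISIs) are exponential
isis = rng.exponential(1.0 / rate, size=5000)

# "What" model: describe the data with an ISI histogram
plt.hist(isis, bins=50, density=True, alpha=0.5, label="simulated ISIs")

# "How" model: an exponential density with the same rate
x = np.linspace(0, isis.max(), 200)
plt.plot(x, rate * np.exp(-rate * x), label="exponential density")
plt.xlabel("ISI (s)")
plt.ylabel("density")
plt.legend()
plt.show()
```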

Week 2: Intro to CNS, Single Neuron Models and Network Models

Lecture details:
Intro:
1. Intro to computational neuroscience.
2. Introduction to the LIF and HH neuron models, and to network dynamics.
Tutorials:
1. Code a simulation of a LIF neuron (see the sketch below).
2. Code a LIF network model with the help of Brian2, and tune the network to reach different dynamical regimes.
Bonus: Synaptic dynamics.
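
A minimal NumPy sketch of a leaky integrate-and-fire neuron driven by a constant current; the tutorial itself uses Brian2, and the parameter values below are illustrative rather than the tutorial's.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative LIF parameters (not the tutorial's values)
tau_m, v_rest, v_reset, v_th = 20e-3, -70e-3, -75e-3, -50e-3   # s, V
R_m, I_ext = 1e7, 2.5e-9                                       # Ohm, A
dt, T = 1e-4, 0.5                                              # time step, duration (s)

t = np.arange(0, T, dt)
v = np.full_like(t, v_rest)
spikes = []

# Forward-Euler integration of tau_m * dv/dt = -(v - v_rest) + R_m * I_ext
for i in range(1, len(t)):
    dv = (-(v[i-1] - v_rest) + R_m * I_ext) / tau_m
    v[i] = v[i-1] + dv * dt
    if v[i] >= v_th:          # threshold crossing: emit a spike and reset
        spikes.append(t[i])
        v[i] = v_reset

plt.plot(t, v)
plt.xlabel("time (s)")
plt.ylabel("membrane potential (V)")
plt.title(f"LIF neuron, {len(spikes)} spikes")
plt.show()
```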

Week 3: Dimensionality Reduction

Lecture details:
Intro:
1. Low-dimensional manifolds in high-dimensional data.
2. High-dimensional signals and low-dimensional behavior.
3. Bonus: a geometric point of view on low-dimensional manifolds.
Tutorials: Practice PCA with synthetic data (see the sketch below).
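
A minimal sketch of the PCA exercise on synthetic data, written with a plain NumPy eigen-decomposition (the tutorial may use sklearn instead); the data dimensions and noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 500 samples of 10-D "neural activity" that actually
# lives near a 2-D subspace, plus a little isotropic noise.
latents = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latents @ mixing + 0.1 * rng.normal(size=(500, 10))

# PCA: center the data, then eigen-decompose the covariance matrix
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)            # returned in ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

explained = eigvals / eigvals.sum()
print("variance explained by first 2 PCs:", explained[:2].sum())

# Project onto the top-2 principal components (the low-dim manifold)
Z = Xc @ eigvecs[:, :2]
print("projected data shape:", Z.shape)
```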

Week 4: Model Fitting and GLM

Lecture details:
Intro:
1. How to fit data with a linear regression model.
2. Extend the linear model to a Poisson GLM to fit an encoding model to spike data.
3. Regularization.
Tutorials:
1. Linear regression, multi-dimensional linear regression, and polynomial regression (see the sketch below).
2. GLMs and predicting neural responses. Logistic regression, regularization, and decoding neural activity.
Bonus: Model comparison and cross-validation.
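
A minimal sketch of the linear-regression part on synthetic data: ordinary least squares plus a ridge-regularized fit. The Poisson GLM in the tutorial follows the same recipe with an exponential nonlinearity and a different likelihood; all values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression data: y = X @ w_true + noise
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

# Ordinary least squares: w = argmin ||y - Xw||^2
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression: add an L2 penalty lam * ||w||^2
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

print("true weights:  ", np.round(w_true, 2))
print("OLS weights:   ", np.round(w_ols, 2))
print("ridge weights: ", np.round(w_ridge, 2))
```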

Week 5: Machine Learning

Lecture details:
Intro: Current applications of deep learning in neuroscience research.
Tutorials:
1. Train a CNN on the MNIST dataset using PyTorch (see the sketch below). Compute and inspect the receptive fields of the trained ANN neurons.
2. Train an RNN to perform a perceptual decision-making task. Apply dynamical-systems analysis to the RNN.
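
A minimal sketch of the MNIST CNN exercise, assuming PyTorch and torchvision are installed; the architecture and hyperparameters are illustrative, not the tutorial's.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# MNIST digits as tensors in [0, 1]
train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)

# A small CNN: two conv blocks, then a linear readout over 10 classes
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One epoch of training
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print("final batch loss:", loss.item())
```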

Week 6: Network Causality

Description: Ways of discovering causal relations, ways of estimating networks, and what we can do with networks.

Lecture details:
Intro:
1. Definition of, and tools for, causal inference.
2. Applying GC, TDMI, and TE to neural data to infer connectivity.
Tutorials:
1. Model-based inference and GC (see the sketch below).
2. Model-free inference and TDMI; compare the performance of GC and TDMI on different synthetic data.
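
A minimal sketch of model-based inference on synthetic data: an order-1 Granger-causality estimate written from scratch (the tutorial's implementation, data, and lag order will differ).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic coupled system: x drives y with a one-step delay
T = 2000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t-1] + 0.8 * x[t-1] + 0.1 * rng.normal()

def granger_1lag(source, target):
    """Order-1 Granger causality: does the source's past help predict the target?"""
    past_t, past_s, future = target[:-1], source[:-1], target[1:]
    # Restricted model: target's own past only
    A = np.column_stack([past_t, np.ones_like(past_t)])
    res_r = future - A @ np.linalg.lstsq(A, future, rcond=None)[0]
    # Full model: target's past plus source's past
    B = np.column_stack([past_t, past_s, np.ones_like(past_t)])
    res_f = future - B @ np.linalg.lstsq(B, future, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

print("GC x -> y:", granger_1lag(x, y))   # large: x's past helps predict y
print("GC y -> x:", granger_1lag(y, x))   # near zero: y's past does not help
```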

Week 7: Bayesian Statistics

Lecture details:
Intro: Uncertainty in the signal-detection theory of visual search.
Tutorials:
1. Bayes' rule with a binary hidden state (see the sketch below).
2. Bayesian decisions and inference with a continuous hidden state.
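
A minimal sketch of Bayes' rule with a binary hidden state; the Gaussian likelihoods and the measurement value are hypothetical.

```python
import numpy as np

# Binary hidden state: the stimulus is either "left" (0) or "right" (1)
prior = np.array([0.5, 0.5])

# Likelihood of a noisy measurement m under each hidden state
# (hypothetical Gaussian likelihoods centered at -1 and +1)
def likelihood(m, mu=np.array([-1.0, 1.0]), sigma=1.0):
    return np.exp(-(m - mu) ** 2 / (2 * sigma ** 2))

# Bayes' rule: posterior is proportional to likelihood * prior
m = 0.4                                    # one observed measurement
unnorm = likelihood(m) * prior
posterior = unnorm / unnorm.sum()
print("P(left | m), P(right | m) =", np.round(posterior, 3))
```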

Week 8: Hidden Markov Model

Lecture details:
Intro: Combining linear dynamics and Bayesian statistics to form Hidden Markov Models (HMMs).
Tutorials:
1. Fishing (with a binary latent state).
2. Tracking astrocat (with a Gaussian latent state).
3. The Kalman filter (see the sketch below).
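
A minimal sketch of the Kalman-filter idea in one dimension, with random-walk dynamics and Gaussian observation noise; the tutorial's astrocat setting and parameters differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D latent state (e.g. a position) following a random walk,
# observed through Gaussian measurement noise.
T, q, r = 100, 0.1, 1.0          # steps, process variance, measurement variance
x = np.cumsum(np.sqrt(q) * rng.normal(size=T))     # true latent trajectory
y = x + np.sqrt(r) * rng.normal(size=T)            # noisy observations

# Kalman filter: alternate predict and update steps
mu, P = 0.0, 1.0                 # posterior mean and variance
estimates = []
for obs in y:
    # Predict: the random walk adds process variance
    P = P + q
    # Update: blend the prediction with the new observation
    K = P / (P + r)              # Kalman gain
    mu = mu + K * (obs - mu)
    P = (1 - K) * P
    estimates.append(mu)

err_raw = np.mean((y - x) ** 2)
err_kf = np.mean((np.array(estimates) - x) ** 2)
print(f"MSE of raw observations: {err_raw:.3f}, of Kalman estimates: {err_kf:.3f}")
```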

Week 9: Optimal Control

Lecture details:
Intro: Intro to optimal control: add actions that maximize utility to the HMM systems from the previous week.
Tutorials:
1. Fishing example: update your fishing location to catch the most fish (see the sketch below).
2. Astrocat example: use the cat's jetpack to keep the cat on target.
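
A minimal sketch of the fishing example: keep a belief over where the fish are (the HMM machinery from Week 8), act on the side with the higher expected catch, and update the belief after each cast. The switching and catch probabilities here are made up.

```python
import numpy as np

rng = np.random.default_rng(5)

# The fish school is on the "left" (0) or "right" (1) side
# and switches sides with a small probability each step.
p_switch, p_catch_near, p_catch_far = 0.05, 0.7, 0.1
T = 200
state = 0
belief = np.array([0.5, 0.5])        # P(fish on left), P(fish on right)
caught = 0

for _ in range(T):
    # Act: fish on the side we currently believe is more likely to hold fish
    action = int(np.argmax(belief))
    p_catch = p_catch_near if action == state else p_catch_far
    got_fish = rng.random() < p_catch
    caught += got_fish

    # Update the belief with Bayes' rule given the catch outcome...
    like = np.where(np.arange(2) == action,
                    p_catch_near if got_fish else 1 - p_catch_near,
                    p_catch_far if got_fish else 1 - p_catch_far)
    belief = like * belief
    belief /= belief.sum()
    # ...then propagate it through the switching dynamics (the prediction step)
    belief = (1 - p_switch) * belief + p_switch * belief[::-1]

    # The world also evolves
    if rng.random() < p_switch:
        state = 1 - state

print(f"caught {caught} fish in {T} casts")
```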

Week 10: Reinforcement Learning

Lecture details:
Intro: Intro to reinforcement learning.
Tutorials:
1. How we learn the value of future states from experience (see the sketch below).
2. How to choose and learn from actions, and the explore-exploit dilemma.
3. How we can efficiently learn the future value of actions from experience.
4. How having a model of the world's dynamics can help us learn and act.
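
A minimal sketch of learning action values from experience under an explore-exploit trade-off, using an epsilon-greedy bandit (simpler than the tutorial's full reinforcement-learning setting); all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# A 4-armed bandit: each arm pays a Gaussian reward with a different mean
true_means = np.array([0.2, 0.5, 0.8, 0.3])
n_arms, n_steps, epsilon, alpha = 4, 1000, 0.1, 0.1

Q = np.zeros(n_arms)             # learned value of each action
rewards = []

for _ in range(n_steps):
    # Epsilon-greedy: mostly exploit the best-known arm, sometimes explore
    if rng.random() < epsilon:
        a = rng.integers(n_arms)
    else:
        a = int(np.argmax(Q))
    r = true_means[a] + 0.1 * rng.normal()
    # Incremental value update: move Q[a] toward the observed reward
    Q[a] += alpha * (r - Q[a])
    rewards.append(r)

print("learned values:", np.round(Q, 2))
print("true means:    ", true_means)
print("average reward:", np.mean(rewards).round(3))
```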

Week 10: Student Presentations (20 minutes per group)