before 2010

  • A fast learning algorithm for deep belief nets. [url] ⭐
  • A Tutorial on Energy-Based Learning. [url]
  • [LeNet] Gradient-based learning applied to document recognition. [pdf] ⭐
  • Constructing Informative Priors using Transfer Learning. [url]
  • Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks. [url]
  • Deep Boltzmann Machines. [url] ⭐
  • Exploring Strategies for Training Deep Neural Networks. [url]
  • Efficient Learning of Sparse Representations with an Energy-Based Model. [url]
  • Efficient sparse coding algorithms. [url]
  • Energy-Based Models in Document Recognition and Computer Vision. [url]
  • Extracting and Composing Robust Features with Denoising Autoencoders. [url]
  • Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition. [url]
  • Gaussian Process Models for Link Analysis and Transfer Learning. [url]
  • Greedy Layer-Wise Training of Deep Networks. [url]
  • Learning Invariant Features through Topographic Filter Maps. [url]
  • Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification. [url]
  • Mapping and Revising Markov Logic Networks for Transfer Learning. [url]
  • Nonlinear Learning using Local Coordinate Coding. [url] ⭐
  • Notes on Convolutional Neural Networks. [url]
  • Reducing the Dimensionality of Data with Neural Networks. [science] ⭐
  • To Recognize Shapes, First Learn to Generate Images. [url]
  • Scaling Learning Algorithms towards AI. [url] ⭐
  • Sparse deep belief net model for visual area V2. [url] ⭐
  • Sparse Feature Learning for Deep Belief Networks. [url]
  • Training restricted Boltzmann machines using approximations to the likelihood gradient. [url]
  • Training Products of Experts by Minimizing Contrastive Divergence. [url] ⭐ (see the CD-1 sketch after this list)
  • Using Fast Weights to Improve Persistent Contrastive Divergence. [url] ⭐
  • Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition. [url]
  • What is the Best Multi-Stage Architecture for Object Recognition? [url] ⭐
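
Several of the starred entries above revolve around contrastive divergence for training restricted Boltzmann machines. As a rough illustration (not the exact procedure of any one paper), here is a minimal NumPy sketch of a single CD-1 update for a binary RBM; the layer sizes, learning rate, and batch below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b_vis, b_hid, v0, lr=0.01):
    """One CD-1 update for a binary RBM on a batch of visible vectors v0."""
    # Positive phase: hidden probabilities given the data, then a sample.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to the visibles and up again.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Contrastive-divergence estimate of the likelihood gradient.
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

# Toy usage: 6 visible units, 4 hidden units, a batch of 8 binary vectors.
W = 0.01 * rng.standard_normal((6, 4))
b_vis, b_hid = np.zeros(6), np.zeros(4)
v0 = (rng.random((8, 6)) < 0.5).astype(float)
W, b_vis, b_hid = cd1_step(W, b_vis, b_hid, v0)
```

Stacking such greedily trained RBMs layer by layer is the construction behind the deep belief net and greedy layer-wise training entries above.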

Transfer learning

  • A Survey on Transfer Learning. [url] ⭐
  • Modeling Transfer Relationships Between Learning Tasks for Improved Inductive Transfer. [pdf]
  • To Transfer or Not To Transfer. [url]
  • Transfer learning for text classification. [url]
  • Transfer learning for collaborative filtering via a rating-matrix generative model. [url]
  • Transfer learning from multiple source domains via consensus regularization. [url]
  • Transfer Learning for Reinforcement Learning Domains: A Survey. [url] ⭐
  • [Zero-Shot] Zero-Shot Learning with Semantic Output Codes. [pdf] (see the sketch after this list)
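
The semantic-output-codes idea behind the zero-shot entry can be sketched in a few lines: regress from inputs to a class-level semantic code, then label unseen classes by nearest code. The codebook, class names, and ridge penalty below are purely illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical semantic codebook: each class, seen or unseen, gets a
# hand-specified attribute vector.
codebook = {
    "dog":  np.array([1.0, 1.0, 0.0]),   # seen during training
    "cat":  np.array([1.0, 0.0, 1.0]),   # seen during training
    "wolf": np.array([1.0, 1.0, 1.0]),   # unseen at training time
}

def fit_semantic_map(X, S, ridge=1e-2):
    """Ridge-regularized least-squares map from features X to codes S."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ S)

def predict_class(W, x, codebook):
    """Map x into semantic space, then pick the nearest class code."""
    s_hat = x @ W
    return min(codebook, key=lambda c: np.linalg.norm(codebook[c] - s_hat))

# Toy usage: fit on seen classes only; "wolf" is still a valid prediction.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
S = np.vstack([codebook["dog"] if i % 2 else codebook["cat"] for i in range(20)])
W = fit_semantic_map(X, S)
print(predict_class(W, X[0], codebook))
```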

Instance transfer

  • An improved categorization of classifier’s sensitivity on sample selection bias. [pdf]
  • Boosting for transfer learning. [url] ⭐
    • A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. [pdf] ⭐
  • Correcting sample selection bias by unlabeled data. [pdf]
  • Cross domain distribution adaptation via kernel mapping. [pdf]
  • Direct Importance Estimation with Model Selection and Its Application to Covariate Shift Adaptation. [pdf] (see the weighting sketch after this list)
  • Discriminative learning for differing training and test distributions. [pdf]
  • Domain Adaptation via Transfer Component Analysis. [pdf] ⭐
  • Instance Weighting for Domain Adaptation in NLP. [pdf]
  • Logistic regression with an auxiliary data source. [pdf]
  • Transferring Naive Bayes Classifiers for Text Classification. [pdf]
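
Several entries above (direct importance estimation, instance weighting, sample selection bias) reweight source examples to correct covariate shift. A generic shortcut, sketched below, estimates the density ratio p_target(x)/p_source(x) from a probabilistic classifier that separates source from target; this is a common trick, not the specific estimator of any one paper in the list.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_src, X_tgt):
    """Estimate w(x) = p_tgt(x) / p_src(x) via a domain classifier."""
    X = np.vstack([X_src, X_tgt])
    y = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(X_src)[:, 1]           # P(domain = target | x)
    p = np.clip(p, 1e-6, 1 - 1e-6)               # guard against extremes
    # Bayes' rule: p_tgt/p_src = [P(t|x)/P(s|x)] * [n_src/n_tgt].
    return (p / (1 - p)) * (len(X_src) / len(X_tgt))

# Toy usage: the target distribution is shifted relative to the source.
rng = np.random.default_rng(0)
X_src = rng.standard_normal((200, 3))
X_tgt = rng.standard_normal((100, 3)) + 0.5
w = importance_weights(X_src, X_tgt)
```

The resulting weights can be handed to any learner that accepts per-sample weights, e.g. `fit(X_src, y_src, sample_weight=w)`.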

Feature representation transfer

  • A Spectral Regularization Framework for Multi-Task Structure Learning. [pdf]
  • Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. [pdf]
  • Co-clustering based Classification for Out-of-domain Documents. [pdf] ⭐
  • Domain adaptation with structural correspondence learning. [pdf]
  • Frustratingly easy domain adaptation. [pdf] ⭐ (see the feature-augmentation sketch after this list)
  • Kernel-based inductive transfer. [pdf]
  • Learning a meta-level prior for feature relevance from multiple related tasks. [pdf]
  • Multi-task feature and kernel selection for SVMs. [pdf]
  • Multi-task feature learning. [pdf] ⭐
  • Self-taught Clustering. [pdf]
  • Self-taught Learning: Transfer Learning from Unlabeled Data. [url] ⭐
  • Spectral domain-transfer learning. [url] ⭐
  • Transfer learning via dimensionality reduction. [pdf]
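
The feature-augmentation trick behind the "Frustratingly easy domain adaptation" entry is simple enough to sketch directly: replicate each feature vector into a shared block plus a domain-specific block, so one linear model learns shared and per-domain weights at once. The helper below is a minimal rendering of that idea, not a full reproduction of the paper's experiments.

```python
import numpy as np

def augment(X, domain):
    """Map x -> [x, x, 0] for source rows and [x, 0, x] for target rows."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])
    return np.hstack([X, zeros, X])

# Toy usage: train any off-the-shelf linear classifier on the union of
# the augmented domains; augment target points the same way at test time.
rng = np.random.default_rng(0)
X_src, X_tgt = rng.standard_normal((100, 4)), rng.standard_normal((30, 4))
X_train = np.vstack([augment(X_src, "source"), augment(X_tgt, "target")])
```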

Parameter transfer

  • Knowledge transfer via multiple model local structure mapping. [pdf]
  • Learning Gaussian Process Kernels via Hierarchical Bayes. [pdf]
  • Learning to learn with the informative vector machine. [pdf]
  • Multi-task Gaussian Process Prediction. [pdf]
  • Regularized multi-task learning. [pdf] (see the sketch after this list)
  • The more you know, the less you learn: from knowledge transfer to one-shot learning of object categories. [pdf]
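
In the parameter-transfer setting of the "Regularized multi-task learning" entry, each task's weight vector is modeled as a shared component plus a task-specific offset, with separate penalties on each. The gradient-descent sketch below is one simple way to fit that decomposition for squared loss; the learning rate, penalties, and toy data are illustrative choices.

```python
import numpy as np

def multitask_ridge(tasks, lam_shared=0.1, lam_task=1.0, lr=0.01, steps=500):
    """Fit w_t = w0 + v_t per task, penalizing w0 and each v_t separately."""
    d = tasks[0][0].shape[1]
    w0 = np.zeros(d)
    vs = [np.zeros(d) for _ in tasks]
    for _ in range(steps):
        g0 = lam_shared * w0                      # gradient of shared penalty
        for t, (X, y) in enumerate(tasks):
            r = X @ (w0 + vs[t]) - y              # residuals for task t
            g = X.T @ r / len(y)                  # task's data gradient
            g0 += g                               # shared part sees every task
            vs[t] -= lr * (g + lam_task * vs[t])  # task-specific update
        w0 -= lr * g0
    return w0, vs

# Toy usage: three related tasks whose true weights differ slightly.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
tasks = []
for _ in range(3):
    X = rng.standard_normal((50, 3))
    tasks.append((X, X @ (w_true + 0.1 * rng.standard_normal(3))))
w0, vs = multitask_ridge(tasks)
```

Pulling the tasks toward a shared w0 is what makes this parameter transfer: tasks with little data borrow statistical strength from the rest.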

Relational knowledge transfer

  • Deep transfer via second-order Markov logic. [pdf]
  • Mapping and revising Markov logic networks for transfer learning. [pdf]
  • Transfer learning by mapping with minimal target data. [pdf]
  • Translated learning: Transfer learning across different feature spaces. [url] ⭐