Tao Luo edited this page Dec 9, 2019 · 1 revision

wangkuiyi

Porting Majel and improving the building system

Errands

helinwang

gongweibao

dongzhihong

Yancey1989(yanxu)

GangLiao

Xingzhaolong

  • pr
  • auto pruning
    • Investigation on dynamic network surgery
    • Investigation on direct convolution
    • The whole pruning process for a model takes several stages. For example, to reach a sparsity ratio of 0.7, you should not specify 0.7 directly for fine-tuning, since that causes a large accuracy drop. It is better to increase the sparsity over several stages; this keeps the accuracy loss within a reasonable range, though it takes more time. I have built a demo on the Caffe platform that runs the whole process, with a good result on Oxford Flowers 102 (see here); testing on a larger dataset is in progress. If it tests well, I will port it to Paddle.
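The staged schedule described above can be sketched as follows. This is a minimal, hypothetical NumPy illustration (not the Caffe demo itself): instead of masking 70% of the weights at once, sparsity is raised in stages, with a fine-tuning step (here only a placeholder) between stages. The function names `magnitude_mask` and `staged_pruning` are invented for this sketch.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Return a 0/1 mask that zeroes the smallest-magnitude fraction of weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

def staged_pruning(weights, stages=(0.3, 0.5, 0.7)):
    """Raise sparsity stage by stage instead of jumping to the final ratio."""
    for sparsity in stages:
        weights = weights * magnitude_mask(weights, sparsity)
        # fine_tune(weights)  # placeholder: retrain between stages to recover accuracy
    return weights
```

Each stage prunes only a little more than the last, so the surviving weights can absorb the loss during the intermediate fine-tuning passes.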

wanghaoshuang

qiaolongfei

ParameterUpdater using Go:

Code review:

survey and document:

luotao

  • remove duplicated examples among demo/models/book, and rename the remaining demos to v1_api_demo: #2357
  • fix bugs:
    • fix broken link to DL 101 book: #2358 #2391
    • remove top_k argument in classification_cost #2412
  • Wechat PaddlePaddle:
    • support articles with LaTeX formulas
    • 179 fans -> 216 fans

qijun

fengjiayi

livc(Zhao Li)

Dang qingqing

Xinghai Sun

Yibing Liu

Yu Yang

  • ComputationGraph refactoring.
    • Survey on Tensorflow/Caffe2/MXNet/DyNet/PyTorch and give two talks this week.
      1. design overview of these frameworks
      2. Computation graph implementation survey of these frameworks.
  • Keep surveying MXNet's and Caffe2's computation graph implementations.
  • Writing a toy project to help me think through how to implement a computation graph.
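A toy computation graph of the kind mentioned above might look like the following minimal Python sketch (hypothetical, not PaddlePaddle code, and not the actual toy project): nodes record an op and their input nodes, and evaluation walks the graph recursively, caching each node's result so shared subgraphs are computed once.

```python
class Node:
    """A node in a computation graph: an op applied to upstream nodes."""
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # callable, or None for a constant/input node
        self.inputs = inputs  # upstream Node objects
        self.value = value    # cached result after evaluation

def evaluate(node):
    """Recursively evaluate a node, caching results in node.value."""
    if node.value is None:
        args = [evaluate(n) for n in node.inputs]
        node.value = node.op(*args)
    return node.value

# Build the graph y = (a + b) * a, then run it.
a = Node(None, value=2.0)
b = Node(None, value=3.0)
s = Node(lambda x, y: x + y, (a, b))
y = Node(lambda x, z: x * z, (s, a))
print(evaluate(y))  # 10.0
```

Real frameworks evaluate in explicit topological order and attach gradients, but a recursive-with-caching toy like this is enough to explore the core data structure.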

Caoying

Shaoyong Xu

typhoonzero(wuyi)

hedaoyuan

yangyaming

juliecbd

Liu Yiqun

Yan Chunwei
