Ryt V3 Implementation #88

Closed
wants to merge 130 commits
Changes from all commits
130 commits
9fdfde6
fixed istanbul process logic
wlt-cortex Jan 19, 2021
82917ec
add golang deps: c_api.h
wlt-cortex Jan 19, 2021
27bb582
[update]: remove ycm conf file
wlt-cortex Aug 30, 2021
a0614b4
[feature]: add common utils and main
wlt-cortex Aug 30, 2021
b7acd15
initialize cvm_main
declmal Sep 3, 2021
6b33174
update main.py default stage
declmal Sep 6, 2021
c798248
upt
declmal Sep 6, 2021
f0ee236
update main.py prepare
declmal Sep 6, 2021
2304b60
upt
declmal Sep 8, 2021
48edb9e
upt
declmal Sep 8, 2021
7050cf2
update main.py split model
declmal Sep 8, 2021
dfea2c5
upt main.py calibration
declmal Sep 8, 2021
95b313c
upt main.py quantize
declmal Sep 8, 2021
8d003e3
update main.py evaluate compile
declmal Sep 8, 2021
b9398ce
upt
declmal Sep 11, 2021
13ba06e
seprate main.py mrt_prepare mrt_calibrate
declmal Sep 11, 2021
e01660b
separate main.py mrt_quantize (not merge yet)
declmal Sep 11, 2021
abce6eb
seperate main.py mrt_quantize (with merge)
declmal Sep 13, 2021
1ca2573
upt
declmal Sep 13, 2021
1a40de3
seperate main.py evaluate
declmal Sep 13, 2021
1ffb87c
upt main.py mrt module
declmal Sep 13, 2021
5bc3bde
seperate main.py compile
declmal Sep 13, 2021
4719b1d
fix broadcast_div
declmal Sep 18, 2021
c35fae0
upt
declmal Sep 22, 2021
5cebcc2
upt
declmal Sep 22, 2021
cdb0583
upt
declmal Sep 22, 2021
365f5db
upt
declmal Sep 22, 2021
5d23e2a
upt
declmal Sep 22, 2021
aec80dd
upt
declmal Sep 22, 2021
f118298
upt
declmal Sep 22, 2021
455ac7d
upt
declmal Sep 22, 2021
cc83812
upt yaml congifuration
declmal Sep 26, 2021
7230ca8
upt
declmal Sep 27, 2021
b1e6a0b
upt
declmal Sep 27, 2021
230bd90
upt
declmal Sep 27, 2021
ec8c103
add mrt user doc
declmal Sep 27, 2021
37478c3
upt doc and todo
declmal Sep 29, 2021
81327bd
upt
declmal Oct 9, 2021
f5f0a60
upt
declmal Oct 9, 2021
1c7e1eb
Merge branch 'ryt' of github.com:CortexFoundation/cvm-runtime into ryt
declmal Oct 9, 2021
7d30238
upt
declmal Oct 13, 2021
d0e1e26
upt
declmal Oct 13, 2021
495bd0c
upt
declmal Oct 13, 2021
632fbeb
upt
declmal Oct 13, 2021
efac7da
upt
declmal Oct 15, 2021
d37512d
Merge branch 'ryt' of github.com:CortexFoundation/cvm-runtime into ryt
declmal Oct 15, 2021
315d48e
upt
declmal Oct 15, 2021
6cc0e26
upt
declmal Oct 15, 2021
67eb13e
upt
declmal Oct 15, 2021
acc62d1
Merge branch 'ryt' of github.com:CortexFoundation/cvm-runtime into ryt
declmal Oct 15, 2021
7dcdf71
update frontend
declmal Oct 15, 2021
3fe6d04
simplify yaml interfaces for mrt.v3
declmal Oct 20, 2021
8264e4e
upt
declmal Oct 23, 2021
3847863
fix config file
declmal Oct 23, 2021
ae14392
upt doc
declmal Oct 23, 2021
4968a11
Merge branch 'ryt' of github.com:CortexFoundation/cvm-runtime into ryt
declmal Oct 23, 2021
697b4c7
Merge branch 'ryt' into ryt_frontend
declmal Oct 23, 2021
8b47af3
upt
declmal Oct 27, 2021
ea113a7
upt
declmal Oct 29, 2021
23bbb95
upt
declmal Nov 1, 2021
51c17c1
upt
declmal Nov 2, 2021
a790613
upt
declmal Nov 2, 2021
c68b67b
upt
declmal Nov 10, 2021
4b185c3
upt
declmal Nov 11, 2021
09abb3f
upt
declmal Nov 12, 2021
d248433
upt
declmal Nov 12, 2021
9591a43
upt
declmal Nov 12, 2021
f0e93d9
upt
declmal Nov 12, 2021
a7b85eb
upt
declmal Nov 12, 2021
b9af5e5
upt
declmal Nov 12, 2021
f5e40fb
pupt
declmal Nov 15, 2021
55947ba
upt
declmal Nov 22, 2021
714b3de
upt
declmal Nov 25, 2021
dd0c814
upt
declmal Nov 25, 2021
5a02372
upt
declmal Nov 26, 2021
48db21a
upt
declmal Dec 17, 2021
b000c9a
upt
declmal Dec 17, 2021
d88b64d
upt
declmal Dec 17, 2021
1c4c106
upt
declmal Dec 20, 2021
f7c0b68
upt
declmal Dec 20, 2021
35a4253
upt
declmal Dec 21, 2021
e267327
upt
declmal Dec 21, 2021
05119b7
upt
declmal Dec 22, 2021
232ccc2
upt
declmal Dec 22, 2021
a2f476c
upt
declmal Dec 22, 2021
0cd8503
upt
declmal Dec 24, 2021
b758984
Merge pull request #87 from CortexFoundation/dev
wlt-cortex Dec 24, 2021
95f6daf
Merge remote-tracking branch 'origin/master' into ryt
declmal Dec 24, 2021
22e3aa6
upt
declmal Dec 28, 2021
3cb7812
elemwisemul rewrite
declmal Dec 28, 2021
846be3e
unify name in json, remove params key prefix
declmal Dec 28, 2021
10ed36b
std random dataset
declmal Dec 30, 2021
4ad2108
test_prediction_SCTF.py
declmal Dec 30, 2021
ddbba70
upt
declmal Dec 30, 2021
1333f49
forward_utils.py
declmal Dec 30, 2021
2ad69cb
test_op_equiv
declmal Dec 31, 2021
0405bca
preprocess_prediction_SCTF.py
declmal Dec 31, 2021
92b4aa4
upt --help prompt for main2.py
declmal Jan 4, 2022
32dcca3
upt test_op_equiv
declmal Jan 7, 2022
884060c
upt
declmal Jan 7, 2022
f00437d
[doc]: add math formalization doc
wlt-cortex Jan 7, 2022
2078f80
Merge remote-tracking branch 'origin/ryt' into wlt
wlt-cortex Jan 7, 2022
390dd4f
Merge remote-tracking branch 'origin/ryt' into wlt
wlt-cortex Jan 7, 2022
ca3e7df
[fix]: remove main.py and add pip requirement
wlt-cortex Jan 7, 2022
385b53a
[stash]: move install to conf
wlt-cortex Jan 7, 2022
8151b81
[fix]: move unuseful code into deprecated
wlt-cortex Jan 7, 2022
93ae341
[test] V3 mrt models
declmal Jan 10, 2022
b66758a
[doc] V3 architecture part 1
declmal Jan 11, 2022
be1b508
[enhancement] inference_original_model inference_quantized_model get_…
declmal Jan 20, 2022
843e53a
[fix bug] set batch_size as 16 times len(device_ids) for yolov5s
declmal Jan 20, 2022
222b27a
add todo
declmal Jan 21, 2022
0719f6d
[enhancement] Yolov5sDataset Yolov5Metric
declmal Jan 25, 2022
03e45b9
[fix bug] tests/mrt/yolov5s/test_yolov5s.py
declmal Jan 25, 2022
f0ff44a
[config] tests/mrt/yolov5s/yolov5s-0040.yaml
declmal Jan 25, 2022
5d339a2
[enhancement] _make_grid
declmal Jan 26, 2022
5ec972a
[fix bug] Yolov5Dataset data iter
declmal Jan 26, 2022
1b76017
[docs,enhancement,tests] V3 docs, V3 test module
declmal Feb 9, 2022
5e49d50
[prune] remove redundancy for main.py
declmal Feb 9, 2022
fd0356c
[docs] add V3.png architecture
declmal Feb 9, 2022
174fc9a
[reconstruct] metric_v2; [bug fix] split batch,init logger [enhanceme…
declmal Feb 16, 2022
a8864b7
[doc] mrt.V3.evaluate; transfer mrt.frontend
declmal Feb 25, 2022
4afe38a
update doc index
declmal Feb 25, 2022
abc992a
[doc] tfm_ops FullyConnected.reduce ElemwiseMul.rewrite Activation.re…
declmal Feb 28, 2022
ca6a688
add list_model.py; quantization resnet101_v1, resnet152_v1 resnet18_v…
declmal Apr 23, 2022
fa3292a
upt unit tests
declmal Apr 23, 2022
caebd90
list model
declmal Apr 23, 2022
5d58254
add new models using cifar10 as dataset
declmal May 6, 2022
acc5f12
quantize inception model
declmal May 25, 2022
caeb961
git repo initialize for yamrt
declmal May 25, 2022
23037a9
add has_multi_outs
declmal May 25, 2022
3 changes: 3 additions & 0 deletions .gitignore
@@ -55,3 +55,6 @@ out/*
docs/html
docs/doctrees
docs/doxygen_output

# django
python/mrt/web/db.sqlite3
94 changes: 0 additions & 94 deletions .ycm_extra_conf.py

This file was deleted.

File renamed without changes.
File renamed without changes.
File renamed without changes.
1 change: 1 addition & 0 deletions install/requirements.txt → conf/requirements.txt
@@ -1,3 +1,4 @@
numpy
cython
decorator
yacs
Binary file added docs/assets/V3.png
93 changes: 59 additions & 34 deletions docs/deep_dive/math_formalization.rst
@@ -9,13 +9,16 @@ Operator Math Formalization
When writing this section, please refer to the doc:
:ref:`Math Format <write_math_formalization>`.

This will give a full exhaustive explanation of CVM-Runtime operators.
The source code of the FORMAL version has a strong correlation
with this mathematical description, while other versions, such as
CPU and CUDA, only promise a consistent inference result and may use
arbitrary processing logic.

.. note::
   All numbers referred to by the symbols are integers by default.

All the operators' formalization obeys the unified format:

.. math::

@@ -28,7 +31,7 @@ All the operators' formalization obeys the unify format:

which means that for the given value range, the formula in the first
line is always true, subject to the constraints listed as the
condition statements.

.. _op_list:

@@ -46,6 +49,8 @@ Reduction is performed on the given axes, other dimensions remains the same and
We abstract the common reduce logic as a formalization here and specify the reduce
function for each operator respectively.

*Math Formalization*

- Input: :math:`X`, a tensor of :math:`N` dimensions, namely :math:`(n_0, n_1, \cdots, n_{N-1})`
- Output: :math:`Y`
- Attribute:
@@ -172,29 +177,46 @@ Broadcast Operators

A broadcast operator applies the broadcast function to its input data, and the processing logic is the same across all operators of this kind.

*Math Formalization*

- Input: There are 2 inputs.

+ :math:`A`, a tensor of :math:`M` dimensions, namely :math:`(m_0, m_1, \cdots, m_{M-1})`
+ :math:`B`, a tensor of :math:`N` dimensions, namely :math:`(n_0, n_1, \cdots, n_{N-1})`

The two input tensor shapes must satisfy the assertions below:

.. math::
P = \min(M, N) \\
Q = \max(M, N)

.. math::
m_i = n_i \text{ or } m_i = 1 \text{ or } n_i = 1,
\forall i \in [0, P)

- Output: :math:`Y`, a tensor with :math:`Q` dimensions, the higher dimension of the two inputs, and its shape is identical to the input with the higher dimension.


We abstract the formalization here and introduce the details below:

.. math::

Y[d_0, d_1, \cdots, d_{Q-1}] =
A[a_0, a_1, \cdots, a_{M-1}] \text{ OP } B[b_0, b_1, \cdots, b_{N-1}], \\

\forall i \in [0, Q) \wedge d_i \in [0, \max(em_i, en_i)), \\

\text{where }
a_j = d_{Q-M+j} \text{ if } d_{Q-M+j} < m_j \text{ else } 0, \forall j \in [0, M) \text{ and} \\
b_j = d_{Q-N+j} \text{ if } d_{Q-N+j} < n_j \text{ else } 0, \forall j \in [0, N) \text{ and} \\
em_i = \begin{cases}
1, & i < Q - M \\
m_{i-Q+M}, & \text{otherwise}
\end{cases}, \forall i \in [0, Q) \text{ and} \\
en_i = \begin{cases}
1, & i < Q - N \\
n_{i-Q+N}, & \text{otherwise}
\end{cases}, \forall i \in [0, Q)

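As a reading aid only (not part of the CVM sources), the indexing above can be written out in NumPy-style Python. The helper name ``broadcast_op`` and the use of ``numpy`` are illustrative assumptions; only the index arithmetic mirrors the formalization.

.. code-block:: python

   import numpy as np

   def broadcast_op(A, B, op):
       # Sketch of the formalization: both shapes are notionally expanded to
       # Q = max(M, N) dimensions by prepending 1s; a coordinate is clamped
       # to 0 on any axis where the original dimension is 1 (broadcast).
       M, N = A.ndim, B.ndim
       Q = max(M, N)
       em = (1,) * (Q - M) + A.shape
       en = (1,) * (Q - N) + B.shape
       out_shape = tuple(max(m, n) for m, n in zip(em, en))
       Y = np.empty(out_shape, dtype=A.dtype)
       for d in np.ndindex(out_shape):
           a_idx = tuple(d[Q - M + j] if d[Q - M + j] < A.shape[j] else 0
                         for j in range(M))
           b_idx = tuple(d[Q - N + j] if d[Q - N + j] < B.shape[j] else 0
                         for j in range(N))
           Y[d] = op(A[a_idx], B[b_idx])
       return Y

For ``op = lambda a, b: a + b`` this reproduces ``np.add`` with NumPy broadcasting, e.g. for input shapes ``(2, 3)`` and ``(3,)``.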


@@ -221,7 +243,6 @@ set :math:`\text{OP}` to :math:`\text{add}`.
broadcast_sub
~~~~~~~~~~~~~
set :math:`\text{OP}` to :math:`\text{sub}`.

broadcast_mul
~~~~~~~~~~~~~
@@ -272,11 +293,15 @@ We only supported 2-D convolution operator. Also alias *Group-wise Convolution*.
p \in \left[0, \text{Y_HMAX} \right) \wedge
q \in \left[0, \text{Y_WMAX} \right),

\text{where } \text{Y_HMAX} = \left\lfloor{
H+2 \cdot \text{PH}-\text{DH} \cdot (\text{KH}-1)-1 \over \text{SH}
}\right\rfloor + 1 \text{ and} \\
\text{Y_WMAX} = \left\lfloor{
W+2 \cdot \text{PW}-\text{DW} \cdot (\text{KW}-1)-1 \over \text{SW}
}\right\rfloor + 1 \text{ and} \\
OPG = OC / \text{groups, } OPG \in \mathbb N^+ \text{ since } OC \text{ mod } \text{groups} = 0\\

where the :math:`\text{kernel}` function does the 2D image convolution calculation, and the formulation is:

.. math::

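The ``kernel`` convolution formula itself is collapsed in this view. As a small illustration of the output-size constraints :math:`\text{Y_HMAX}` and :math:`\text{Y_WMAX}` shown above, the following sketch uses hypothetical names (``conv2d_out_hw`` and the PH/PW, DH/DW, KH/KW, SH/SW parameters) that mirror the symbols in the formula and are not CVM API identifiers.

.. code-block:: python

   def conv2d_out_hw(H, W, KH, KW, PH, PW, SH, SW, DH, DW):
       # Y_HMAX / Y_WMAX from the formalization:
       # floor((X + 2*P - D*(K - 1) - 1) / S) + 1 on each spatial axis.
       y_hmax = (H + 2 * PH - DH * (KH - 1) - 1) // SH + 1
       y_wmax = (W + 2 * PW - DW * (KW - 1) - 1) // SW + 1
       return y_hmax, y_wmax

For example, a 224x224 input with a 3x3 kernel, padding 1, stride 2 and dilation 1 gives ``(112, 112)``.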
@@ -322,8 +347,8 @@ Relu performs elementwise rectified linear unit function.
- Output: :math:`Y`, the same shape as :math:`X`

.. math::
Y[d_0, d_1, \cdots, d_{N-1}] = max(0, X[d_0, d_1, \cdots, d_{N-1}]), \\
\forall i \in [0, N) \wedge d_i \in [0, n_i)

max_pool2d
~~~~~~~~~~
@@ -361,7 +386,7 @@ Max_pool2d performs max pooling over every plane for each batch and channel.
\end{cases} \text{ and} \\
\text{pad}(n, i, p, q) = \begin{cases}
X[n, i, p, q], & \text{ if } p \in [0, H) \wedge q \in [0, W) \\
INT32\_MIN, & \text{otherwise}
\end{cases}

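Only the ``pad`` helper of the max_pool2d formalization is visible in this hunk. Below is a minimal sketch of that helper, assuming the NCHW layout used elsewhere in this document; ``pad_read`` is a hypothetical name, not a CVM function.

.. code-block:: python

   import numpy as np

   INT32_MIN = np.iinfo(np.int32).min

   def pad_read(X, n, i, p, q):
       # pad(n, i, p, q): out-of-range reads return INT32_MIN so that
       # they can never win the max comparison inside a pooling window.
       _, _, H, W = X.shape
       if 0 <= p < H and 0 <= q < W:
           return int(X[n, i, p, q])
       return INT32_MIN

One pooling window is then simply the maximum of ``pad_read`` over the kernel offsets.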

@@ -402,9 +427,9 @@ This operator calculates absolute value of input data.
-x, & x < 0
\end{cases},\\

\forall i \in [0, N) \wedge d_i \in [0, n_i),\\

\text{where } x \text{ denotes } X[d_0, d_1, \cdots, d_{N-1}]

cvm_precision
~~~~~~~~~~~~~
@@ -423,9 +448,9 @@ The precision operator gives how many bits the absolute value of a number takes.
1, & x = 0
\end{cases},\\

\forall i \in [0, N) \wedge d_i \in [0, n_i),\\

\text{where } x \text{ denotes } X[d_0, d_1, \cdots, d_{N-1}]

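A one-line sketch of cvm_precision as described above ("how many bits the absolute value of a number takes"). The non-zero branch of the formula is collapsed in this view, so ``abs(x).bit_length()`` is an assumption consistent with that description, not a quote of the formula.

.. code-block:: python

   def cvm_precision(x: int) -> int:
       # x == 0 is pinned to 1, as in the visible branch of the formula;
       # otherwise return the number of bits of |x| (assumed behaviour).
       return 1 if x == 0 else abs(x).bit_length()

   # cvm_precision(0) == 1, cvm_precision(5) == 3, cvm_precision(-8) == 4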

elemwise_add
Expand Down Expand Up @@ -492,9 +517,9 @@ This operator performs clip, cutting the data into a range, to the input tensor.
\text{a_min}, & x \leqslant \text{a_min}
\end{cases},\\

\forall i \in [0, N) \wedge d_i \in [0, n_i), \\

\text{where } x \text{ denotes } X[d_0, d_1, \cdots, d_{N-1}]

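The clip formalization above maps values below ``a_min`` to ``a_min`` and values above ``a_max`` to ``a_max``. A minimal NumPy sketch, illustrative only:

.. code-block:: python

   import numpy as np

   def clip(X, a_min, a_max):
       # Elementwise clamp into [a_min, a_max]; equivalent to
       # np.clip(X, a_min, a_max) for integer tensors.
       return np.minimum(np.maximum(X, a_min), a_max)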
cvm_clip
~~~~~~~~
@@ -764,6 +789,8 @@ slice_like

This operator slices the input :math:`X` to a shape that looks like the other given input ``shape_like``.

TODO: need more consideration.

*Math Formalization*

- Input: there are 2 inputs
@@ -786,13 +813,11 @@ This operator slices the input :math:`X` to a shape that looks like the other gi
.. math::
Y[d_0, d_1, \cdots, d_{N-1}] = X[d_0, d_1, \cdots, d_{N-1}], \\

\forall j \in [0, N) \wedge d_j \in \begin{cases}
[0, m_j), & j \in \text{sliced_axes} \\
[0, n_j), & j \notin \text{sliced_axes}
\end{cases},\\

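A sketch of the slice_like indexing above, assuming the second input only contributes its shape (the :math:`m_j`): on every axis in ``sliced_axes`` the output keeps indices ``[0, m_j)``, elsewhere it keeps the full ``[0, n_j)`` range of :math:`X`. The function name and argument layout are illustrative, not the CVM signature.

.. code-block:: python

   import numpy as np

   def slice_like(X, shape_like, sliced_axes):
       # Slice X down to shape_like's extent on the sliced axes only.
       slices = tuple(
           slice(0, shape_like.shape[j]) if j in sliced_axes
           else slice(0, X.shape[j])
           for j in range(X.ndim)
       )
       return X[slices]

   # slice_like(np.arange(12).reshape(3, 4), np.zeros((2, 2)), {0})
   # keeps rows [0, 2) and all 4 columns, giving shape (2, 4).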
take
~~~~

@@ -1051,7 +1076,7 @@ This operator implements the nms algorithm, finding valid bounding boxes.
\text{where } T = \text{max}\{
\text{min}(N, \text{valid_count}[b]), 0\} \text{ and} \\
I: \{ i \mid i \in [0, T) \} \to \{ i \mid i \in [0, T) \}, \\
\text {s.t. } X[b, I(i), 1] > X[b, I(j), 1] \text{ or } \\
(X[b, I(i), 1] = X[b, I(j), 1] \wedge I(i) < I(j)),
\forall 0 \leqslant i < j < T

@@ -1074,7 +1099,7 @@ This operator implements the nms algorithm, finding valid bounding boxes.
\text{OLR}(R[b, p, :], R[b, q, :]), &
\begin{array}{}
\text{force_suppress is true}\\
\text{ or } R[b, p, 0] = R[b, q, 0]
\end{array} \\[1ex]
0, & \text{otherwise}
\end{cases} \text{ and} \\
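Two pieces of the non_max_suppression formalization are visible here: the sort permutation :math:`I` (score descending, ties broken by the smaller original index) and the class-aware overlap test. The sketch below covers only those two pieces; ``iou_threshold`` and the ``olr`` overlap helper are assumptions, and the surrounding algorithm (valid_count handling, top_k, coordinate layout) is collapsed in this view.

.. code-block:: python

   def sort_permutation(scores):
       # I: indices ordered by score descending, then by original index.
       return sorted(range(len(scores)), key=lambda i: (-scores[i], i))

   def suppressed(box_p, box_q, olr, iou_threshold, force_suppress):
       # box[0] is the class id, as in R[b, p, 0]; overlap only counts
       # when force_suppress is true or the two boxes share a class.
       if force_suppress or box_p[0] == box_q[0]:
           return olr(box_p, box_q) >= iou_threshold
       return False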