
local variable 'bias_shift' referenced before assignment #178

Open
ken4647 opened this issue Feb 7, 2023 · 5 comments
ken4647 commented Feb 7, 2023

Traceback (most recent call last):
    generate_model(model=loaded,x_test=representative_dataset_gen(),name="weights/nnom_weight.h")
  File "C:\Users\Fake Bug\Desktop\modeltransfer\nn_scripts\nnom.py", line 750, in generate_model
    quantize_weights(model, per_channel_quant=per_channel_quant, name=name, format=format, layer_q_list=layer_q_list)
  File "C:\Users\Fake Bug\Desktop\modeltransfer\nn_scripts\nnom.py", line 733, in quantize_weights
    f.write('#define ' + layer.name.upper() + '_BIAS_LSHIFT '+to_cstyle(bias_shift) +'\n\n')
UnboundLocalError: local variable 'bias_shift' referenced before assignment

This is the problem I encountered when I tried to convert an h5 model (itself converted from an ONNX model) to weights.h. The cause seems to be here:

def is_shift_layer(layer):
    '''layer which can change the output encoding'''
    # FIXME: add more which will change the output shift
    if ('input' in layer.name or
        'conv2d' in layer.name or
        'conv1d' in layer.name or
        'dense' in layer.name or
        'softmax' in layer.name or
        'sigmoid' in layer.name or
        'tanh' in layer.name or
        ('add' in layer.name and 'zero' not in layer.name) or  # the name zero_padding contains 'add'
        'subtract' in layer.name or
        'multiply' in layer.name or
        ('activation' in layer.name and layer.get_config()['activation'] == 'softmax') or
        ('activation' in layer.name and layer.get_config()['activation'] == 'hard_sigmoid') or
        ('activation' in layer.name and layer.get_config()['activation'] == 'tanh') or
        ('activation' in layer.name and layer.get_config()['activation'] == 'hard_tanh') or
        is_rnn_layer(layer)
    ):
        return True
    return False
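The failure mode is easy to reproduce in isolation. The following is a hypothetical sketch (not nnom code): in quantize_weights, bias_shift is only assigned inside branches guarded by name checks like the ones above, so a layer whose name matches no keyword leaves the variable unbound when the #define line is written.

```python
# Minimal sketch of the UnboundLocalError (illustrative, not nnom code):
# bias_shift is only assigned when the layer name matches a keyword.
def write_bias_shift(layer_name):
    if "conv2d" in layer_name or "dense" in layer_name:
        bias_shift = 3  # illustrative value
    # For a name like "LAYER_0" the branch above never runs, so the
    # next line raises UnboundLocalError.
    return bias_shift

write_bias_shift("dense_1")    # works
# write_bias_shift("LAYER_0")  # raises UnboundLocalError
```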

Meanwhile, my layers are named like LAYER_0. Is that the cause?
My question is: why is a layer identified by its name attribute instead of its type() (asking as a beginner)? And is there a good way to fix this?
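One workaround is to rename the layers so that each name contains the keyword nnom's name-based checks look for. The mapping and helper below are my own sketch, not an official nnom API:

```python
# Hypothetical mapping from Keras layer class names to the lowercase
# keywords that nnom's name-based checks (is_shift_layer etc.) match on.
NNOM_KEYWORDS = {
    "Conv2D": "conv2d",
    "Conv1D": "conv1d",
    "Dense": "dense",
    "Softmax": "softmax",
    "Add": "add",
    "Subtract": "subtract",
    "Multiply": "multiply",
}

def nnom_friendly_name(class_name, index):
    """Return a unique layer name containing the keyword for its type."""
    keyword = NNOM_KEYWORDS.get(class_name, class_name.lower())
    return "%s_%d" % (keyword, index)

# e.g. an ONNX-derived model whose layers were named LAYER_0, LAYER_1:
print(nnom_friendly_name("Conv2D", 0))  # conv2d_0
print(nnom_friendly_name("Dense", 1))   # dense_1
```

Keras layer names must be set at construction time, so in practice the rename has to happen while rebuilding the model (or by editing the names in the converter script).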

ken4647 commented Feb 7, 2023

[screenshot]
Here is my h5 model.

@majianjia (Owner)
Please check whether you have enabled bias for the conv layers. Conv layers must have bias for a successful conversion; it is a requirement of the backend.
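One way to verify this before conversion is to scan the layer configs. This is a minimal sketch over plain get_config() dictionaries; the helper name is my own, not part of nnom:

```python
def layers_missing_bias(layer_configs):
    """layer_configs: iterable of (name, config) pairs, as obtained from
    layer.name and layer.get_config(). Returns the names of layers whose
    use_bias flag is explicitly False."""
    return [name for name, cfg in layer_configs
            if "use_bias" in cfg and not cfg["use_bias"]]

# e.g. with configs pulled from a Keras model:
configs = [("conv2d_1", {"use_bias": True}),
           ("conv2d_2", {"use_bias": False}),
           ("flatten", {})]
print(layers_missing_bias(configs))  # ['conv2d_2']
```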

ken4647 commented Feb 13, 2023

Thanks for your reply! I have confirmed that all biases in the model are enabled. I tried to fix the problem by changing the layer names, and after struggling for days I successfully converted my model to nnom's format. However, the results are not correct: the ONNX model is in NCHW format, while an h5 model defaults to NHWC, which causes my model to produce wrong outputs.
[screenshot]
I also want to ask: is there any other way to convert models in other formats, such as PyTorch/TFLite/ONNX/SavedModel? Or is there a good way to convert an ONNX model to an h5 or nnom model? I have used onnx2keras, which stopped being updated long ago.
Keras is simple, of course, but retraining all my models would be a really tough thing. Thanks for your reply!

ken4647 commented Feb 13, 2023

Since Keras uses the NHWC format, nnom applies conv2d, padding, and maxpool directly on the dimensions where NHWC puts the spatial axes; with NCHW data those are the wrong axes, so the computation becomes totally wrong.
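The usual remedy is to transpose both the activations and the conv weights from the ONNX layout to the Keras layout before conversion. A generic NumPy sketch (not something nnom does for you; conversion tools such as onnx2keras expose an option like change_ordering for this, but verify against their docs):

```python
import numpy as np

# ONNX activations are NCHW; Keras/nnom expect NHWC.
x_nchw = np.zeros((2, 3, 8, 8))              # N, C, H, W
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))  # N, H, W, C
assert x_nhwc.shape == (2, 8, 8, 3)

# ONNX Conv weights are (out_ch, in_ch, kH, kW);
# Keras Conv2D kernels are (kH, kW, in_ch, out_ch).
w_onnx = np.zeros((16, 3, 3, 3))
w_keras = np.transpose(w_onnx, (2, 3, 1, 0))
assert w_keras.shape == (3, 3, 3, 16)
```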

@majianjia (Owner)
majianjia commented Feb 17, 2023

I have only tested it with Keras/TF2; ONNX models are not tested. It looks like the data format is the issue.
