
Pytorch converter doesn't work. #50

Open
areeb-agha opened this issue Jun 19, 2023 · 7 comments
@areeb-agha

I trained a VGG16 model on the CIFAR100 dataset in PyTorch. When I run:

import hls4ml
import plotting

config = hls4ml.utils.config_from_pytorch_model(model, granularity='layer')
print("-----------------------------------")
print("Configuration")
plotting.print_dict(config)
print("-----------------------------------")
hls_model = hls4ml.converters.convert_from_pytorch_model(
    model, hls_config=config, output_dir='model_3/hls4ml_prj', part='xcu250-figd2104-2L-e'
)

I get the error on the last line:
TypeError: cannot unpack non-iterable NoneType object
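For context, this TypeError is what Python raises whenever code tuple-unpacks a return value that turned out to be None. A minimal standalone reproduction (this is an illustration of the error mechanism, not actual hls4ml code; `parse_layer` is a hypothetical stand-in for a converter handler):

```python
def parse_layer(layer):
    # Hypothetical stand-in for a converter handler that recognizes
    # some layer types but falls through without an explicit return
    # for others, so Python implicitly returns None.
    if layer == "Conv2d":
        return "Conv2D", (3, 32, 32)

# Unpacking works for a recognized layer...
name, shape = parse_layer("Conv2d")

# ...but an unrecognized layer returns None, and unpacking None raises.
try:
    name, shape = parse_layer("Dropout")
except TypeError as err:
    print(err)  # cannot unpack non-iterable NoneType object
```

A failure like this inside the converter would be consistent with the PyTorch path silently skipping layers it does not handle, which would also explain the layer-free config shown below.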

By contrast, a pre-trained Keras VGG16 model runs through hls4ml smoothly, without any error. The cause of the error, as far as I can tell, is the config generated by config = hls4ml.utils.config_from_pytorch_model(model, granularity='layer'). Printing config gives:
{'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1, 'Strategy': 'Latency'}}
which contains no information about the layers. The Keras equivalent, config = hls4ml.utils.config_from_keras_model(model, granularity='layer'), produces the following output:


Interpreting Model
Topology:
Layer name: input_1, layer type: InputLayer, input shapes: [[None, 224, 224, 3]], output shape: [None, 224, 224, 3]
Layer name: block1_conv1, layer type: Conv2D, input shapes: [[None, 224, 224, 3]], output shape: [None, 224, 224, 64]
Layer name: block1_conv2, layer type: Conv2D, input shapes: [[None, 224, 224, 64]], output shape: [None, 224, 224, 64]
Layer name: block1_pool, layer type: MaxPooling2D, input shapes: [[None, 224, 224, 64]], output shape: [None, 112, 112, 64]
Layer name: block2_conv1, layer type: Conv2D, input shapes: [[None, 112, 112, 64]], output shape: [None, 112, 112, 128]
Layer name: block2_conv2, layer type: Conv2D, input shapes: [[None, 112, 112, 128]], output shape: [None, 112, 112, 128]
Layer name: block2_pool, layer type: MaxPooling2D, input shapes: [[None, 112, 112, 128]], output shape: [None, 56, 56, 128]
Layer name: block3_conv1, layer type: Conv2D, input shapes: [[None, 56, 56, 128]], output shape: [None, 56, 56, 256]
Layer name: block3_conv2, layer type: Conv2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 56, 56, 256]
Layer name: block3_conv3, layer type: Conv2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 56, 56, 256]
Layer name: block3_pool, layer type: MaxPooling2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 28, 28, 256]
Layer name: block4_conv1, layer type: Conv2D, input shapes: [[None, 28, 28, 256]], output shape: [None, 28, 28, 512]
Layer name: block4_conv2, layer type: Conv2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 28, 28, 512]
Layer name: block4_conv3, layer type: Conv2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 28, 28, 512]
Layer name: block4_pool, layer type: MaxPooling2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv1, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv2, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv3, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_pool, layer type: MaxPooling2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 7, 7, 512]
Layer name: flatten, layer type: Reshape, input shapes: [[None, 7, 7, 512]], output shape: [None, 25088]
Layer name: fc1, layer type: Dense, input shapes: [[None, 25088]], output shape: [None, 4096]
Layer name: fc2, layer type: Dense, input shapes: [[None, 4096]], output shape: [None, 4096]
Layer name: predictions, layer type: Dense, input shapes: [[None, 4096]], output shape: [None, 1000]
{'Model': {'Precision': 'fixed<16,6>', 'ReuseFactor': 1, 'Strategy': 'Latency', 'BramFactor': 1000000000, 'TraceOutput': False}}

Please resolve this issue.

@zyt1024

zyt1024 commented Jul 22, 2023

Hello, I also encountered this problem. Have you solved this issue?

@areeb-agha
Author

No, I am still waiting for their reply. It seems their PyTorch converter has a bug. I temporarily switched to Keras, which works fine.

@poulamiM25

> No, I am still waiting for their reply. It seems their PyTorch converter has a bug. I temporarily switched to Keras, which works fine.

Hi, did you use Vitis HLS or Vivado HLS?
Please reply.

@areeb-agha
Author

I used Vitis HLS.

@poulamiM25

Can you please explain how you ran the code on Vitis HLS? The GitHub repo is not working for Vitis HLS.

@areeb-agha
Author

Are you using PyTorch or Keras?

@poulamiM25

I am using Keras only.

3 participants