
Multi-output will only pass the first output to the score function #50

Open
frank-qcd-qk opened this issue Mar 8, 2021 · 5 comments

@frank-qcd-qk

loss_values = [loss(y) for y, loss in zip(outputs, losses)]

Say my model's output is [tf.Tensor, tf.Tensor]. After this line executes, even when I use my own scoring function, the "output" it receives is only the first tf.Tensor.

I don't think this is the expected behavior...

Sample model: https://github.com/duckietown/challenge-aido_LF-baseline-behavior-cloning/blob/master/duckieChallenger/frankModel.py
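
A minimal sketch of the kind of setup this is about (a hypothetical two-output model with one score function per output; the layer names, shapes, and the Saliency usage are illustrative assumptions, not taken from the model linked above):

import numpy as np
import tensorflow as tf
from tf_keras_vis.saliency import Saliency

# Toy two-output model; shapes and layer names are placeholders.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
out_a = tf.keras.layers.Dense(10, name="out_a")(x)
out_b = tf.keras.layers.Dense(2, name="out_b")(x)
model = tf.keras.Model(inputs, [out_a, out_b])

# One score function per output; the question is whether each one really
# receives its own output tensor, or only the first output.
scores = [lambda output: output[:, 0],   # intended for out_a
          lambda output: output[:, 1]]   # intended for out_b

saliency = Saliency(model)
maps = saliency(scores, np.random.rand(1, 64, 64, 3).astype("float32"))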

@bersbersbers

Do you base this on experiments or code analysis? losses should have the same length as self.model.outputs, so this list comprehension should return more than one element:

losses = self._get_losses_for_multiple_outputs(loss)

def _get_losses_for_multiple_outputs(self, loss):
    losses = listify(loss)
    if len(losses) == 1 and len(losses) < len(self.model.outputs):
        losses = losses * len(self.model.outputs)
    if len(losses) != len(self.model.outputs):
        raise ValueError(('The model has {} outputs, '
                          'but the number of loss-functions you passed is {}.').format(
                              len(self.model.outputs), len(losses)))
    return losses

Or what am I missing?
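
For reference, a standalone sketch of that broadcasting logic (the listify below is a stand-in written for illustration, assuming it simply wraps a non-list value in a list, like the tf_keras_vis.utils helper):

# Illustration only: how a single loss function is broadcast to all outputs.
def listify(value):
    return value if isinstance(value, list) else [value]

model_outputs = ["out_a", "out_b"]          # stand-ins for self.model.outputs
losses = listify(lambda y: y[:, 0])         # a single loss function was passed
if len(losses) == 1 and len(losses) < len(model_outputs):
    losses = losses * len(model_outputs)    # the same function, repeated per output
assert len(losses) == len(model_outputs)    # so zip(outputs, losses) pairs them all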

@frank-qcd-qk
Author

I did this based on an experiment...

But maybe I have misunderstood something; that could also happen...

I am not sure this losses handling was working, though... I am using v0.6.0 if that helps.

@bersbersbers

bersbersbers commented Mar 8, 2021

I did this based on an experiment...

In that case, a short reproducible example would be helpful, I'm pretty sure.

I am using v0.6.0 if that helps.

Note that this has not been released yet as far as I can see.

Also, you may want to look into this code to see what may be going wrong: 9ebe940, although much seems the same except for renamed variables:

scores = self._get_scores_for_multiple_outputs(score)

score_values = [score(y) for y, score in zip(outputs, scores)]

def _get_scores_for_multiple_outputs(self, score):
    scores = listify(score)
    if len(scores) == 1 and len(scores) < len(self.model.outputs):
        scores = scores * len(self.model.outputs)
    if len(scores) != len(self.model.outputs):
        raise ValueError(('The model has {} outputs, '
                          'but the number of score-functions you passed is {}.').format(
                              len(self.model.outputs), len(scores)))
    return scores

@keisen
Owner

keisen commented May 1, 2021

@frank-qcd-qk , I'm so sorry for the late reply.
@bersbersbers , thank you for the detailed explanation!

So, was your problem solved?
If not, please let me know and I will help you resolve it.
If it is resolved, please close this issue.

Thanks!

@ck37

ck37 commented Nov 29, 2022

Would it be possible to show an example of using tf-keras-vis with a multi-output model? I'm trying to use it for a 3-output model (EfficientNet backbone) but am struggling with how to define the score functions, etc.
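
Not an official answer, but a minimal sketch of how the score-function list might look for a three-output model, based on the _get_scores_for_multiple_outputs code quoted earlier (the toy model, the chosen class indices, and the Saliency usage are assumptions for illustration, not the library's documented multi-output recipe):

import numpy as np
import tensorflow as tf
from tf_keras_vis.saliency import Saliency

# Toy three-output model standing in for an EfficientNet-backed network.
inp = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(inp)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
model = tf.keras.Model(inp, [tf.keras.layers.Dense(5, name="head1")(x),
                             tf.keras.layers.Dense(3, name="head2")(x),
                             tf.keras.layers.Dense(1, name="head3")(x)])

# One score function per output, in the same order as model.outputs.
scores = [lambda out: out[:, 2],                     # class 2 of head1
          lambda out: out[:, 0],                     # class 0 of head2
          lambda out: tf.reduce_mean(out, axis=-1)]  # mean of head3

saliency = Saliency(model)
maps = saliency(scores, np.random.rand(1, 32, 32, 3).astype("float32"))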
