
Potential bug in timing embedding #1923

Open
addtt opened this issue Jan 19, 2023 · 0 comments

addtt commented Jan 19, 2023

Hi,

There might be a small bug here:

log_timescale_increment = (
    math.log(float(max_timescale) / float(min_timescale)) /
    tf.maximum(tf.to_float(num_timescales) - 1, 1))
inv_timescales = min_timescale * tf.exp(
    tf.to_float(tf.range(num_timescales)) * -log_timescale_increment)

I think in the last line the exp should be divided by min_timescale rather than multiplied, since these are inverse timescales. Usually min_timescale is 1, so in practice it doesn't matter. But if, for example, you fix max_timescale and change min_timescale, the resulting inverse timescale corresponding to max_timescale also changes: it becomes min_timescale**2 / max_timescale instead of 1 / max_timescale.
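
For concreteness, here is a small standalone check of that claim. It is only a sketch in NumPy rather than the TF 1.x code above, and the proposed/current switch is just my illustration:

import numpy as np

def inv_timescales(min_timescale, max_timescale, num_timescales, proposed=False):
    # "proposed" divides by min_timescale as suggested above;
    # otherwise this reproduces the current multiplication.
    log_timescale_increment = (
        np.log(float(max_timescale) / float(min_timescale)) /
        max(float(num_timescales) - 1, 1.0))
    scale = 1.0 / min_timescale if proposed else min_timescale
    return scale * np.exp(
        np.arange(num_timescales, dtype=np.float64) * -log_timescale_increment)

# With min_timescale = 1 both versions agree.
print(inv_timescales(1.0, 1e4, 4))                   # [1e0, ..., 1e-4]
print(inv_timescales(1.0, 1e4, 4, proposed=True))    # identical

# With min_timescale = 10 the current version ends at
# min_timescale**2 / max_timescale instead of 1 / max_timescale.
print(inv_timescales(10.0, 1e4, 4))                  # [1e1, 1e0, 1e-1, 1e-2]
print(inv_timescales(10.0, 1e4, 4, proposed=True))   # [1e-1, 1e-2, 1e-3, 1e-4]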

A simpler implementation could be roughly something like this:

inv_timescales = exp(-linspace(log(min_timescale), log(max_timescale), num_timescales))

and from this you can derive the current implementation, except with division instead of multiplication. It could be even simpler with logspace, but tf only seems to have that as an experimental function.
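
As a rough sketch of that formulation (again in NumPy purely for illustration; the actual code would use tf.linspace and tf.exp):

import numpy as np

def inv_timescales_linspace(min_timescale, max_timescale, num_timescales):
    # Evenly spaced log-timescales from log(min) to log(max), then invert.
    log_timescales = np.linspace(
        np.log(min_timescale), np.log(max_timescale), num_timescales)
    return np.exp(-log_timescales)

def inv_timescales_logspace(min_timescale, max_timescale, num_timescales):
    # Same thing via logspace: geometrically spaced timescales, then 1/x.
    timescales = np.logspace(
        np.log10(min_timescale), np.log10(max_timescale), num_timescales)
    return 1.0 / timescales

print(inv_timescales_linspace(10.0, 1e4, 4))   # [1e-1, 1e-2, 1e-3, 1e-4]
print(inv_timescales_logspace(10.0, 1e4, 4))   # same values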

Let me know if this makes sense.

Thanks a lot!
