
[v2 FEATURE]: Increase customizability of activation functions #878

Open · shihchengli opened this issue May 23, 2024 · 2 comments
Labels: enhancement (a new feature request) · Milestone: v2.1.0

Comments
@shihchengli (Contributor)

Is your feature request related to a problem? Please describe.
Currently, only the activation functions defined in nn/utils.py can be used, which limits the customizability of activation functions.

Discussion
We can provide the flexibility to accept both Activation and nn.Module objects as inputs for activation functions (following the format like this). Additionally, we can add an activation attribute to the neural network, similar to what PyTorch Geometric does (link). I am open to any suggestions.
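As a rough sketch of what that dual-input pattern could look like (the alias table below is a placeholder, not the actual contents of nn/utils.py):

from torch import nn

def get_activation_function(activation: str | nn.Module) -> nn.Module:
    """Accept either an alias string or an already-constructed nn.Module."""
    if isinstance(activation, nn.Module):
        return activation  # pass user-supplied modules through unchanged
    aliases = {
        "relu": nn.ReLU,
        "leakyrelu": nn.LeakyReLU,
        "prelu": nn.PReLU,
        "tanh": nn.Tanh,
        "elu": nn.ELU,
    }
    try:
        return aliases[activation.lower()]()
    except KeyError:
        raise ValueError(f"unsupported activation: {activation!r}")

# both calling conventions return an nn.Module:
get_activation_function("relu")
get_activation_function(nn.SiLU())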

@shihchengli shihchengli added the enhancement a new feature request label May 23, 2024
@kevingreenman kevingreenman added this to the v2.1.0 milestone May 23, 2024
@davidegraff (Contributor)

I don't like the design pattern of specifying both the initializer function (or an alias for it) and its parameters as inputs to a separate class. At that point, you're using composition without actually designing around it, so it's an all-around loss. For example, compare the following two design patterns using the Bar class:

from abc import ABC

class Bar(ABC):
    def __init__(self, *args, **kwargs):
        ...

class BarA(Bar): ...
class BarB(Bar): ...

def get_bar(bar_type: str) -> type[Bar]: ...
1. Implicit composition:

class Foo:
    def __init__(self, bar_type: str, bar_args, bar_kwargs):
        self.bar = get_bar(bar_type)(*bar_args, **bar_kwargs)

2. Explicit composition:

class Foo:
    def __init__(self, bar):
        self.bar = bar

I think (2) makes it significantly clearer what's going on, as it doesn't require users to cross-reference any functions or initializers. All of this is to say that I'm fine with moving away from the use of ActivationType inside core module code in favor of explicitly passing activation: nn.Module. We can move get_activation_function into the CLI, as there's really no need to restrict the activation types used inside our core modules. This was a reimplementation of v1 logic, but I don't agree with it (even if I chose to write it).
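Applied to the activation case, explicit composition could look something like this (the FFN class below is purely illustrative, not the actual chemprop API):

from torch import nn

class FFN(nn.Module):
    """Toy feed-forward block that receives its activation by composition."""

    def __init__(self, in_dim: int, out_dim: int, activation: nn.Module | None = None):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # any nn.Module works here; nothing to cross-reference in nn/utils.py
        self.activation = activation if activation is not None else nn.ReLU()

    def forward(self, x):
        return self.activation(self.linear(x))

ffn = FFN(128, 64, activation=nn.SiLU())  # custom activations just work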

@shihchengli (Contributor, Author)

Thanks for sharing your thoughts. I agree that explicitly passing the activation function into the MessagePassing and Predictor blocks is a clear way to improve both customizability and code readability. Do you disagree with this idea because you think passing a callable function into a neural network builder as an attribute is not good practice?
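For what it's worth, storing an nn.Module as an attribute is standard PyTorch practice: the module is registered as a submodule, so any learnable parameters it carries (e.g., PReLU's slope) move with .to(device) and appear in state_dict. A quick check (toy class, for illustration only):

from torch import nn

class Tiny(nn.Module):
    def __init__(self, activation: nn.Module):
        super().__init__()
        self.activation = activation  # registered as a submodule automatically

    def forward(self, x):
        return self.activation(x)

model = Tiny(nn.PReLU())
print([name for name, _ in model.named_parameters()])  # ['activation.weight']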
