
CPTNN: unofficial PyTorch implementation

Original paper: "CPTNN: CROSS-PARALLEL TRANSFORMER NEURAL NETWORK FOR TIME-DOMAIN SPEECH ENHANCEMENT"

A single-channel, time-domain speech enhancement neural network.


How to use:

Step 1: copy cptnn.py, TRANSFORMER.py, and process_for_cptnn.py into your model directory.

Step 2: import cptnn in your training framework and you are ready to go; a minimal usage sketch is shown below.
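
A minimal usage sketch follows. The class name `CPTNN` and the constructor arguments are assumptions based on the configuration notes in this README, not the repository's verified API; check cptnn.py for the exact signature and defaults.

```python
# Minimal usage sketch. `CPTNN` and its constructor arguments are assumptions
# based on this README; check cptnn.py for the real signature and defaults.
import torch
from cptnn import CPTNN

model = CPTNN(
    frame_len=512,    # placeholder: samples per segment
    hop_size=256,     # placeholder: hop between consecutive segments
    feat_dim=64,      # placeholder: internal feature dimension
    hidden_size=64,   # placeholder: transformer hidden size
    num_heads=4,      # placeholder: attention heads per transformer block
    cptm_layers=2,    # placeholder: number of cross-parallel transformer modules
)

noisy = torch.randn(1, 16000)   # (batch, samples) single-channel waveform
enhanced = model(noisy)         # enhanced time-domain waveform
```

From here the model drops into a standard PyTorch training loop with a time-domain loss on the enhanced waveform, such as L1 or SI-SNR.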

Configuration:

current parameters: 1.1M

frame_len, hop_size: control how the input waveform is split into overlapping segments

feat_dim, hidden_size, num_heads, cptm_layers: tune these hyperparameters for your task (see the sketch after this list)
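
As a rough illustration of what frame_len and hop_size do, the sketch below segments a dummy waveform with Tensor.unfold and prints the trainable parameter count for comparison with the 1.1M figure above. The values and the `CPTNN` constructor are placeholders, not the repository's actual defaults; the real segmentation lives in process_for_cptnn.py.

```python
# Sketch of frame_len / hop_size segmentation and a parameter-count check.
# All values and the CPTNN constructor are placeholders; see cptnn.py and
# process_for_cptnn.py for the actual implementation.
import torch
from cptnn import CPTNN

frame_len, hop_size = 512, 256
wav = torch.randn(1, 16000)                      # (batch, samples)
segments = wav.unfold(-1, frame_len, hop_size)   # (batch, num_segments, frame_len)
print(segments.shape)                            # e.g. torch.Size([1, 61, 512])

model = CPTNN(frame_len=frame_len, hop_size=hop_size,
              feat_dim=64, hidden_size=64, num_heads=4, cptm_layers=2)
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params / 1e6:.2f}M")   # compare with the 1.1M above
```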
