This repository contains custom implementations of common loss functions and activation functions in Machine Learning. The implementations avoid external libraries such as PyTorch, so the underlying mechanics of each function stay fully visible. Minimal pure-Python sketches of each function appear after the list below.
- L1 Loss (Mean Absolute Error): Computes the mean absolute error between predicted and target values.
- L2 Loss (Mean Squared Error): Computes the mean squared error between predicted and target values.
- Binary Cross-Entropy Loss: Computes the binary cross-entropy loss between predicted probabilities and target labels.
- Categorical Cross-Entropy Loss: Computes the categorical cross-entropy loss between predicted probabilities and one-hot encoded target labels.
- ReLU (Rectified Linear Unit): Applies the rectified linear unit activation function element-wise.
- Sigmoid Function: Computes the sigmoid activation function element-wise.
- Hyperbolic Tangent (Tanh) Function: Computes the hyperbolic tangent activation function element-wise.
- Softmax Function: Computes the softmax activation function across classes.
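To make the list concrete, the sketches below show one plausible pure-Python version of each function. Names and signatures are illustrative and may differ from the notebook code. First, L1 loss as the mean of element-wise absolute differences:

```python
def l1_loss(preds, targets):
    # Mean absolute error over two equal-length sequences.
    assert len(preds) == len(targets)
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)
```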
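L2 loss is the same reduction applied to squared differences:

```python
def l2_loss(preds, targets):
    # Mean squared error over two equal-length sequences.
    assert len(preds) == len(targets)
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
```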
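Binary cross-entropy averages -(t·log(p) + (1-t)·log(1-p)) over examples. The clamping below (the `eps` value is a common but arbitrary choice, not necessarily what the notebook uses) keeps the logarithm finite when a prediction hits exactly 0 or 1:

```python
import math

def binary_cross_entropy(preds, targets, eps=1e-12):
    # preds: probabilities in (0, 1); targets: labels in {0, 1}.
    total = 0.0
    for p, t in zip(preds, targets):
        p = min(max(p, eps), 1.0 - eps)  # avoid log(0)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(preds)
```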
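Categorical cross-entropy applies the same idea per example across classes. With one-hot targets, only the true class contributes to each example's term:

```python
import math

def categorical_cross_entropy(preds, targets, eps=1e-12):
    # preds: rows of class probabilities; targets: matching one-hot rows.
    total = 0.0
    for p_row, t_row in zip(preds, targets):
        total += -sum(t * math.log(max(p, eps))
                      for p, t in zip(p_row, t_row))
    return total / len(preds)
```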
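ReLU simply zeroes out negative inputs:

```python
def relu(xs):
    # max(0, x) applied element-wise.
    return [max(0.0, x) for x in xs]
```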
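Sigmoid maps each input into (0, 1). The sign branch below is a standard stability trick (again an assumption about implementation detail, not taken from the notebook) that keeps `math.exp` from overflowing on large negative inputs:

```python
import math

def sigmoid(xs):
    # 1 / (1 + e^{-x}), written so the exponent is never large and positive.
    out = []
    for x in xs:
        if x >= 0:
            out.append(1.0 / (1.0 + math.exp(-x)))
        else:
            e = math.exp(x)
            out.append(e / (1.0 + e))
    return out
```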
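Tanh can be built from exponentials in the same spirit (Python's built-in `math.tanh` would also do), with the same sign split to avoid overflow:

```python
import math

def tanh(xs):
    # tanh(x) = (e^{2x} - 1) / (e^{2x} + 1), arranged so the
    # exponent passed to math.exp is never positive.
    out = []
    for x in xs:
        if x >= 0:
            e = math.exp(-2.0 * x)
            out.append((1.0 - e) / (1.0 + e))
        else:
            e = math.exp(2.0 * x)
            out.append((e - 1.0) / (e + 1.0))
    return out
```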
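Softmax exponentiates the logits and normalizes them to sum to 1 across classes; subtracting the maximum logit first is the usual guard against overflow:

```python
import math

def softmax(xs):
    # Exponentiate relative to the max logit, then normalize.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `softmax([1.0, 2.0, 3.0])` returns three probabilities that sum to 1, with the largest mass on the last logit.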
The implementations are provided as a Jupyter Notebook for easy experimentation and study.