
Review and improve the way the cloud modules send their output to the ossec.log #14535

Open
CarlosRS9 opened this issue Aug 10, 2022 · 6 comments · May be fixed by #23448


CarlosRS9 commented Aug 10, 2022

Affected integration
AWS, GCloud, Azure and DockerListener

Description

We should review and update the way we are capturing the output of our cloud modules (Azure, AWS, and GCP), as well as the Docker Listener module.

To better understand why this is necessary, it is first important to know how these modules are implemented. They are made of two different components (pictured in the sketch after this list):

  • The core part, written in C, which launches the modules, captures their output and prints it to the ossec.log.
  • The module itself, written in Python, which executes the integration and writes its output to stdout.
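
For illustration only, the interaction between the two components can be pictured with the Python sketch below; the real launcher is implemented in C inside modulesd, and the wodle path and arguments shown here are hypothetical examples:

```python
import subprocess

def handle_module_output(line: str) -> None:
    # Stand-in for the C core's handling, which decides what reaches the ossec.log.
    print(line, end="")

# Launch the Python module and capture everything it writes to stdout/stderr.
proc = subprocess.Popen(
    ["/var/ossec/wodles/aws/aws-s3", "--bucket", "example-bucket"],  # hypothetical invocation
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
for line in proc.stdout:
    handle_module_output(line)
proc.wait()
```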

The issue here is that the way the output is written and sent to the ossec.log is not consistent between modules. In addition, unless debug mode is enabled (it is disabled by default), the output of the different modules is ignored, even when it contains warning or error messages.

Here is a detailed explanation of how each module currently works:

AWS

The Python module runs and its output, which is written using print statements, is captured. However, for the output to be dumped to the ossec.log, at least one of the following conditions must be met (condensed in the sketch after this list):

  • If there is an internal error in the C part, the output will be written to the ossec.log if debug mode is 1 or higher.
  • If the execution ends with error code 1, the output will be written to the ossec.log only if the `Unknown error` string is present.
  • If the execution ends with error code 2, the output will be written to the ossec.log only if the `aws.py: error:` string is present.
  • If the execution ends with error code 3 or higher, the output will be written to the ossec.log as a WARNING, regardless of whether it was an ERROR.
  • If the execution ends successfully with a return value of 0, or fails and returns a negative value, the output will be written to the ossec.log only if debug mode is set to 2.
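
For illustration, the exit-code handling above could be condensed into the following Python sketch. The real logic lives in the C core; all identifiers here are hypothetical, and the internal-error case from the first bullet is omitted for brevity:

```python
def write_to_ossec_log(level: str, message: str) -> None:
    # Stand-in for modulesd writing to the ossec.log.
    print(f"{level}: {message}")

def forward_aws_output(exit_code: int, output: str, debug_level: int) -> None:
    # Hypothetical Python rendering of the C core's current AWS filtering.
    if exit_code == 1 and "Unknown error" in output:
        write_to_ossec_log("ERROR", output)
    elif exit_code == 2 and "aws.py: error:" in output:
        write_to_ossec_log("ERROR", output)
    elif exit_code >= 3:
        # Logged as a WARNING even if the module reported an ERROR.
        write_to_ossec_log("WARNING", output)
    elif debug_level >= 2:
        # Successful runs (0) and negative return values only surface at debug 2.
        write_to_ossec_log("DEBUG", output)
    # In every other case the module's output is silently discarded.
```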

GCP

The Python module runs and its output, which is written using logging statements, is captured. However, for the output to be dumped to the ossec.log, at least one of the following conditions must be met:

  • If the debug level is 0, only CRITICAL, ERROR and WARNING messages will be written.
  • If the debug level is 1, INFO messages will be shown in addition to those previously listed.
  • If the debug level is 2, DEBUG messages will be shown in addition to those previously listed.

This is done by looking for the `WARNING`, `ERROR` and `INFO` strings in the output. If none of them is present, the output is skipped.
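
A minimal sketch of the module-side mapping, assuming a `debug_level` value read from the wodle configuration (the function and logger names are illustrative):

```python
import logging
import sys

# Illustrative mapping from the wodle debug level to Python logging levels.
LOG_LEVELS = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG}

def get_stdout_logger(debug_level: int) -> logging.Logger:
    logger = logging.getLogger("gcloud")
    handler = logging.StreamHandler(sys.stdout)
    # The level name in each line is what the C core later matches against.
    handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(LOG_LEVELS.get(debug_level, logging.WARNING))
    return logger
```

With debug level 0 this logger never emits INFO or DEBUG lines, so there is nothing for the core's string matching to find.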

Azure

The Azure module uses logging in a similar way to how the GCP module was implemented. However, the C implementation does not expect output from this module and does not process it, so nothing is written to the ossec.log at all. This is because the older implementation wrote its output directly to a custom file; the module is now set to send it to stdout, but the core part has not been updated to expect and capture this output.

Docker Listener

Its output is written using print statements and is only visible when running the module manually. The module sends its output to the analysis engine in the form of events.

Proposed solution

We do not want the modules to write directly to the ossec.log, as that task is the responsibility of modulesd, but the logic that determines whether a message should be shown for a given debug level should remain in the modules. This logic already exists. The solution would be to update the C part so that it captures the output it receives as-is, without filtering it at all, and writes it to the ossec.log.

This solution involves updating the four modules as well as the core part, and their unit tests.
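
A minimal sketch of the proposed split, with hypothetical names: the modules keep their existing debug-level filtering and write to stdout, while the core forwards whatever it reads verbatim:

```python
from typing import Callable, Iterable

def forward_module_output(
    stdout_lines: Iterable[str],
    write_to_ossec_log: Callable[[str], None],
) -> None:
    # Core side (implemented in C in reality): no string matching and no
    # exit-code special cases. Every line the module emits is forwarded as-is,
    # because the module already filtered it by debug level before printing.
    for line in stdout_lines:
        write_to_ossec_log(line)

# Example: a WARNING emitted by a module running at debug level 0 still
# reaches the log, because the module itself chose to print it.
forward_module_output(
    ["WARNING: credentials about to expire\n"],
    lambda line: print(line, end=""),  # stand-in for modulesd's logging facility
)
```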

Related Issues

Branch

@CarlosRS9 CarlosRS9 added module/aws module/gcp Google Cloud integration module/azure module/cloud monitoring Monitoring external services (AWS, Azure, GCP, O365...) module/docker reporter/framework labels Aug 10, 2022
@snaow snaow added this to the Release 4.5.0 milestone Nov 16, 2022
@snaow snaow removed this from the Release 4.5.0 milestone Dec 21, 2022
@davidjiglesias davidjiglesias added type/bug Something isn't working level/task and removed module/cloud monitoring Monitoring external services (AWS, Azure, GCP, O365...) reporter/framework labels Mar 20, 2023
@EduLeon12 EduLeon12 self-assigned this Mar 31, 2023
@EduLeon12

Issue Update

A meeting with the team has been set for 04/03 to discuss a common solution to standardize the behavior across all modules.


EduLeon12 commented Apr 3, 2023

Issue Update.

After reviewing with the team, an epic based on this issue will be created.

Next Step:

  • Review and analyze the behavior of the logging module to test a possible integration with the current modules, and check how the module reacts to different logging levels while keeping the output on stdout.


GGP1 commented May 15, 2024

Update

Rebased the epic branch to 4.9.0, resolved conflicts and opened the pull request.

The changes to the AWS module are still missing; I'm leaving the issue in progress until we decide how to proceed.


GGP1 commented May 21, 2024

Update

  • Included the AWS logging refactor made in the branch 16737-refactor-aws-logs/ into the epic branch.

  • Fixed some conflicts due to the branch being outdated.

  • Rebased the epic branch to 4.9.0 and migrated the cloud logger to the new Azure structure.


GGP1 commented May 31, 2024

Update

Rebased the epic branch to 4.9.0 after the changes in #16314, resolved multiple conflicts and made some modifications to the said issue to accommodate the new logging mechanism.


GGP1 commented Jun 3, 2024

Update

Created the issues #23841 and #23843 to introduce some enhancements to the logging messages and to fix the AWS integration test cases that were affected by the changes in the log formatting. Moving this issue to blocked until they are ready.
