
Complete isolated generation of resources in AWS IT #22514

Closed
Tracked by #17788
Selutario opened this issue Mar 14, 2024 · 15 comments · Fixed by #23419
Assignees
Labels
level/subtask type/bug Something isn't working

Comments

@Selutario
Member

Selutario commented Mar 14, 2024

Description

We need to complete the resources isolation that @EduLeon12 started in wazuh/wazuh-qa#4714. These are the pending tasks according to what he described in the last update:

Due to internal team changes, the tasks needed to complete this issue based on each test module will be listed.

[!NOTE]
These tasks are not conclusive; they are only a proposal.
They might require redesigns, changes, or more work to complete.

For all modules

  • Add the resource_type tag and value to each data module.
  • Remove the profile for the configuration template of the module since the profile will be handled via ENV.
  • Add a logic block in the manage_bucket_files fixture to manage the number of files that will be created.
    • This logic should take into consideration the creation of multiple files.
    • If all tests create their files in the same bucket, no change needs to be added, since the files will be deleted during the cleanup phase of the test.
  • Test the creation of the log stream and logs inside a recently created log group.
  • Add the necessary fixtures to each test.
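The multiple-file logic described above could be sketched roughly as follows. This is a hedged proposal, not the framework's real API: the helper fixtures (upload_file_to_bucket, delete_bucket_files) and the metadata keys are assumptions.

```python
# Hypothetical sketch of the multiple-file logic for manage_bucket_files.
import pytest


def files_to_create(metadata: dict) -> int:
    """Number of files the fixture should upload for this test case."""
    # Default to a single file when the case does not request more.
    return int(metadata.get("expected_results", 1)) or 1


@pytest.fixture
def manage_bucket_files(metadata, upload_file_to_bucket, delete_bucket_files):
    bucket_name = metadata["bucket_name"]
    # Create as many files as the test case needs and record their keys.
    metadata["uploaded_files"] = [
        upload_file_to_bucket(bucket_name=bucket_name,
                              bucket_type=metadata["bucket_type"])
        for _ in range(files_to_create(metadata))
    ]
    yield
    # Cleanup phase: each test deletes the files it created, so tests
    # that share a bucket need no extra handling.
    delete_bucket_files(bucket_name=bucket_name)
```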
Discard regex
  • Ensure the discard_regex configuration is being inserted as expected (this wasn't changed and should not require a change).
  • Test the regex parameter and check the log monitor response to ensure the assertion is the expected one.
Log groups
  • Apply the same logic of the manage_bucket_files but for the messages of the log-stream.
Only logs after
  • If the file-creation logic was handled successfully, this module should not require any changes.
Parser
  • Since the metadata is missing from the data, KeyErrors will arise before the Wazuh setup phase. This behavior needs to be handled accordingly: either capture the exceptions so the test can continue to the point where the logs are handled and asserted, or change the logic of the tests.

If it is decided to manage the exception, the use of a logger is recommended.
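The exception-capture option could look roughly like this helper, which logs missing keys instead of letting the KeyError abort the test before the log-assertion phase. The helper name and logger name are illustrative, not part of the framework.

```python
# Hedged sketch: read metadata fields through a helper that logs missing
# keys instead of raising, so the test reaches the log-handling phase.
import logging

logger = logging.getLogger("aws-it")


def read_metadata_field(metadata: dict, key: str, default=None):
    try:
        return metadata[key]
    except KeyError:
        logger.warning("Missing '%s' in test metadata; using %r", key, default)
        return default
```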

Path Suffix
  • Manage the key value passed to the file creation to ensure the path is being created as expected.
Path
  • Should follow the same logic as above.
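For both modules, the key handed to the file-creation helper should place the configured suffix between the bucket prefix and the file name. A minimal sketch (build_file_key is a hypothetical helper, not the framework's API):

```python
# Illustrative key construction for the path / path-suffix cases.
def build_file_key(base_prefix: str, path_suffix: str, filename: str) -> str:
    parts = [base_prefix.strip("/"), path_suffix.strip("/"), filename]
    # Skip empty segments so a missing suffix produces no double slashes.
    return "/".join(p for p in parts if p)
```

For example, build_file_key("AWSLogs/123456789123/cloudtrail", "test-prefix", "file.log") yields "AWSLogs/123456789123/cloudtrail/test-prefix/file.log".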
Remove from bucket
  • The file_exists method was deleted from the current flow, since the intention was to remove the tests' direct interaction with the testing framework. That responsibility belongs to the fixture, so a redesign of this block will be needed if that approach is confirmed.

Example of change:

# From
assert not file_exists(filename=metadata['uploaded_file'], bucket_name=bucket_name)

# To
expected_log = "INSERT LOG WHEN THE BUCKET OR LOG STREAM IS EMPTY"

log_monitor.start(
    timeout=session_parameters.default_timeout,
    callback=event_monitor.make_aws_callback(expected_log)
)

Also, since multiple files may be present in the bucket when asserting, an empty bucket is not a guarantee that the removal was correct. In that case, the regex can be defined to look for the file using the specific key assigned in the test's metadata, and the assertion should be that callback_result is None, since no match should occur.
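A runnable sketch of that key-specific check, assuming the module's "Found new log:" debug line as the thing to match (the helper name is illustrative). The test would pass this pattern to event_monitor.make_aws_callback and assert that log_monitor.callback_result is None:

```python
import re


def removed_file_pattern(uploaded_file: str) -> str:
    """Regex matching the module's 'Found new log' line for one specific key."""
    return fr".*Found new log: .*{re.escape(uploaded_file)}.*"


pattern = removed_file_pattern("AWSLogs/123/cloudtrail/file-a.log")
# The pattern matches only the line naming the removed file's key; a line
# about any other file produces no match.
assert re.search(pattern, "DEBUG: ++ Found new log: AWSLogs/123/cloudtrail/file-a.log")
assert re.search(pattern, "DEBUG: ++ Found new log: AWSLogs/123/cloudtrail/file-b.log") is None
```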

Regions
  • This module is the one that will require the biggest change, since the current design sets the region as an environment variable while this module needs to create resources in multiple regions. Several solutions are possible:
    • Create a new fixture that creates the resource in another region, receiving the list of regions from the test's metadata.
    • Modify the resource-creation fixture to take the region as a parameter based on the content of the metadata and create the resource in that region; this will also require modifying the deletion part.
    • Create a boto3 session for every region (or extra region) set in the fixtures and manage the creation in a local scope.
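The second and third options above might be combined roughly as follows. This is a hypothetical sketch: regions_from_metadata, the fixture name, and the metadata field are assumptions, not existing framework code.

```python
# Hypothetical region-aware resource-creation fixture.
import pytest


def regions_from_metadata(metadata: dict) -> list:
    """Split the comma-separated `regions` field, defaulting to us-east-1."""
    return metadata.get("regions", "us-east-1").split(",")


@pytest.fixture
def create_test_resources_per_region(metadata):
    import boto3  # imported here so the sketch is importable without boto3

    created = []
    for region in regions_from_metadata(metadata):
        # One client per region, managed in a local scope.
        client = boto3.client("logs", region_name=region)
        # ...create the resource with this client and remember it...
        created.append((region, client))
    yield created
    # Deletion must mirror the creation, iterating over the same regions.
```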

Note

For this development it will be necessary to use 4714-generate-isolated-resources-for-aws-its as the base branch (link). This is because the PR of the previous issue has not yet been merged, but the delivery date requires that we parallelize both issues. In theory it shouldn't be a problem.

@fdalmaup
Member

Issue Update

  • Started the analysis of the developed code and proposed changes.

@fdalmaup
Member

Issue Update

Made the necessary modifications to run the test_discard_regex.py suite. Currently debugging it due to some found errors:

test_discard_regex.py
root@vagrant:/wazuh/tests/integration/test_aws# pytest -x test_discard_regex.py 
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 17 items                                                                                                                                                                                                

test_discard_regex.py .F

==================================================================================================== FAILURES =====================================================================================================
__________________________________________________________________________________ test_bucket_discard_regex[vpc_discard_regex] ___________________________________________________________________________________

configuration = {'metadata': {'bucket_name': 'wazuh-vpcflow-integration-tests-99a3fa46-todelete', 'bucket_type': 'vpcflow', 'descripti...elements': [{'disabled': {'value': 'no'}}, {'bucket': {'attributes': [...], 'elements': [...]}}], 'section': 'wodle'}]}
metadata = {'bucket_name': 'wazuh-vpcflow-integration-tests-99a3fa46-todelete', 'bucket_type': 'vpcflow', 'description': 'VPC discard regex configurations', 'discard_field': 'srcport', ...}
create_test_bucket = None, manage_bucket_files = None, load_wazuh_basic_configuration = None, set_wazuh_configuration = None, clean_s3_cloudtrail_db = None, configure_local_internal_options_function = None
truncate_monitored_files = None, restart_wazuh_function = None, file_monitoring = None

    @pytest.mark.tier(level=0)
    @pytest.mark.parametrize('configuration, metadata',
                             zip(configurator.test_configuration_template, configurator.metadata),
                             ids=configurator.cases_ids)
    def test_bucket_discard_regex(
            configuration, metadata, create_test_bucket, manage_bucket_files, load_wazuh_basic_configuration,
            set_wazuh_configuration, clean_s3_cloudtrail_db, configure_local_internal_options_function,
            truncate_monitored_files, restart_wazuh_function, file_monitoring,
    ):
        """
        description: Check that some bucket logs are excluded when the regex and field defined in <discard_regex>
                     match an event.
    
        test_phases:
            - setup:
                - Load Wazuh light configuration.
                - Apply ossec.conf configuration changes according to the configuration template and use case.
                - Apply custom settings in local_internal_options.conf.
                - Truncate wazuh logs.
                - Restart wazuh-manager service to apply configuration changes.
            - test:
                - Check in the ossec.log that a line has appeared calling the module with correct parameters.
                - Check the expected number of events were forwarded to analysisd, only logs stored in the bucket and skips
                  the ones that match with regex.
                - Check the database was created and updated accordingly.
            - teardown:
                - Truncate wazuh logs.
                - Restore initial configuration, both ossec.conf and local_internal_options.conf.
    
        wazuh_min_version: 4.6.0
    
        parameters:
            - configuration:
                type: dict
                brief: Get configurations from the module.
            - metadata:
                type: dict
                brief: Get metadata from the module.
            - load_wazuh_basic_configuration:
                type: fixture
                brief: Load basic wazuh configuration.
            - set_wazuh_configuration:
                type: fixture
                brief: Apply changes to the ossec.conf configuration.
            - clean_s3_cloudtrail_db:
                type: fixture
                brief: Delete the DB file before and after the test execution.
            - configure_local_internal_options_function:
                type: fixture
                brief: Apply changes to the local_internal_options.conf configuration.
            - truncate_monitored_files:
                type: fixture
                brief: Truncate wazuh logs.
            - restart_wazuh_daemon_function:
                type: fixture
                brief: Restart the wazuh service.
            - file_monitoring:
                type: fixture
                brief: Handle the monitoring of a specified file.
    
        assertions:
            - Check in the log that the module was called with correct parameters.
            - Check the expected number of events were forwarded to analysisd.
            - Check the database was created and updated accordingly.
    
        input_description:
            - The `configuration_bucket_discard_regex` file provides the module configuration for this test.
            - The `cases_bucket_discard_regex` file provides the test cases.
        """
        bucket_name = metadata['bucket_name']
        bucket_type = metadata['bucket_type']
        only_logs_after = metadata['only_logs_after']
        discard_field = metadata['discard_field']
        discard_regex = metadata['discard_regex']
        found_logs = metadata['found_logs']
        skipped_logs = metadata['skipped_logs']
        path = metadata['path'] if 'path' in metadata else None
    
        pattern = fr'.*The "{discard_regex}" regex found a match in the "{discard_field}" field.' \
                  ' The event will be skipped.'
    
        parameters = [
            'wodles/aws/aws-s3',
            '--bucket', bucket_name,
            '--only_logs_after', only_logs_after,
            '--discard-field', discard_field,
            '--discard-regex', discard_regex,
            '--type', bucket_type,
            '--debug', '2'
        ]
    
        if path is not None:
            parameters.insert(5, path)
            parameters.insert(5, '--trail_prefix')
    
        # Check AWS module started
        log_monitor.start(
            timeout=session_parameters.default_timeout,
            callback=event_monitor.callback_detect_aws_module_start
        )
    
        assert log_monitor.callback_result is not None, ERROR_MESSAGE['failed_start']
    
        # Check command was called correctly
        log_monitor.start(
            timeout=session_parameters.default_timeout,
            callback=event_monitor.callback_detect_aws_module_called(parameters)
        )
    
        assert log_monitor.callback_result is not None, ERROR_MESSAGE['incorrect_parameters']
    
        log_monitor.start(
            timeout=TIMEOUT[20],
            callback=event_monitor.callback_detect_event_processed_or_skipped(pattern),
            accumulations=found_logs + skipped_logs
        )
    
>       assert log_monitor.callback_result is not None, ERROR_MESSAGE['incorrect_discard_regex_message']
E       AssertionError: The AWS module did not show the correct message about discard regex or, did not process the expected amount of logs
E       assert None is not None
E        +  where None = <wazuh_testing.tools.monitors.file_monitor.FileMonitor object at 0x7fca5f6f69b0>.callback_result

test_discard_regex.py:149: AssertionError
----------------------------------------------------------------------------------------------- Captured log setup ------------------------------------------------------------------------------------------------
DEBUG    wazuh_testing:conftest.py:182 Created new bucket: type wazuh-vpcflow-integration-tests-99a3fa46-todelete
DEBUG    wazuh_testing:conftest.py:231 Uploaded file: AWSLogs/819751203818/vpcflowlogs/us-east-1/2024/03/25/819751203818_vpcflowlogs_us-east-1_fl-0754d951c16f517fa_20240325T2118Z_1376354592399921175.log to bucket "wazuh-vpcflow-integration-tests-99a3fa46-todelete"
DEBUG    wazuh_testing:conftest.py:183 Set local_internal_option to {'wazuh_modules.debug': '2', 'monitord.rotate_log': '0'}
DEBUG    wazuh_testing:conftest.py:206 Restarting all daemon
DEBUG    wazuh_testing:conftest.py:242 Initializing file to monitor to /var/ossec/logs/ossec.log
---------------------------------------------------------------------------------------------- Captured log teardown ----------------------------------------------------------------------------------------------
DEBUG    wazuh_testing:conftest.py:250 Trucanted /var/ossec/logs/ossec.log
DEBUG    wazuh_testing:conftest.py:218 Stopping all daemons
DEBUG    wazuh_testing:conftest.py:188 Restore local_internal_option to {}
============================================================================================= short test summary info =============================================================================================
FAILED test_discard_regex.py::test_bucket_discard_regex[vpc_discard_regex] - AssertionError: The AWS module did not show the correct message about discard regex or, did not process the expected amount of logs
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
===================================================================================== 1 failed, 1 passed in 86.75s (0:01:26) ======================================================================================

@fdalmaup
Member

Issue Update

The following errors were found during the test_discard_regex.py execution:

FAILED test_discard_regex.py::test_bucket_discard_regex[vpc_discard_regex] - AssertionError: The AWS module did not show the correct message about discard reg...
FAILED #  - AssertionError: The AWS module was not called with the correct parameters
FAILED test_discard_regex.py::test_cloudwatch_discard_regex_json[cloudwatch_discard_regex_json] - AssertionError: The AWS module did not show the correct mess...
ERROR test_discard_regex.py::test_cloudwatch_discard_regex_simple_text[cloudwatch_discard_regex_simple_text] - botocore.errorfactory.ResourceAlreadyExistsExce...
ERROR test_discard_regex.py::test_inspector_discard_regex[inspector_discard_regex] - KeyError: 'log_group_name'
====================================================== 3 failed, 12 passed, 2 errors in 601.10s (0:10:01) =======================================================

Analysis of each error

VPC

This error is related to how the AWS module implements the feature to fetch VPC Flow logs. It requires an active Flow Log attached to an EC2 network interface. In the current tests, these two resources are static and were created to run the tests in a predetermined environment, achieving the goal of correctly fetching the logs.
We could generate these resources for other AWS environments, but if the team wants consistency regarding test resource usage and creation, we should dynamically generate and delete them during the test run. This implies granting the test-running user more permissions in the AWS environment. Still, we consider that better than the risk of the static resources being deleted by accident.

The implementation of the functions for the VPC resources will impact every related test, not only the one for discard regex.

Cisco Umbrella

This error was related to the following reasons:

  • The generated data did not contain the field value needed to discard the log: reviewing the UmbrellaDataGenerator class in the qa-integration-framework repository, it was found that the value of the action column is always filled with Allowed, while the test case in tests/integration/test_aws/data/test_cases/discard_regex_test_module/cases_bucket_discard_regex.yaml contained the value Blocked, which would never match the expected discard log.
  • The addition of the --trail_prefix parameter generated an error when validating the parameters used for the AWS module, due to the position at which it was inserted.

These two causes were already fixed and tested. Nevertheless, it was found that modifying the found_logs and skipped_logs values does not influence the test's results. This will be further analyzed.
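The insertion-position pitfall can be reproduced with a plain list. The parameter values below are illustrative; the point is that, to end up with the flag immediately before its value, the value must be inserted first and the flag second, both at the same index.

```python
# Reproducing the --trail_prefix ordering: inserting the value first and
# then the flag at the same index leaves ['--trail_prefix', <path>]
# contiguous; reversing the two insert calls would split flag and value.
parameters = [
    'wodles/aws/aws-s3',
    '--bucket', 'wazuh-cloudtrail-integration-tests',
    '--only_logs_after', '2024-MAR-14',
    '--type', 'cloudtrail',
]
path = 'test-prefix'
parameters.insert(5, path)
parameters.insert(5, '--trail_prefix')
assert parameters[5:7] == ['--trail_prefix', 'test-prefix']
```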

Services errors

This is related to the task mentioned in the issue's description: Test the creation of the log stream and logs inside a recently created log group. The functions in charge of creating these resources require further analysis and modifications.

@fdalmaup
Member

Issue Update

The fixtures for the creation and deletion of the log group, log stream, and the CloudWatch Logs events were developed based on the previous work left. The case for the JSON events that can be obtained from the CloudWatch service was successfully tested:

# pytest /wazuh/tests/integration/test_aws/test_discard_regex.py::test_cloudwatch_discard_regex_json[cloudwatch_discard_regex_json]
====================================================================== test session starts ======================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 1 item                                                                                                                                                

wazuh/tests/integration/test_aws/test_discard_regex.py .                                                                                                  [100%]

====================================================================== 1 passed in 19.20s =======================================================================

Which can be verified in the ossec.log file:

2024/03/27 21:03:34 wazuh-modulesd:aws-s3[120205] wm_aws.c:62 at wm_aws_main(): INFO: Module AWS started
2024/03/27 21:03:34 wazuh-modulesd:aws-s3[120205] wm_aws.c:84 at wm_aws_main(): INFO: Starting fetching of logs.
2024/03/27 21:03:34 wazuh-modulesd:aws-s3[120205] wm_aws.c:171 at wm_aws_main(): INFO: Executing Service Analysis: (Service: cloudwatchlogs)
2024/03/27 21:03:34 wazuh-modulesd:aws-s3[120205] wm_aws.c:558 at wm_aws_run_service(): DEBUG: Create argument list
2024/03/27 21:03:34 wazuh-modulesd:aws-s3[120205] wm_aws.c:662 at wm_aws_run_service(): DEBUG: Launching S3 Command: wodles/aws/aws-s3 --service cloudwatchlogs --only_logs_after 2023-JUL-03 --regions us-east-1 --aws_log_groups wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete --discard-field message --discard-regex .*event.*number.*0 --debug 2
2024/03/27 21:03:37 wazuh-modulesd:aws-s3[120205] wm_aws.c:703 at wm_aws_run_service(): DEBUG: Service: cloudwatchlogs  -  OUTPUT: DEBUG: +++ Debug mode on - Level: 2
DEBUG: +++ Getting alerts from "us-east-1" region.
DEBUG: Generating default configuration for retries: mode standard - max_attempts 10
DEBUG: only logs: 1688342400000
DEBUG: +++ Table does not exist; create
DEBUG: Getting log streams for "wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete" log group
DEBUG: Found "wazuh-cloudwatchlogs-integration-tests-stream-56d61c02-todelete" log stream in wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete
DEBUG: Getting data from DB for log stream "wazuh-cloudwatchlogs-integration-tests-stream-56d61c02-todelete" in log group "wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete"
DEBUG: Token: "None", start_time: "None", end_time: "None"
DEBUG: Getting CloudWatch logs from log stream "wazuh-cloudwatchlogs-integration-tests-stream-56d61c02-todelete" in log group "wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete" using token "None", start_time "1688342400000" and end_time "None"
DEBUG: +++ Sending events to Analysisd...
DEBUG: +++ The ".*event.*number.*0" regex found a match in the "message" field. The event will be skipped.
DEBUG: The message is "{"message":"Test event number 1"}"
DEBUG: The message is "{"message":"Test event number 2"}"
DEBUG: +++ The ".*event.*number.*0" regex found a match in the "message" field. The event will be skipped.
DEBUG: The message is "{"message":"Test event number 1"}"
DEBUG: The message is "{"message":"Test event number 2"}"
DEBUG: +++ The ".*event.*number.*0" regex found a match in the "message" field. The event will be skipped.
DEBUG: The message is "{"message":"Test event number 1"}"
DEBUG: The message is "{"message":"Test event number 2"}"
DEBUG: +++ Sent 6 events to Analysisd
DEBUG: Getting CloudWatch logs from log stream "wazuh-cloudwatchlogs-integration-tests-stream-56d61c02-todelete" in log group "wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete" using token "f/38169362378745667072970509643223859822310145595188314114/s", start_time "1688342400000" and end_time "None"
DEBUG: +++ There are no new events in the "wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete" group
DEBUG: Saving data for log group "wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete" and log stream "wazuh-cloudwatchlogs-integration-tests-stream-56d61c02-todelete".
DEBUG: The saved values are "{'token': 'f/38169362378756817443500609668052528088467778387149029375/s', 'start_time': 1688342400000, 'end_time': 1711573404339}"
DEBUG: Purging the BD
DEBUG: Getting log streams for "wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete" log group
DEBUG: Found "wazuh-cloudwatchlogs-integration-tests-stream-56d61c02-todelete" log stream in wazuh-cloudwatchlogs-integration-tests-56d61c02-todelete
DEBUG: committing changes and closing the DB

The methods will be extended and enhanced for the simple text case. Also, it remains to be checked how the found_logs information modifies the expected results of the tests.

@fdalmaup
Member

fdalmaup commented Apr 4, 2024

Issue Update

The CloudWatch Logs simple-text discard regex test was successfully modified:

root@vagrant:/# pytest wazuh/tests/integration/test_aws/test_discard_regex.py::test_cloudwatch_discard_regex_simple_text[cloudwatch_discard_regex_simple_text]
====================================================================== test session starts ======================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 1 item                                                                                                                                                

wazuh/tests/integration/test_aws/test_discard_regex.py .                                                                                                  [100%]

====================================================================== 1 passed in 22.43s =======================================================================

The next task was modifying and adding the necessary functions to generate the Inspector Classic service findings. According to the AWS docs, a finding is a detailed report about a vulnerability that affects one of the user's AWS resources. Therefore, a resource is needed to run the report against in order to generate findings. In wazuh/wazuh-qa#3345, a template was generated with that objective, and the report was then run against an EC2 instance with a determined tag (one matching the .*inspector-integration-test.* regex) to have findings for the test. So, if we want to generate new findings for each test run, we would need available resources to obtain the reports from. This approach needs to be discussed with the team, since it would involve new costs.

The alternative is to follow the first approach regarding the Inspector service, having previously obtained findings in the service. These do not generate the race condition that was the cause of wazuh/wazuh-qa#4714 since new resources were never inserted into the service.

@fdalmaup
Member

fdalmaup commented Apr 5, 2024

Issue Update

After discussing with the team, it was decided that trying to generate new Inspector findings would be overkill for the current development, since it would require setting up an EC2 instance and an Inspector template and running the assessment to obtain new findings in each run. This would make the test slower and consume more resources, and the existing findings do not carry the race-condition risk present in the bucket files. Therefore, new findings were generated in the dev environment in order to be able to test the Inspector integration there. The test results follow:

# pytest wazuh/tests/integration/test_aws/test_discard_regex.py::test_inspector_discard_regex[inspector_discard_regex]
====================================================================== test session starts ======================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 1 item                                                                                                                                                

wazuh/tests/integration/test_aws/test_discard_regex.py .                                                                                                  [100%]

====================================================================== 1 passed in 31.65s =======================================================================

VPC resources generation

boto3 methods and resources management in AWS

EC2 client methods will be used to generate and delete the Flow Logs.

The flow logs can be generated for network interfaces, as suggested in our documentation, but this would require creating a VPC, a subnet, and a network interface before proceeding with the flow log.

Therefore, since flow logs can also be created for VPCs directly, we could create one for each run, together with an associated flow log id that would match the file to upload to the S3 bucket. This approach has already been tested:


# /var/ossec/wodles/aws/aws-s3 --bucket wazuh-aws-wodle-vpcflow --only_logs_after 2024-APR-03 --type vpcflow --debug 2
DEBUG: +++ Debug mode on - Level: 2
DEBUG: Generating default configuration for retries: mode standard - max_attempts 10
DEBUG: +++ Table does not exist; create
DEBUG: +++ Working on 123456789123 - us-east-1
DEBUG: +++ Marker: AWSLogs/123456789123/vpcflowlogs/us-east-1/2024/04/03
DEBUG: ++ Found new log: AWSLogs/123456789123/vpcflowlogs/us-east-1/2024/04/04/123456789123_vpcflowlogs_us-east-1_fl-0b8ac521ebebf340f_20240404T0000Z_ce6176d8.log.gz
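A minimal sketch of this approach. create_vpc, create_flow_log, delete_flow_logs, and delete_vpc are real boto3 EC2 client methods; the helper names, the CIDR block, and the key-building format (inferred from the debug output above) are assumptions, not the final implementation.

```python
# Sketch: create a throwaway VPC with an S3-destined flow log, build the
# matching S3 key for the uploaded file, and tear the resources down.
def create_vpc_with_flow_log(ec2, bucket_arn: str):
    """Create a VPC plus an S3-destined flow log; return their ids."""
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/24")["Vpc"]["VpcId"]
    response = ec2.create_flow_log(
        ResourceIds=[vpc_id],
        ResourceType="VPC",
        TrafficType="ALL",
        LogDestinationType="s3",
        LogDestination=bucket_arn,
    )
    return vpc_id, response["FlowLogIds"][0]


def delete_vpc_with_flow_log(ec2, vpc_id: str, flow_log_id: str) -> None:
    """Tear down the resources created for the test run."""
    ec2.delete_flow_logs(FlowLogIds=[flow_log_id])
    ec2.delete_vpc(VpcId=vpc_id)


def vpc_flow_log_key(account_id: str, region: str, flow_log_id: str,
                     date: str, suffix: str) -> str:
    """Build the S3 key of the file to upload so it matches the flow log id."""
    year, month, day = date.split("-")
    return (f"AWSLogs/{account_id}/vpcflowlogs/{region}/{year}/{month}/{day}/"
            f"{account_id}_vpcflowlogs_{region}_{flow_log_id}_{suffix}.log.gz")
```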

Inclusion of the required methods in the current code

Currently, the FLOW_LOG_ID constant is present in the qa-integration-framework to upload the VPC log file. Due to the approach to be taken, this value will become variable, dependent on the response given by the previously mentioned method that creates the flow log.

An analysis of where the creation and deletion of the necessary resources should take place is ongoing, since we should not break the consistency of the code already developed or assign responsibilities to sections of the tests or the framework that do not correspond to them.

@fdalmaup
Member

fdalmaup commented Apr 9, 2024

Issue Update

  • Some new functions are being added to the qa-integration-framework/src/wazuh_testing/modules/aws/utils.py file to create a VPC and its corresponding Flow Log.
  • The manage_bucket_files fixture is being modified to handle the VPC bucket file and the management of the above-mentioned resources.

@fdalmaup
Member

fdalmaup commented Apr 10, 2024

Issue Update

  • The development of the new VPC-related functions and methods was successful; the tests passed:
root@vagrant:/# pytest /wazuh/tests/integration/test_aws/test_discard_regex.py
====================================================================== test session starts ======================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 17 items                                                                                                                                              

wazuh/tests/integration/test_aws/test_discard_regex.py .................                                                                                  [100%]

================================================================ 17 passed in 601.95s (0:10:01) =================================================================
  • The behavior related to the found_logs and skipped_logs metadata fields was reviewed. These fields are passed to the FileMonitor.start method as part of the accumulations argument. According to the method's description, the accumulations parameter is used to stop the monitoring before the timeout is reached; e.g., if the method finds 2 matches and the argument passed for accumulations is 2, it stops monitoring and returns.
    Therefore, the values defined for the found_logs and skipped_logs fields do not seem to determine the direct success or failure of the tests unless the value 0 is used. That case fails because of the assert log_monitor.callback_result is not None line in the test: callback_result is set in the FileMonitor._match method and updated on each iteration inside start, but when accumulations equals the initial match count (0), no new matching value is ever saved in callback_result.
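The described behavior can be modeled with a toy version of the monitoring loop. This is a simplification for illustration only, not the real FileMonitor code:

```python
# Toy model: monitoring stops once `accumulations` matches were found, and
# callback_result holds the last match (None when nothing matched before
# the stop condition). With accumulations=0 the loop stops immediately,
# which is why a 0 in found_logs/skipped_logs makes the assertion fail.
def monitor(lines, callback, accumulations):
    matches, callback_result = 0, None
    for line in lines:
        if matches >= accumulations:
            break
        result = callback(line)
        if result is not None:
            callback_result = result
            matches += 1
    return callback_result
```

For instance, with a callback matching lines that contain "skip", two accumulations stop the loop after the second match, while zero accumulations stop it before processing any line.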

@fdalmaup
Member

fdalmaup commented Apr 11, 2024

Issue Update

  • It was found that the tests were not correctly checking the skipping of logs because they used the callback_detect_event_processed_or_skipped method. This method checks whether a given log follows the given discard-message pattern or matches the callback_detect_event_processed log type; as a result, the log monitor did not differentiate between processed and discarded logs. The solution was to separate the monitoring of these logs by changing callback_detect_event_processed_or_skipped to callback_detect_event_skipped and using skipped_logs as the number of match accumulations for it.
root@vagrant:/# pytest -x /wazuh/tests/integration/test_aws/test_discard_regex.py
====================================================================== test session starts ======================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 17 items                                                                                                                                              

wazuh/tests/integration/test_aws/test_discard_regex.py .................                                                                                  [100%]

================================================================ 17 passed in 392.74s (0:06:32) =================================================================
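The callback split described above can be sketched with two independent regex callbacks. The patterns and log lines below are assumptions for illustration; the real callbacks and patterns belong to the framework's event_monitor module.

```python
import re

# Hypothetical patterns; the real ones live in the qa-integration-framework.
PROCESSED_RE = re.compile(r"Found new log: .+")
SKIPPED_RE = re.compile(r"Skipping event: .+ discarded")

def callback_detect_event_processed(line):
    """Match only processed-event logs."""
    match = PROCESSED_RE.search(line)
    return match.group(0) if match else None

def callback_detect_event_skipped(line):
    """Match only discarded-event logs."""
    match = SKIPPED_RE.search(line)
    return match.group(0) if match else None

lines = [
    "DEBUG: Found new log: s3://bucket/log1",
    "DEBUG: Skipping event: log2 discarded",
]
# Each callback now matches only its own log type, so processed and
# skipped events can be counted with separate accumulations arguments.
processed = [line for line in lines if callback_detect_event_processed(line)]
skipped = [line for line in lines if callback_detect_event_skipped(line)]
```

With the combined callback, both lines above would have counted toward the same accumulation total, which is exactly the ambiguity the fix removes.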
  • The next modifications were carried out on test_log_groups.py. The tests were correctly set up with the previously modified fixtures, and the run gave the following results:
test_log_groups.py
root@vagrant:/# pytest -x /wazuh/tests/integration/test_aws/test_log_groups.py
====================================================================== test session starts ======================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 2 items                                                                                                                                               

wazuh/tests/integration/test_aws/test_log_groups.py F

=========================================================================== FAILURES ============================================================================
_____________________________________________________ test_log_groups[cloudwatchlogs_log_groups_with_data] ______________________________________________________

configuration = {'metadata': {'description': 'CloudWatch log groups configurations', 'expected_results': 3, 'log_group_name': 'wazuh-c...lements': [{'disabled': {'value': 'no'}}, {'service': {'attributes': [...], 'elements': [...]}}], 'section': 'wodle'}]}
metadata = {'description': 'CloudWatch log groups configurations', 'expected_results': 3, 'log_group_name': 'wazuh-cloudwatchlogs...og-group-b18a8eeb-todelete', 'log_stream_name': 'wazuh-cloudwatchlogs-integration-tests-stream-b18a8eeb-todelete', ...}
create_test_log_group = None, create_test_log_stream = None, manage_log_group_events = None, load_wazuh_basic_configuration = None
set_wazuh_configuration = None, clean_aws_services_db = None, configure_local_internal_options_function = None, truncate_monitored_files = None
restart_wazuh_function = None, file_monitoring = None

    @pytest.mark.tier(level=0)
    @pytest.mark.parametrize('configuration, metadata',
                             zip(configurator.test_configuration_template, configurator.metadata),
                             ids=configurator.cases_ids)
    def test_log_groups(
            configuration, metadata, create_test_log_group, create_test_log_stream, manage_log_group_events,
            load_wazuh_basic_configuration, set_wazuh_configuration, clean_aws_services_db,
            configure_local_internal_options_function, truncate_monitored_files, restart_wazuh_function, file_monitoring,
    ):
        """
        description: Only the events for the specified log_group are processed.
        test_phases:
            - setup:
                - Load Wazuh light configuration.
                - Apply ossec.conf configuration changes according to the configuration template and use case.
                - Apply custom settings in local_internal_options.conf.
                - Truncate wazuh logs.
                - Restart wazuh-manager service to apply configuration changes.
            - test:
                - Check in the ossec.log that a line has appeared calling the module with correct parameters.
                - If a region that does not exist was specified, make sure that a message is displayed in the ossec.log
                  warning the user.
                - Check the expected number of events were forwarded to analysisd, only logs stored in the bucket
                  for the specified region.
                - Check the database was created and updated accordingly.
            - teardown:
                - Truncate wazuh logs.
                - Restore initial configuration, both ossec.conf and local_internal_options.conf.
                - Delete the uploaded file.
        wazuh_min_version: 4.6.0
        parameters:
            - configuration:
                type: dict
                brief: Get configurations from the module.
            - metadata:
                type: dict
                brief: Get metadata from the module.
            - create_log_stream:
                type: fixture
                brief: Create a log stream with events for the day of execution.
            - load_wazuh_basic_configuration:
                type: fixture
                brief: Load basic wazuh configuration.
            - set_wazuh_configuration:
                type: fixture
                brief: Apply changes to the ossec.conf configuration.
            - clean_aws_services_db:
                type: fixture
                brief: Delete the DB file before and after the test execution.
            - configure_local_internal_options_function:
                type: fixture
                brief: Apply changes to the local_internal_options.conf configuration.
            - truncate_monitored_files:
                type: fixture
                brief: Truncate wazuh logs.
            - restart_wazuh_daemon_function:
                type: fixture
                brief: Restart the wazuh service.
            - file_monitoring:
                type: fixture
                brief: Handle the monitoring of a specified file.
        assertions:
            - Check in the log that the module was called with correct parameters.
            - Check the expected number of events were forwarded to analysisd.
            - Check the database was created and updated accordingly, using the correct path for each entry.
        input_description:
            - The `configuration_regions` file provides the module configuration for this test.
            - The `cases_regions` file provides the test cases.
        """
        service_type = metadata['service_type']
        log_group_names = metadata['log_group_name']
        expected_results = metadata['expected_results']
    
        parameters = [
            'wodles/aws/aws-s3',
            '--service', service_type,
            '--only_logs_after', '2023-JAN-12',
            '--regions', 'us-east-1',
            '--aws_log_groups', log_group_names,
            '--debug', '2'
        ]
    
        # Check AWS module started
        log_monitor.start(
            timeout=session_parameters.default_timeout,
            callback=event_monitor.callback_detect_aws_module_start
        )
    
        assert log_monitor.callback_result is not None, ERROR_MESSAGE['failed_start']
    
        # Check command was called correctly
        log_monitor.start(
            timeout=session_parameters.default_timeout,
            callback=event_monitor.callback_detect_aws_module_called(parameters)
        )
    
        if expected_results:
            log_monitor.start(
                timeout=TIMEOUT[20],
                callback=event_monitor.callback_detect_service_event_processed(expected_results, service_type),
                accumulations=len(log_group_names.split(','))
            )
        else:
            log_monitor.start(
                timeout=TIMEOUT[10],
                callback=event_monitor.make_aws_callback(pattern=fr"{NON_EXISTENT_SPECIFIED_LOG_GROUPS}")
            )
    
            assert log_monitor.callback_result is not None, ERROR_MESSAGE['incorrect_no_existent_log_group']
    
        assert path_exist(path=AWS_SERVICES_DB_PATH)
    
        if expected_results:
            log_group_list = log_group_names.split(",")
            for row in get_multiple_service_db_row(table_name='cloudwatch_logs'):
>               assert row.aws_log_group in log_group_list
E               AssertionError: assert 'wazuh-cloudwatchlogs-integration-tests' in ['wazuh-cloudwatchlogs-integration-tests-b18a8eeb-todelete', 'temporary-log-group-b18a8eeb-todelete']
E                +  where 'wazuh-cloudwatchlogs-integration-tests' = ServiceCloudWatchRow(aws_region='us-east-1', aws_log_group='wazuh-cloudwatchlogs-integration-tests', aws_log_stream='w...token='f/38198203508651813109938003761325147006176055714992062463/s', start_time=1673481600000, end_time=1673481600000).aws_log_group

wazuh/tests/integration/test_aws/test_log_groups.py:151: AssertionError
---------------------------------------------------------------------- Captured log setup -----------------------------------------------------------------------
DEBUG    wazuh_testing:conftest.py:315 Created log group: wazuh-cloudwatchlogs-integration-tests-b18a8eeb-todelete
DEBUG    wazuh_testing:conftest.py:315 Created log group: temporary-log-group-b18a8eeb-todelete
DEBUG    wazuh_testing:conftest.py:360 Created log stream wazuh-cloudwatchlogs-integration-tests-stream-b18a8eeb-todelete within log group wazuh-cloudwatchlogs-integration-tests-b18a8eeb-todelete
DEBUG    wazuh_testing:conftest.py:360 Created log stream wazuh-cloudwatchlogs-integration-tests-stream-b18a8eeb-todelete within log group temporary-log-group-b18a8eeb-todelete
DEBUG    wazuh_testing:conftest.py:183 Set local_internal_option to {'wazuh_modules.debug': '2', 'monitord.rotate_log': '0'}
DEBUG    wazuh_testing:conftest.py:206 Restarting all daemon
DEBUG    wazuh_testing:conftest.py:242 Initializing file to monitor to /var/ossec/logs/ossec.log
----------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------
INFO     wazuh_testing:db_administrator.py:19 Connection established with /var/ossec/wodles/aws/aws_services.db
--------------------------------------------------------------------- Captured log teardown ---------------------------------------------------------------------
DEBUG    wazuh_testing:conftest.py:250 Trucanted /var/ossec/logs/ossec.log
DEBUG    wazuh_testing:conftest.py:218 Stopping all daemons
DEBUG    wazuh_testing:conftest.py:188 Restore local_internal_option to {'wazuh_modules.debug': '2\n', 'monitord.rotate_log': '0\n'}
==================================================================== short test summary info ====================================================================
FAILED wazuh/tests/integration/test_aws/test_log_groups.py::test_log_groups[cloudwatchlogs_log_groups_with_data] - AssertionError: assert 'wazuh-cloudwatchlog...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
====================================================================== 1 failed in 44.72s =======================================================================

The failure is related to hardcoded values in the services database, which contains the log group names defined in the qa-integration-framework/src/wazuh_testing/constants/aws.py file:

PERMANENT_CLOUDWATCH_LOG_GROUP = 'wazuh-cloudwatchlogs-integration-tests'
TEMPORARY_CLOUDWATCH_LOG_GROUP = 'temporary-log-group'
FAKE_CLOUDWATCH_LOG_GROUP = 'fake-log-group'

A solution is being studied.
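One possible direction (a sketch of the problem, not the implemented fix) is to derive the expected names from a uniquely suffixed builder shared by the fixtures and the assertions, so the database rows are compared against the names actually created for the run instead of the hardcoded constants. The helper name below is hypothetical.

```python
import uuid

def unique_resource_name(base, suffix=None):
    """Build a per-run resource name such as
    '<base>-<8-hex-char suffix>-todelete', matching the naming pattern
    visible in the captured fixture logs above. Hypothetical helper."""
    suffix = suffix or uuid.uuid4().hex[:8]
    return f"{base}-{suffix}-todelete"

# With a shared suffix, fixtures and assertions agree on the name:
name = unique_resource_name("wazuh-cloudwatchlogs-integration-tests", "b18a8eeb")
```

If both the fixture that creates the log group and the test that queries `cloudwatch_logs` rows use the same generated name, the `assert row.aws_log_group in log_group_list` check no longer collides with the static constants.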

@fdalmaup

fdalmaup commented Apr 12, 2024

Issue Update

  • Modifications made for test_log_groups.py. The resource_type value is now used to determine whether resources need to be generated. This was decided after checking a use case where an error is expected due to a nonexistent log group.
root@vagrant:/# pytest -x /wazuh/tests/integration/test_aws/test_log_groups.py
====================================================================== test session starts ======================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 2 items                                                                                                                                               

wazuh/tests/integration/test_aws/test_log_groups.py ..                                                                                                    [100%]

====================================================================== 2 passed in 55.00s =======================================================================
  • Continuing with test_only_logs_after.py, the test presents a series of cases where the only_logs_after parameter behavior is checked both for buckets and services:
    • when no parameter is used
    • when the parameter is used
    • multiple calls with different parameter values

Given how the parameter works, it must be possible to upload files with different dates so that the module retrieves them accordingly. Added to this is the need for variety in the detected logs, identified when the modifications for the discard_regex tests were made. Therefore, development will continue with the necessary modifications to the qa-integration-framework functions and then with the tests. To achieve this, the rest of the tests will be reviewed to determine other modifications not yet contemplated.
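The required selection behavior, keeping only the uploaded files whose date is on or after only_logs_after, can be sketched with a small key filter. The AWSLogs-style key layout and the helper name are assumptions for illustration.

```python
from datetime import datetime

def keys_after(keys, only_logs_after):
    """Return keys whose embedded YYYY/MM/DD path segments are on or
    after `only_logs_after` (a datetime.date). The key layout is an
    assumption modeled on AWSLogs-style prefixes."""
    selected = []
    for key in keys:
        parts = key.split("/")
        # .../<year>/<month>/<day>/<file> at the end of the key
        year, month, day = parts[-4], parts[-3], parts[-2]
        key_date = datetime(int(year), int(month), int(day)).date()
        if key_date >= only_logs_after:
            selected.append(key)
    return selected

keys = [
    "AWSLogs/123/CloudTrail/us-east-1/2023/01/11/a.log",
    "AWSLogs/123/CloudTrail/us-east-1/2023/01/12/b.log",
    "AWSLogs/123/CloudTrail/us-east-1/2023/02/01/c.log",
]
recent = keys_after(keys, datetime(2023, 1, 12).date())
```

A fixture able to upload files under several such dated prefixes would let the tests exercise all three only_logs_after scenarios (absent, set, and multiple calls) against isolated resources.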

@fdalmaup

fdalmaup commented May 9, 2024

Issue Update

  • Currently adapting the test_only_logs_after tests, specifically test_bucket_multiple_calls for the VPCFlow type. This case is failing because the module checks for logs for every available flow log ID: the test expects only one No logs to process for... message but receives up to 115. The previous test cases pass successfully.

@fdalmaup

fdalmaup commented May 10, 2024

Issue Update

test_only_logs_after

Finished adapting the tests for the only_logs_after parameter.

========================================================================================= short test summary info =========================================================================================
FAILED wazuh/tests/integration/test_aws/test_only_logs_after.py::test_bucket_multiple_calls[vpc_only_logs_after_multiple_calls] - wazuh_testing.modules.aws.utils.OutputAnalysisError: Some logs may hav...
========================================================================== 1 failed, 44 passed, 2 xfailed in 1550.11s (0:25:50) ===========================================================================

The error in the test_bucket_multiple_calls test for the VPCFlow type is due to how the analyze_command_output function, used to check the logs, works.

https://github.com/wazuh/qa-integration-framework/blob/3a1f81e28f99d43e77d83b82f14311484f84691f/src/wazuh_testing/modules/aws/utils.py#L624-L655

It expects the exact number of logs, but due to the varying number of flow log IDs that may be obtained from the AWS environment, there is no certainty that the passed value will correspond to the number of logs the module prints while iterating over the available flow log IDs.
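A tolerant check, counting matches and accepting at least the expected amount rather than an exact count, would avoid this failure when the flow log ID count varies. The following is a sketch under that assumption; the framework's analyze_command_output has its own interface and error type.

```python
import re

def analyze_output(output, pattern, min_expected):
    """Count output lines matching `pattern` and accept any count
    >= min_expected. Illustrative replacement for an exact-count
    comparison; not the framework's real analyze_command_output."""
    matched = [line for line in output.splitlines() if re.search(pattern, line)]
    if len(matched) < min_expected:
        raise AssertionError(
            f"Expected at least {min_expected} matches, got {len(matched)}"
        )
    return len(matched)

# One "No logs to process" line per flow log ID, as seen in the failure:
output = "\n".join(
    f"No logs to process for flow log ID fl-{i:03d}" for i in range(115)
)
count = analyze_output(output, r"No logs to process", min_expected=1)
```

The trade-off is that a lower bound cannot detect spurious extra matches, so it only fits messages whose repetition depends on environment-dependent resource counts.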

The test_cloudwatch_multiple_calls test fails because of how the test case is designed and a bug we found in the put_log_events method used to upload events into a defined log stream, already mentioned in wazuh/qa-integration-framework#133 (comment). The test first checks that no log is fetched without setting the only_logs_after parameter. However, since the framework method always uploads events for the execution date, the expected_results number of events is fetched, causing the failure.
The test was marked as expected to fail, and the corresponding issue to analyze possible modifications will be opened.

test_parser

Adapted the test_parser tests; since these do not create resources, the changes were straightforward.

========================================================================================= short test summary info =========================================================================================
FAILED wazuh/tests/integration/test_aws/test_parser.py::test_invalid_values_in_bucket[parser_invalid_name_in_bucket] - AssertionError: The AWS module did not show the expected message about invalid value
================================================================================ 1 failed, 25 passed in 259.16s (0:04:19) =================================================================================

The failing test is being analyzed. When run in isolation or as part of the test_invalid_values_in_bucket group, it does not fail:

/# pytest /wazuh/tests/integration/test_aws/test_parser.py::test_invalid_values_in_bucket[parser_invalid_name_in_bucket]
=========================================================================================== test session starts ===========================================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 1 item                                                                                                                                                                                          

wazuh/tests/integration/test_aws/test_parser.py .                                                                                                                                                   [100%]

=========================================================================================== 1 passed in 16.74s ============================================================================================
# pytest /wazuh/tests/integration/test_aws/test_parser.py::test_invalid_values_in_bucket
=========================================================================================== test session starts ===========================================================================================
platform linux -- Python 3.10.12, pytest-7.1.2, pluggy-1.4.0
rootdir: /wazuh/tests/integration, configfile: pytest.ini
plugins: metadata-3.1.1, html-3.1.1
collected 7 items                                                                                                                                                                                         

wazuh/tests/integration/test_aws/test_parser.py .......                                                                                                                                             [100%]

====================================================================================== 7 passed in 88.46s (0:01:28) =======================================================================================

@fdalmaup

fdalmaup commented May 13, 2024

Issue Update

  • Fixed the failing test from test_parser. Some conditions related to the order of the test cases made it fail when executing the whole set.

  • Finished adapting test_path, test_path_suffix, test_regions and test_remove_from_bucket.

    • The test cases from test_regions that try to generate resources for nonexistent regions are failing because the expected_results value is 0, so the module does not even find the folder that should contain the nonexistent region. A possible solution is being analyzed for these cases.
    • The VPC test cases related to the normal execution of the module with the region parameter are failing. The cause is being investigated.
  • A solution for the test_only_logs_after.py::test_bucket_multiple_calls[vpc_only_logs_after_multiple_calls] test case is being developed to ignore the multiple logs returned by the module when using the vpcflow type.

  • Every test was launched to measure the total time required. The results were:

========================================================================================= short test summary info =========================================================================================
FAILED wazuh/tests/integration/test_aws/test_only_logs_after.py::test_bucket_multiple_calls[vpc_only_logs_after_multiple_calls] - wazuh_testing.modules.aws.utils.OutputAnalysisError: Some logs may hav...

FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[cloudtrail_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[vpc_region_with_data] - AssertionError: The AWS module did not process the expected number of events
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[vpc_regions_with_data] - AssertionError: The AWS module did not process the expected number of events
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[vpc_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[config_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[alb_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[clb_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[nlb_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region


========================================================== 9 failed, 185 passed, 3 skipped, 2 xfailed, 1 warning in 5315.33s (1:28:35) ===========================================================

@fdalmaup

fdalmaup commented May 14, 2024

Issue Update

  • Fixed the problem with wazuh/tests/integration/test_aws/test_only_logs_after.py::test_bucket_multiple_calls[vpc_only_logs_after_multiple_calls]. In the process, it was found that the flow logs created for the VPC tests were not being deleted as expected when deleting the VPC. This was fixed in wazuh/qa-integration-framework@9615ed8, and the extra flow logs were deleted from the register. Since this scenario may happen again in the future, the modifications to the test_bucket_multiple_calls test are necessary, even though they make the test more complex by adding multiple conditionals depending on the bucket type test case being executed.
  • The test docstrings were enhanced, adding descriptions for the added fixtures.
  • Once again, every test was launched, producing the errors seen in the last update for the test_regions[<type>_inexistent_region] test cases. After discussing these with the team, issue Add mechanism to check the regions parameter for AWS buckets #23431 was opened to add validation for the input region when the module is executed for buckets, at least verifying that the region is available; the tests will be updated accordingly once that development is done.
========================================================================================= short test summary info =========================================================================================
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[cloudtrail_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[vpc_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[config_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[alb_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[clb_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
FAILED wazuh/tests/integration/test_aws/test_regions.py::test_regions[nlb_inexistent_region] - AssertionError: The AWS module did not show correct message about non-existent region
=============================================================== 6 failed, 188 passed, 3 skipped, 2 xfailed, 1 warning in 5265.21s (1:27:45) ===============================================================
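The validation proposed in #23431 could take a shape like the following sketch. The function name and the region set are hypothetical; a real implementation would obtain the available regions from the AWS SDK rather than a hardcoded set.

```python
# Hypothetical pre-execution check for the --regions argument.
# A real implementation would query the AWS SDK for the region list
# instead of hardcoding it.
KNOWN_REGIONS = {"us-east-1", "us-east-2", "us-west-1", "us-west-2", "eu-west-1"}

def validate_regions(regions_arg):
    """Split a comma-separated --regions value and reject unknown names,
    so the module can warn about a nonexistent region up front instead
    of silently finding no matching folders in the bucket."""
    regions = [r.strip() for r in regions_arg.split(",") if r.strip()]
    unknown = [r for r in regions if r not in KNOWN_REGIONS]
    if unknown:
        raise ValueError(f"Unsupported region(s): {', '.join(unknown)}")
    return regions
```

With such a check in place, the test_regions[<type>_inexistent_region] cases could assert on a deterministic validation message rather than on the absence of bucket folders.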
