
Retain previous finding if check fails #364

Open
tcoliver opened this issue Feb 23, 2024 · 2 comments

Comments

@tcoliver

Problem to solve

There are a number of rules that rely on a user being logged into the device, or on other factors, which can cause the check to fail or return an unpredictable result.

Intended users

Users who run the compliance script under automated conditions, such as via an MDM, where the script is initiated by a LaunchDaemon and may run prior to user login.

Further details

In my use case, we run the compliance script as part of a daily policy in Jamf Pro. Policies run this way are executed via a LaunchDaemon as root and do not depend on a user being logged in. Other MDMs work similarly, as this ensures inventory collection can occur even on infrequently used machines.

My current workaround is to create a custom version of each affected rule. For example, my custom os_show_filename_extensions_enable check is as follows:

check: |
  if [[ -z "$CURRENT_USER" ]]; then
    # No user logged in: fall back to the previous run's finding.
    # 'finding: false' means the last run was compliant, so re-emit the
    # expected compliant output (1).
    if [[ $($plb -c "print os_show_filename_extensions_enable:finding" $audit_plist 2>/dev/null) == 'false' ]]; then
      echo 1
    fi
  else
    # User logged in: read the preference in the user's context.
    /usr/bin/sudo -u "$CURRENT_USER" /usr/bin/defaults read .GlobalPreferences AppleShowAllExtensions 2>/dev/null
  fi

This works well but it requires a separate implementation for each rule and complicates the "important" logic portion of the rule.

Proposal

Ideally, I would like the script to retrieve the previous run's finding from the generated plist whenever the check portion returns any sort of error (i.e., a non-zero exit code). This might require a slight rework of multiline check scripts, as the final return code would need to be non-zero on failure.
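The proposal above could be sketched as a generic wrapper around a rule's check. This is only an illustration, not the project's implementation: the `run_check_with_fallback` helper and the placeholder audit plist path are my assumptions; `$plb` and `$audit_plist` mirror the variables already used in the generated compliance script.

```shell
#!/bin/sh
# Sketch only: run_check_with_fallback is a hypothetical helper, and the
# audit plist path below is a placeholder.
plb="/usr/libexec/PlistBuddy"
audit_plist="/Library/Preferences/org.example.audit.plist"   # placeholder path

run_check_with_fallback() {
  rule="$1"       # rule identifier, e.g. os_show_filename_extensions_enable
  expected="$2"   # output the harness expects when compliant, e.g. "1"
  shift 2
  result=$("$@" 2>/dev/null)   # run the actual check command
  if [ $? -eq 0 ]; then
    # Check ran successfully: report its real output.
    printf '%s\n' "$result"
  else
    # Check failed (non-zero exit): fall back to the previous run's finding.
    # 'finding: false' means the last run was compliant, so re-emit the
    # expected compliant output; otherwise emit nothing, which the harness
    # counts as a finding.
    if [ "$($plb -c "Print ${rule}:finding" "$audit_plist" 2>/dev/null)" = "false" ]; then
      printf '%s\n' "$expected"
    fi
  fi
}
```

With something like this in the generated script, the custom rule above could shrink to a single `run_check_with_fallback os_show_filename_extensions_enable 1 …` call instead of duplicating the fallback logic in every affected rule.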

Documentation

I don't believe there is much documentation around the specifics of how rule checks are verified for successful execution. At the moment, it does not appear that they are verified at all.

Testing

Introducing this change could cause rules (especially custom rules) that return a non-zero exit code even when run successfully to fail. Long term, this would be a benefit, as it would provide a more reliable experience and possibly an additional metric when testing rules.

What does success look like, and how can we measure that?

Success would be that rules requiring special conditions fail gracefully and refrain from reporting inaccurate information.

Examples of such rules are:

Links / references

@robertgendler
Collaborator

This makes sense potentially on the rules with $CURRENT_USER. But not other rules.

The potential issue that comes up: what if the first compliance scan runs when no user is logged in? Then there is no previous state to fall back on.

A possible fix is to read lastUserName from /Library/Preferences/com.apple.loginwindow.
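That suggestion could be sketched like this. The `fallback_user` helper name is mine, not part of the project; the `defaults` read of lastUserName is the standard way to get the last account shown at the login window.

```shell
#!/bin/sh
# Sketch only: fallback_user is a hypothetical helper. When no one is
# currently logged in, fall back to the last user recorded by loginwindow.
fallback_user() {
  if [ -n "$CURRENT_USER" ]; then
    printf '%s\n' "$CURRENT_USER"
  else
    /usr/bin/defaults read /Library/Preferences/com.apple.loginwindow lastUserName 2>/dev/null
  fi
}
```

A rule could then use `user=$(fallback_user)` in place of `$CURRENT_USER`; an empty result (a fresh machine with no login yet) is exactly the "no state to fail into" case raised above.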

@tcoliver
Author

That's a good point, and I like the idea of reading the lastUserName value as a second try. Failing that, I think the result of a failed check where there is no previous value should ideally be a user-defined choice.

So in my example above, I am implicitly choosing to treat any run where a previous result is unavailable as a finding. But I could also see someone thinking the other way and wanting to be warned of noncompliance only when a finding is legitimate.

Would it make sense to make the default behavior an optional flag passed to the generate_guidance.py script? I'd suggest defaulting to counting failures as findings, to better match the current behavior. Then anyone who wanted more lax behavior would have an option to override it.

All that said, if there is any interest in giving this idea a shot, I would be happy to work on a proof of concept and make a pull request.
