Splunk OnCall migration tool (#4267)
# What this PR does

Refactors the PagerDuty migration script to be a bit more generic and adds a migration script for migrating from Splunk OnCall (VictorOps).

tl;dr:
```bash
❯ docker build -t oncall-migrator .
[+] Building 0.4s (10/10) FINISHED
❯ docker run --rm \
-e MIGRATING_FROM="pagerduty" \
-e MODE="plan" \
-e ONCALL_API_URL="http://localhost:8080" \
-e ONCALL_API_TOKEN="<ONCALL_API_TOKEN>" \
-e PAGERDUTY_API_TOKEN="<PAGERDUTY_API_TOKEN>" \
oncall-migrator
running pagerduty migration script...

❯ docker run --rm \
-e MIGRATING_FROM="splunk" \
-e MODE="plan" \
-e ONCALL_API_URL="http://localhost:8080" \
-e ONCALL_API_TOKEN="<ONCALL_API_TOKEN>" \
-e SPLUNK_API_ID="<SPLUNK_API_ID>" \
-e SPLUNK_API_KEY="<SPLUNK_API_KEY>" \
oncall-migrator
migrating from splunk oncall...
```
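
`MODE` defaults to `plan` (see `lib/base_config.py` below); once the plan output looks right, re-running the same command with `MODE="migrate"` applies the changes. For example, for the Splunk OnCall run above:

```bash
❯ docker run --rm \
-e MIGRATING_FROM="splunk" \
-e MODE="migrate" \
-e ONCALL_API_URL="http://localhost:8080" \
-e ONCALL_API_TOKEN="<ONCALL_API_TOKEN>" \
-e SPLUNK_API_ID="<SPLUNK_API_ID>" \
-e SPLUNK_API_KEY="<SPLUNK_API_KEY>" \
oncall-migrator
```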

https://www.loom.com/share/a855062d436a4ef79f030e22528d8c71

## Checklist

- [x] Unit, integration, and e2e (if applicable) tests updated
- [x] Documentation added (or `pr:no public docs` PR label added if not required)
- [x] Added the relevant release notes label (see labels prefixed w/ `release:`).
  These labels dictate how your PR will show up in the autogenerated release notes.
joeyorlando committed May 14, 2024
1 parent 978d7c5 commit c46dff0
Showing 61 changed files with 3,342 additions and 382 deletions.
10 changes: 5 additions & 5 deletions .github/workflows/linting-and-tests.yml
@@ -279,18 +279,18 @@ jobs:
           uv pip sync --system requirements.txt requirements-dev.txt
           pytest -x
 
-  unit-test-pd-migrator:
-    name: "Unit tests - PagerDuty Migrator"
+  unit-test-migrators:
+    name: "Unit tests - Migrators"
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v3
       - uses: actions/setup-python@v4
         with:
           python-version: "3.11.4"
           cache: "pip"
-          cache-dependency-path: tools/pagerduty-migrator/requirements.txt
-      - name: Unit Test PD Migrator
-        working-directory: tools/pagerduty-migrator
+          cache-dependency-path: tools/migrators/requirements.txt
+      - name: Unit Test Migrators
+        working-directory: tools/migrators
         run: |
           pip install uv
           uv pip sync --system requirements.txt
15 changes: 7 additions & 8 deletions .pre-commit-config.yaml
@@ -6,10 +6,9 @@ repos:
         files: ^engine
         args: [--settings-file=engine/pyproject.toml, --filter-files]
       - id: isort
-        name: isort - pd-migrator
-        files: ^tools/pagerduty-migrator
-        args:
-          [--settings-file=tools/pagerduty-migrator/.isort.cfg, --filter-files]
+        name: isort - migrators
+        files: ^tools/migrators
+        args: [--settings-file=tools/migrators/.isort.cfg, --filter-files]
       - id: isort
         name: isort - dev/scripts
         files: ^dev/scripts
@@ -22,8 +21,8 @@ repos:
         files: ^engine
         args: [--config=engine/pyproject.toml]
       - id: black
-        name: black - pd-migrator
-        files: ^tools/pagerduty-migrator
+        name: black - migrators
+        files: ^tools/migrators
       - id: black
         name: black - dev/scripts
         files: ^dev/scripts
@@ -38,8 +37,8 @@ repos:
           - flake8-bugbear
           - flake8-tidy-imports
       - id: flake8
-        name: flake8 - pd-migrator
-        files: ^tools/pagerduty-migrator
+        name: flake8 - migrators
+        files: ^tools/migrators
         # Make sure config is compatible with black
         # https://black.readthedocs.io/en/stable/guides/using_black_with_other_tools.html#flake8
         args: ["--max-line-length=88", "--extend-ignore=E203,E501"]
2 changes: 1 addition & 1 deletion README.md
@@ -123,7 +123,7 @@ Have a question, comment or feedback? Don't be afraid to [open an issue](https:/

 ## Further Reading
 
-- _Migration from PagerDuty_ - [Migrator](https://github.com/grafana/oncall/tree/dev/tools/pagerduty-migrator)
+- _Automated migration from other on-call tools_ - [Migrator](https://github.com/grafana/oncall/tree/dev/tools/migrators)
 - _Documentation_ - [Grafana OnCall](https://grafana.com/docs/oncall/latest/)
 - _Overview Webinar_ - [YouTube](https://www.youtube.com/watch?v=7uSe1pulgs8)
 - _How To Add Integration_ - [How to Add Integration](https://github.com/grafana/oncall/tree/dev/engine/config_integrations/README.md)
10 changes: 7 additions & 3 deletions docs/sources/set-up/migration-from-other-tools/index.md
@@ -7,6 +7,8 @@ keywords:
   - OnCall
   - Migration
   - Pagerduty
+  - Splunk OnCall
+  - VictorOps
   - on-call tools
 canonical: https://grafana.com/docs/oncall/latest/set-up/migration-from-other-tools/
 aliases:
@@ -17,7 +19,9 @@ aliases:

 # Migration from other tools
 
-## Migration from PagerDuty to Grafana OnCall
+We currently support automated migration from the following on-call tools:
 
-Migration from PagerDuty to Grafana OnCall could be performed in automated way using
-[OSS Migrator](https://github.com/grafana/oncall/tree/dev/tools/pagerduty-migrator).
+- PagerDuty
+- Splunk OnCall (VictorOps)
+
+See our [OSS Migrator](https://github.com/grafana/oncall/tree/dev/tools/migrators) for more details.
File renamed without changes.
@@ -7,4 +7,4 @@ COPY requirements.txt requirements.txt
 RUN python3 -m pip install -r requirements.txt
 
 COPY . .
-CMD ["python3", "-m" , "migrator"]
+CMD ["python3", "main.py"]
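
The image entrypoint switches from the `migrator` package to a top-level `main.py`. That file isn't rendered in this excerpt; as a loose sketch (the module paths `lib.pagerduty.migrate` / `lib.splunk.migrate` and the `main()` calls are assumptions, not the actual code), it presumably dispatches on `MIGRATING_FROM` along these lines:

```python
# Hypothetical sketch of main.py -- the real file is not shown in this diff.
from lib.base_config import MIGRATING_FROM, PAGERDUTY, SPLUNK

if MIGRATING_FROM == PAGERDUTY:
    print("running pagerduty migration script...")
    from lib.pagerduty import migrate  # assumed module path

    migrate.main()  # assumed entrypoint
elif MIGRATING_FROM == SPLUNK:
    print("migrating from splunk oncall...")
    from lib.splunk import migrate  # assumed module path

    migrate.main()  # assumed entrypoint
else:
    raise ValueError(f"Invalid MIGRATING_FROM value: {MIGRATING_FROM}")
```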
254 changes: 209 additions & 45 deletions tools/pagerduty-migrator/README.md → tools/migrators/README.md

Large diffs are not rendered by default.

58 changes: 58 additions & 0 deletions tools/migrators/add_users_to_grafana.py
@@ -0,0 +1,58 @@
import os
import sys

from pdpyras import APISession

from lib.grafana.api_client import GrafanaAPIClient
from lib.splunk.api_client import SplunkOnCallAPIClient

MIGRATING_FROM = os.environ["MIGRATING_FROM"]
PAGERDUTY = "pagerduty"
SPLUNK = "splunk"

PAGERDUTY_API_TOKEN = os.environ.get("PAGERDUTY_API_TOKEN")
SPLUNK_API_ID = os.environ.get("SPLUNK_API_ID")
SPLUNK_API_KEY = os.environ.get("SPLUNK_API_KEY")

GRAFANA_URL = os.environ["GRAFANA_URL"]  # Example: http://localhost:3000
GRAFANA_USERNAME = os.environ["GRAFANA_USERNAME"]
GRAFANA_PASSWORD = os.environ["GRAFANA_PASSWORD"]

SUCCESS_SIGN = "✅"
ERROR_SIGN = "❌"

grafana_client = GrafanaAPIClient(GRAFANA_URL, GRAFANA_USERNAME, GRAFANA_PASSWORD)


def migrate_pagerduty_users():
    session = APISession(PAGERDUTY_API_TOKEN)
    for user in session.list_all("users"):
        create_grafana_user(user["name"], user["email"])


def migrate_splunk_users():
    client = SplunkOnCallAPIClient(SPLUNK_API_ID, SPLUNK_API_KEY)
    for user in client.fetch_users(include_paging_policies=False):
        create_grafana_user(f"{user['firstName']} {user['lastName']}", user["email"])


def create_grafana_user(name: str, email: str):
    response = grafana_client.create_user_with_random_password(name, email)

    if response.status_code == 200:
        print(SUCCESS_SIGN + " User created: " + email)
    elif response.status_code == 401:
        sys.exit(ERROR_SIGN + " Invalid username or password.")
    elif response.status_code == 412:
        print(ERROR_SIGN + " User " + email + " already exists.")
    else:
        print("{} {}".format(ERROR_SIGN, response.text))


if __name__ == "__main__":
    if MIGRATING_FROM == PAGERDUTY:
        migrate_pagerduty_users()
    elif MIGRATING_FROM == SPLUNK:
        migrate_splunk_users()
    else:
        raise ValueError("Invalid value for MIGRATING_FROM")
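
This helper can be run on its own before the main migration to pre-provision Grafana users. A minimal sketch of a direct invocation from `tools/migrators` (credential values are placeholders; the README, not rendered above, is the authoritative reference for how to run it):

```bash
export MIGRATING_FROM="splunk"
export SPLUNK_API_ID="<SPLUNK_API_ID>"
export SPLUNK_API_KEY="<SPLUNK_API_KEY>"
export GRAFANA_URL="http://localhost:3000"
export GRAFANA_USERNAME="admin"
export GRAFANA_PASSWORD="<GRAFANA_PASSWORD>"
python3 add_users_to_grafana.py
```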
File renamed without changes.
25 changes: 25 additions & 0 deletions tools/migrators/lib/base_config.py
@@ -0,0 +1,25 @@
import os
from urllib.parse import urljoin

PAGERDUTY = "pagerduty"
SPLUNK = "splunk"
MIGRATING_FROM = os.getenv("MIGRATING_FROM")
assert MIGRATING_FROM in (PAGERDUTY, SPLUNK)

MODE_PLAN = "plan"
MODE_MIGRATE = "migrate"
MODE = os.getenv("MODE", default=MODE_PLAN)
assert MODE in (MODE_PLAN, MODE_MIGRATE)

ONCALL_API_TOKEN = os.environ["ONCALL_API_TOKEN"]
ONCALL_API_URL = urljoin(
    os.environ["ONCALL_API_URL"].removesuffix("/") + "/",
    "api/v1/",
)
ONCALL_DELAY_OPTIONS = [1, 5, 15, 30, 60]

SCHEDULE_MIGRATION_MODE_ICAL = "ical"
SCHEDULE_MIGRATION_MODE_WEB = "web"
SCHEDULE_MIGRATION_MODE = os.getenv(
    "SCHEDULE_MIGRATION_MODE", SCHEDULE_MIGRATION_MODE_ICAL
)
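
One detail worth noting: `ONCALL_API_URL` is normalized so that a trailing slash on the environment variable doesn't matter — the suffix is stripped, a slash is re-added, and `api/v1/` is joined on. A quick illustration of that expression:

```python
from urllib.parse import urljoin

for raw in ("http://localhost:8080", "http://localhost:8080/"):
    # Same normalization as in lib/base_config.py
    print(urljoin(raw.removesuffix("/") + "/", "api/v1/"))
    # -> http://localhost:8080/api/v1/ in both cases
```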
4 changes: 4 additions & 0 deletions tools/migrators/lib/common/report.py
@@ -0,0 +1,4 @@
TAB = " " * 4
SUCCESS_SIGN = "✅"
ERROR_SIGN = "❌"
WARNING_SIGN = "⚠️"  # TODO: warning sign does not render properly
16 changes: 16 additions & 0 deletions tools/migrators/lib/common/resources/teams.py
@@ -0,0 +1,16 @@
import typing


class MatchTeam(typing.TypedDict):
    name: str
    oncall_team: typing.Optional[typing.Dict[str, typing.Any]]


def match_team(team: MatchTeam, oncall_teams: typing.List[MatchTeam]) -> None:
    oncall_team = None
    for candidate_team in oncall_teams:
        if team["name"].lower() == candidate_team["name"].lower():
            oncall_team = candidate_team
            break

    team["oncall_team"] = oncall_team
16 changes: 16 additions & 0 deletions tools/migrators/lib/common/resources/users.py
@@ -0,0 +1,16 @@
import typing


class MatchUser(typing.TypedDict):
    email: str
    oncall_user: typing.Optional[typing.Dict[str, typing.Any]]


def match_user(user: MatchUser, oncall_users: typing.List[MatchUser]) -> None:
    oncall_user = None
    for candidate_user in oncall_users:
        if user["email"].lower() == candidate_user["email"].lower():
            oncall_user = candidate_user
            break

    user["oncall_user"] = oncall_user
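
`match_team` and `match_user` behave the same way: they mutate the passed-in dict, attaching the matching OnCall object (or `None`) under `oncall_team` / `oncall_user`, matching case-insensitively on name / email. A small usage sketch (the dicts are made-up illustrations, not real API payloads):

```python
from lib.common.resources.users import match_user

# Users already present in Grafana OnCall (illustrative shape only)
oncall_users = [{"email": "Alice@example.com", "id": "U1"}]

# User pulled from PagerDuty / Splunk OnCall
user = {"email": "alice@example.com"}

match_user(user, oncall_users)
print(user["oncall_user"])  # {'email': 'Alice@example.com', 'id': 'U1'} -- matched despite the case difference
```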
File renamed without changes.
83 changes: 83 additions & 0 deletions tools/migrators/lib/grafana/api_client.py
@@ -0,0 +1,83 @@
import secrets
from urllib.parse import urljoin

import requests


class GrafanaAPIClient:
    def __init__(self, base_url, username, password):
        self.base_url = base_url
        self.username = username
        self.password = password

    def _api_call(self, method: str, path: str, **kwargs):
        return requests.request(
            method,
            urljoin(self.base_url, path),
            auth=(self.username, self.password),
            **kwargs,
        )

    def create_user_with_random_password(self, name: str, email: str):
        return self._api_call(
            "POST",
            "/api/admin/users",
            json={
                "name": name,
                "email": email,
                "login": email.split("@")[0],
                "password": secrets.token_urlsafe(15),
            },
        )

    def get_all_users(self):
        """
        https://grafana.com/docs/grafana/v10.3/developers/http_api/user/#search-users
        """
        return self._api_call("GET", "/api/users").json()

    def idemopotently_create_team_and_add_users(
        self, team_name: str, user_emails: list[str]
    ) -> int:
        """
        Get team by name
        https://grafana.com/docs/grafana/v10.3/developers/http_api/team/#using-the-name-parameter
        Create team
        https://grafana.com/docs/grafana/v10.3/developers/http_api/team/#add-team
        Add team members
        https://grafana.com/docs/grafana/v10.3/developers/http_api/team/#add-team-member
        """
        existing_team = self._api_call(
            "GET", "/api/teams/search", params={"name": team_name}
        ).json()

        if existing_team["teams"]:
            # team already exists
            team_id = existing_team["teams"][0]["id"]
        else:
            # team doesn't exist create it
            response = self._api_call("POST", "/api/teams", json={"name": team_name})

            if response.status_code == 200:
                team_id = response.json()["teamId"]
            else:
                raise Exception(f"Failed to fetch/create Grafana team '{team_name}'")

        grafana_users = self.get_all_users()
        grafana_user_id_to_email_map = {}

        for user_email in user_emails:
            for grafana_user in grafana_users:
                if grafana_user["email"] == user_email:
                    grafana_user_id_to_email_map[grafana_user["id"]] = user_email
                    break

        for user_id in grafana_user_id_to_email_map.keys():
            self._api_call(
                "POST", f"/api/teams/{team_id}/members", json={"userId": user_id}
            )

        return team_id
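
Taken together with `add_users_to_grafana.py` above, the client is intended to be used roughly like this (the URL, credentials, and team name are placeholders):

```python
from lib.grafana.api_client import GrafanaAPIClient

client = GrafanaAPIClient("http://localhost:3000", "admin", "<GRAFANA_PASSWORD>")

# Provision a user; 200 means created, 412 means it already exists (see add_users_to_grafana.py)
response = client.create_user_with_random_password("Alice Example", "alice@example.com")
print(response.status_code)

# Ensure a team exists and that the user is a member of it
team_id = client.idemopotently_create_team_and_add_users("SRE", ["alice@example.com"])
print(team_id)
```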
@@ -6,21 +6,17 @@
 from requests import HTTPError
 from requests.adapters import HTTPAdapter, Retry
 
-from migrator.config import ONCALL_API_TOKEN, ONCALL_API_URL
-
 
-def api_call(method: str, path: str, **kwargs) -> requests.Response:
-    url = urljoin(ONCALL_API_URL, path)
+def api_call(method: str, base_url: str, path: str, **kwargs) -> requests.Response:
+    url = urljoin(base_url, path)
 
     # Retry on network errors
     session = requests.Session()
     retries = Retry(total=5, backoff_factor=0.1)
     session.mount("http://", HTTPAdapter(max_retries=retries))
     session.mount("https://", HTTPAdapter(max_retries=retries))
 
-    response = session.request(
-        method, url, headers={"Authorization": ONCALL_API_TOKEN}, **kwargs
-    )
+    response = session.request(method, url, **kwargs)
 
     try:
         response.raise_for_status()
@@ -50,37 +46,3 @@ def api_call(method: str, path: str, **kwargs) -> requests.Response:
         raise
 
     return response
-
-
-def list_all(path: str) -> list[dict]:
-    response = api_call("get", path)
-
-    data = response.json()
-    results = data["results"]
-
-    while data["next"]:
-        response = api_call("get", data["next"])
-
-        data = response.json()
-        results += data["results"]
-
-    return results
-
-
-def create(path: str, payload: dict) -> dict:
-    response = api_call("post", path, json=payload)
-    return response.json()
-
-
-def delete(path: str) -> None:
-    try:
-        api_call("delete", path)
-    except requests.exceptions.HTTPError as e:
-        # ignore 404s on delete so deleting resources manually while running the script doesn't break it
-        if e.response.status_code != 404:
-            raise
-
-
-def update(path: str, payload: dict) -> dict:
-    response = api_call("put", path, json=payload)
-    return response.json()
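
With `base_url` now a parameter and the hard-coded `Authorization` header gone, callers supply both explicitly; the `list_all` / `create` / `delete` / `update` wrappers removed here live elsewhere in the refactored package (not shown in this excerpt). A minimal sketch of a call against the new signature, assuming the caller passes the OnCall token itself (the `lib.oncall.api_client` module path is inferred, since the renamed file's path isn't visible above):

```python
from lib.base_config import ONCALL_API_TOKEN, ONCALL_API_URL
from lib.oncall.api_client import api_call  # inferred module path

response = api_call(
    "get",
    ONCALL_API_URL,
    "users",  # example endpoint
    headers={"Authorization": ONCALL_API_TOKEN},
)
print(response.json())
```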
Empty file.
