✨ CI Improvements tracker 📃 #1814

Open · 2 of 7 tasks
EmilienM opened this issue Jan 9, 2024 · 10 comments
Comments

@EmilienM (Contributor) commented Jan 9, 2024

This lists the items we want to fix or improve in our CI.

• Bump Ubuntu & OpenStack versions

• Improve Logging

  There are a lot of red herrings that can confuse developers reading CI logs.

• Simplify how devstack is configured and run to deploy OpenStack

  Right now we use custom shell scripts to configure and run Devstack, which we have to maintain across OpenStack versions, etc. It would be nice if we could consume what the OpenStack community already does in upstream CI, so we reduce the maintenance cost.

  • Investigate how we could deploy a snapshotted devstack to reduce waiting time.
  • Investigate Ansible roles for Devstack
  • Switch our CI to use these roles

• Artifact gathering

  • Collect OpenStack configuration files (a sketch follows this list)
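
A minimal sketch, assuming a generic CI artifacts directory, of what collecting OpenStack configuration files could look like (the ARTIFACTS path and the service list are assumptions, not our actual job definition):

```bash
#!/usr/bin/env bash
# Sketch: copy OpenStack service configuration into the CI artifacts directory.
# ARTIFACTS is assumed to be the directory the job uploads at the end of the run.
set -o errexit -o nounset -o pipefail

ARTIFACTS="${ARTIFACTS:-/tmp/artifacts}"
mkdir -p "${ARTIFACTS}/openstack-config"

# Hypothetical list of services; adjust to whatever devstack actually deployed.
for svc in keystone nova neutron glance cinder placement octavia; do
  if [ -d "/etc/${svc}" ]; then
    cp -r "/etc/${svc}" "${ARTIFACTS}/openstack-config/${svc}"
  fi
done

# devstack's own configuration is also useful when debugging a failed run.
if [ -f /opt/stack/devstack/local.conf ]; then
  cp /opt/stack/devstack/local.conf "${ARTIFACTS}/openstack-config/"
fi
```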
@lentzi90 (Contributor) commented Jan 9, 2024

I have been dreaming about a way to reduce the time it takes to set up the devstack. When working locally I create the devstack once and then reuse it for multiple tests to reduce the waiting time, but the CI does it from scratch every time.

I found out that at some point we used to have ready-made devstack images, and I imagine the idea was to make the setup faster. Unfortunately I have not had much luck snapshotting a devstack and then starting a new one from the snapshot, though... If we could find a way to do that, it would be awesome!
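
For reference, a rough sketch of that snapshot idea using only standard OpenStack CLI commands (the server and image names, flavor, and network are made up; the snapshot would almost certainly need some boot-time fix-up, since devstack bakes host-specific state into its configuration):

```bash
# Build devstack once on a server (hypothetically named devstack-node),
# shut it down cleanly, and snapshot it.
openstack server stop devstack-node
openstack server image create --name devstack-ready --wait devstack-node

# Later CI runs boot from the snapshot instead of re-running stack.sh.
openstack server create \
  --image devstack-ready \
  --flavor m1.xlarge \
  --network ci-net \
  --wait "devstack-ci-${BUILD_ID}"

# Caveat: services record IPs, hostnames and DB endpoints at stack time, so
# the snapshot likely needs a small cloud-init step (or a partial re-stack)
# to fix those up before the deployment is usable.
```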

@EmilienM (Contributor, Author) commented Jan 9, 2024

> I have been dreaming about a way to reduce the time it takes to set up the devstack. When working locally I create the devstack once and then reuse it for multiple tests to reduce the waiting time, but the CI does it from scratch every time.
>
> I found out that at some point we used to have ready-made devstack images, and I imagine the idea was to make the setup faster. Unfortunately I have not had much luck snapshotting a devstack and then starting a new one from the snapshot, though... If we could find a way to do that, it would be awesome!

I've added it to the list. @mdbooth had the same idea!

@dulek (Contributor) commented Jan 9, 2024

We could think about converging the CPO and CAPO CIs. Currently they use totally different ways of setting everything up.

I'd also love to set up DevStack using upstream Ansible playbooks: https://opendev.org/openstack/devstack/src/branch/master/playbooks/devstack.yaml
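
For reference, a very rough sketch of what consuming that playbook directly could look like (paths and the inventory are assumptions; the playbook lives in the devstack repo and may expect Zuul-provided roles and variables, so some glue would likely be needed):

```bash
# Clone devstack and run its own playbook against a single CI node.
git clone https://opendev.org/openstack/devstack /opt/stack/devstack

# Hypothetical one-host inventory pointing at the node we want to stack.
cat > /tmp/devstack-inventory.ini <<'EOF'
[all]
devstack-node ansible_host=192.0.2.10 ansible_user=ubuntu
EOF

ansible-playbook \
  -i /tmp/devstack-inventory.ini \
  /opt/stack/devstack/playbooks/devstack.yaml
```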

@EmilienM (Contributor, Author) commented Jan 9, 2024

> We could think about converging the CPO and CAPO CIs. Currently they use totally different ways of setting everything up.
>
> I'd also love to set up DevStack using upstream Ansible playbooks: https://opendev.org/openstack/devstack/src/branch/master/playbooks/devstack.yaml

Yes! Definitely, I was about to reach out to you on that topic :)

@mdbooth (Contributor) commented Jan 9, 2024

@EmilienM I think a bunch of folks will buy you many beers if you fix this.

@mandre (Contributor) commented Jan 11, 2024

In addition to this great list, the CI also sometimes fails to provision nodes due to insufficient resources available on the hypervisor. When that happens, the nodes are in ERROR state and CAPO stops reconciling the machine.

Since we apparently can't make the hypervisor bigger, we should find a way to make the machine-heavy tests mutually exclusive. Alternatively, we could look into machine health checks so that failed nodes can be reprovisioned.
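
For reference, a hedged sketch of the machine health check alternative using the standard Cluster API MachineHealthCheck resource (the cluster name, namespace, label selector, and timeouts below are hypothetical):

```bash
# Sketch only: remediate worker machines whose nodes never become Ready
# (for example because the underlying server landed in ERROR state).
kubectl apply -f - <<'EOF'
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: e2e-workers-unhealthy
  namespace: default
spec:
  clusterName: e2e-cluster            # hypothetical workload cluster name
  maxUnhealthy: 100%
  nodeStartupTimeout: 20m
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: e2e-cluster-md-0
  unhealthyConditions:
    - type: Ready
      status: Unknown
      timeout: 300s
    - type: Ready
      status: "False"
      timeout: 300s
EOF
```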

@mandre (Contributor) commented Jan 11, 2024

I've fixed the gcloud errors with e71d3ae, which I might split into its own separate PR.

@mandre mentioned this issue Jan 11, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 14, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 14, 2024
@EmilienM (Contributor, Author)

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on May 14, 2024