Merge 3.5 into 3.6 #17379

Merged
merged 68 commits into from
May 16, 2024
Conversation

Aflynn50
Contributor

jack-w-shaw and others added 30 commits March 20, 2024 10:53
A fix was previously made here juju#16692
to increase compatibility with 2.9 controllers and 3.x clients by adding
a compatibility layer

However, there is an edge case with charm urls which we see when
deploying some local charms with pylibjuju, causing deploys to be
impossible.

Sometimes pylibjuju uploads local charms in such a way that the
resulting charm url in Mongo does not include a series. Since the
compatibility layer ensures a series is present in curls when we deploy,
this charm becomes impossible to deploy

We considered a few approaches to this. The best is to drop this
mutation to the charm url, restoring the behavior partially to 2.9.46.
Series in mongo charm urls, even in 2.9, is a very legacy feature which
we do not think is actually required anywhere.

However, the consequences of this need to be thoroughly investigated in QA
steps

Resolves: https://bugs.launchpad.net/juju/+bug/2058311
…ylibjuju

juju#17061

Fix a regression in charm deploy

A fix was previously made here juju#16692
to increase compatibility with 2.9 controllers and 3.x clients by adding
a compatibility layer

However, there is an edge case with charm urls which we see when
deploying some local charms with pylibjuju, causing deploys to be
impossible.

Sometimes pylibjuju uploads local charms in such a way that the
resulting charm url in Mongo does not include a series. Since the
compatibility layer ensures a series is present in curls when we deploy,
this charm becomes impossible to deploy
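
For illustration only (example values, not taken from the PR), a local charm url stored with a series component versus one without might look like:

```
local:focal/ubuntu-0   # series present: the form the compatibility layer forces
local:ubuntu-0         # no series: the form pylibjuju sometimes uploads
```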

We considered a few approaches to this. The best is to drop this
mutation to the charm url, restoring the behavior partially to 2.9.46.
Series in mongo charm urls, even in 2.9, is a very legacy feature which
we do not think is actually required anywhere.

To prevent such a bug happening again, we will add pylibjuju smoke tests to the github actions suite.
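
A rough sketch of what such a smoke test step could run (an assumption about its shape, not the actual workflow that will be added):

```bash
# Hypothetical smoke test: deploy a charm with pylibjuju against a bootstrapped controller.
pip install juju
python - <<'EOF'
import asyncio
from juju.model import Model

async def main():
    m = Model()
    await m.connect()          # connect to the currently active model
    await m.deploy("ubuntu")   # same charmhub deploy exercised in the QA steps below
    await m.disconnect()

asyncio.run(main())
EOF
```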

## QA steps

This PR partially restores the functionality to 2.9.46, so the most likely place for something to break is compatibility with 3.x clients. However, the most important support contract is with 2.9.x versions. Both are covered approximately equally in the QA steps.

### Matching client + controller (up to minor version)

Ensure a local charm, with no series in its metadata.yaml, exists at `/home/jack/charms/ubuntu`
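
If such a charm is not already to hand, a minimal sketch for creating one (assumed layout; any local charm whose metadata.yaml omits `series` will do):

```bash
mkdir -p ~/charms/ubuntu/hooks
cat > ~/charms/ubuntu/metadata.yaml <<'EOF'
name: ubuntu
summary: Local test charm with no series in its metadata
description: Exercises local-charm deploys whose charm url has no series.
EOF
# A no-op install hook so the charm has something to run.
printf '#!/bin/bash\nexit 0\n' > ~/charms/ubuntu/hooks/install
chmod +x ~/charms/ubuntu/hooks/install
```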

Deploy a controller from this PR

#### Deploy charms with python-libjuju 
```
$ pip install juju==2.9.46.1
$ python -m asyncio
>>> from juju.model import Model
>>> m = Model()
>>> await m.connect()
>>> await m.deploy("/home/jack/charms/ubuntu")
<Application entity_id="ubuntu">
>>> await m.deploy("/home/jack/charms/ubuntu", series="jammy", application_name="ubuntu-jammy")
<Application entity_id="ubuntu-jammy">
>>> await m.deploy("ubuntu", application_name="ubuntu-ch")
<Application entity_id="ubuntu-ch">
>>> await m.deploy("ubuntu", series="jammy", application_name="ubuntu-ch-jammy")
<Application entity_id="ubuntu-ch-jammy">
>>> 
exiting asyncio REPL...

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
default lxd localhost/localhost 2.9.48.1 unsupported 11:27:04Z

App Version Status Scale Charm Channel Rev Exposed Message
ubuntu 20.04 active 1 ubuntu 0 no 
ubuntu-ch 20.04 active 1 ubuntu latest/stable 24 no 
ubuntu-ch-jammy 22.04 active 1 ubuntu latest/stable 24 no 
ubuntu-jammy 22.04 active 1 ubuntu 1 no 

Unit Workload Agent Machine Public address Ports Message
ubuntu-ch-jammy/0* active idle 3 10.219.211.187 
ubuntu-ch/0* active idle 2 10.219.211.153 
ubuntu-jammy/0* active idle 1 10.219.211.175 
ubuntu/0* active idle 0 10.219.211.196 

Machine State Address Inst id Series AZ Message
0 started 10.219.211.196 juju-cdcf11-0 focal Running
1 started 10.219.211.175 juju-cdcf11-1 jammy Running
2 started 10.219.211.153 juju-cdcf11-2 focal Running
3 started 10.219.211.187 juju-cdcf11-3 jammy Running
```

#### Juju cli client can deploy

```
$ juju deploy ubuntu ubu
Located charm "ubuntu" in charm-hub, revision 24
Deploying "ubu" from charm-hub charm "ubuntu", revision 24 in channel stable on focal

$ juju deploy ubuntu ubu2 --series jammy
Located charm "ubuntu" in charm-hub, revision 24
Deploying "ubu2" from charm-hub charm "ubuntu", revision 24 in channel stable on jammy

$ juju deploy ~/charms/ubuntu ubu-local
Located local charm "ubuntu", revision 0
Deploying "ubu-local" from local charm "ubuntu", revision 0 on focal

$ juju deploy ~/charms/ubuntu ubu-local2 --series jammy
Located local charm "ubuntu", revision 0
Deploying "ubu-local2" from local charm "ubuntu", revision 0 on jammy

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
default lxd localhost/localhost 2.9.48.1 unsupported 11:18:09Z

App Version Status Scale Charm Channel Rev Exposed Message
ubu 20.04 active 1 ubuntu stable 24 no 
ubu2 22.04 active 1 ubuntu stable 24 no 
ubu-local 20.04 active 1 ubuntu 0 no 
ubu-local2 22.04 active 1 ubuntu 0 no 

Unit Workload Agent Machine Public address Ports Message
ubu2/0* active idle 1 10.219.211.106 
ubu-local2/0* active idle 3 10.219.211.87 
ubu-local/0* active idle 2 10.219.211.184 
ubu/0* active idle 0 10.219.211.101 

Machine State Address Inst id Series AZ Message
0 started 10.219.211.101 juju-7dcde0-0 focal Running
1 started 10.219.211.106 juju-7dcde0-1 jammy Running
2 started 10.219.211.184 juju-7dcde0-2 focal Running
3 started 10.219.211.87 juju-7dcde0-3 jammy Running
```

### Compatibility with 3.x

QA steps copied from juju#16692

- `juju` refers to the compiled code under test; `juju_34` is the juju snap with channel 3.4/stable.
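
One possible way to set this up (an assumption; use whatever build and snap arrangement you normally QA with):

```bash
# Build the branch under test; `make install` puts the compiled juju on $GOPATH/bin.
make install

# Install the 3.4/stable snap and give it a separate name for this session.
sudo snap install juju --channel=3.4/stable
alias juju_34=/snap/bin/juju
```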

```bash
#
# Bootstrap a juju 3.4 controller for later upgrade and migration of a 2.9 model.
# 
$ juju_34 bootstrap localhost destination

# 
# Bootstrap a controller to test the changes
# 
$ juju bootstrap localhost testing

# 
# Run various commands requiring series and base, including charms with and without 
# 
$ juju_34 add-model moveme
$ juju_34 deploy juju-qa-test --revision 23 --channel edge
$ juju_34 deploy ./tests/suites/deploy/charms/lxd-profile-alt
$ juju_34 deploy postgresql
$ juju_34 add-machine --base [email protected] 
$ juju_34 refresh lxd-profile-alt --path ./tests/suites/deploy/charms/lxd-profile-alt

# Verify the juju-qa-test resource was correctly downloaded, check the application status for an output change.
$ juju_34 config juju-qa-test foo-file=true

# Update the base of an application for the next unit added.
$ juju_34 set-application-base juju-qa-test [email protected]

# Migrate to the 3.4 controller & upgrade.
$ juju_34 migrate moveme destination
$ juju_34 switch destination
$ juju_34 upgrade-model

# Ensure base change successful, validate the new unit is using [email protected]
$ juju_34 add-unit juju-qa-test
```

## Links

**Launchpad bug:** https://bugs.launchpad.net/juju/+bug/2058311
We found that certain old clients (juju 2.8, pylibjuju 3.0) would deploy
an application without a base or series in the charm origin. Instead,
the series would only be provided with the 'series' param (and the
charm url)

In this case, we would attempt to parse the base in the origin,
expecting it to be there, and fail.

We resolve this by taking another approach: we assume nothing about the
args provided, look at all the series provided, and return them if
they're all equal.

This ensures that deployment succeeds no matter what client we use, so
long as it sends an OS (if it doesn't, there's nothing we can do; this is
a legitimate failure) and, if multiple series/bases are present, so long
as they match (non-matching series would also be a legitimate failure).
Use c.Check instead of c.Assert where failing the check will not impact
code further on. We built up a bad habit of using c.Assert unnecessarily.
It stops test execution when it fails, meaning we do not see subsequent
failures.

Assert an error is nil before Checking its value. In a table-driven
test, Check the error inside an if statement containing a Check on the
value. The test will then continue, but not panic on checking a value
which does not exist, and fail at the end.
…_with_deploy

juju#17079

We found that certain old clients would deploy an application without a base or series in the charm origin (juju 2.8 does not understand the concept of a charm origin; pylibjuju 3.0 does not fill in the series or base attributes). Instead, the series would only be provided with the 'series' param (and the charm url).

In this case, we would attempt to parse the base in the origin, expecting it to be there, and fail.

We resolve this by taking another approach: we assume nothing about the
args provided, look at all the series provided, and return them if
they're all equal.

This ensures that deployment succeeds no matter what client we use, so
long as it sends an OS (if it doesn't, there's nothing we can do; this is
a legitimate failure) and, if multiple series/bases are present, so long
as they match (non-matching series would also be a legitimate failure).

As a flyby, use `c.Check` in our `application_unit_test` test file.

## Checklist

- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing
- ~[ ] [Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing~
- ~[ ] [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages~

## QA steps

Ensure a local charm, with no series in its metadata.yaml, exists at `/home/jack/charms/ubuntu`

Ensure a bundle exists at `/home/jack/juju/bundle.yaml` with content:
```
applications:
  ubuntu:
    charm: /home/jack/charms/ubuntu
    num_units: 1
```

Deploy a controller from this PR

### Verify pylibjuju 2.9.46.1 can deploy a bundle with a local charm
```
$ pip install juju==2.9.46.1
$ python -m asyncio
>>> from juju.model import Model
>>> m = Model()
>>> await m.connect()
>>> await m.deploy("/home/jack/juju/bundle.yaml")
[<Application entity_id="ubuntu">]
>>> 
exiting asyncio REPL...

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
default lxd-2.9.48 localhost/localhost 2.9.48.1 unsupported 11:14:03Z

App Version Status Scale Charm Channel Rev Exposed Message
ubuntu 20.04 active 1 ubuntu stable 0 no 

Unit Workload Agent Machine Public address Ports Message
ubuntu/0* active idle 0 10.219.211.30 

Machine State Address Inst id Series AZ Message
0 started 10.219.211.30 juju-dbc02a-0 focal Running
```

### Verify pylibjuju 3.0.4 can deploy applications

```
$ pip install juju==3.0.4
$ python -m asyncio
>>> from juju.model import Model
>>> m = Model()
>>> await m.connect()
>>> await m.deploy("/home/jack/charms/ubuntu")
<Application entity_id="ubuntu">
>>> 
exiting asyncio REPL...

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
default lxd-2.9.48 localhost/localhost 2.9.48.1 unsupported 17:33:59Z

App Version Status Scale Charm Channel Rev Exposed Message
ubuntu 20.04 active 1 ubuntu 4 no 

Unit Workload Agent Machine Public address Ports Message
ubuntu/0* active idle 0 10.219.211.158 

Machine State Address Inst id Series AZ Message
0 started 10.219.211.158 juju-fe6ef8-0 focal Running
```

The following QA steps are 'borrowed' from juju#17061

### Matching client + controller (up to minor version)

Ensure a local charm, with no series in its metadata.yaml, exists at `/home/jack/charms/ubuntu`

Deploy a controller from this PR

#### Deploy charms with python-libjuju 
```
$ pip install juju==2.9.46.1
$ python -m asyncio
>>> from juju.model import Model
>>> m = Model()
>>> await m.connect()
>>> await m.deploy("/home/jack/charms/ubuntu")
<Application entity_id="ubuntu">
>>> await m.deploy("/home/jack/charms/ubuntu", series="jammy", application_name="ubuntu-jammy")
<Application entity_id="ubuntu-jammy">
>>> await m.deploy("ubuntu", application_name="ubuntu-ch")
<Application entity_id="ubuntu-ch">
>>> await m.deploy("ubuntu", series="jammy", application_name="ubuntu-ch-jammy")
<Application entity_id="ubuntu-ch-jammy">
>>> 
exiting asyncio REPL...

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
default lxd localhost/localhost 2.9.48.1 unsupported 11:27:04Z

App Version Status Scale Charm Channel Rev Exposed Message
ubuntu 20.04 active 1 ubuntu 0 no 
ubuntu-ch 20.04 active 1 ubuntu latest/stable 24 no 
ubuntu-ch-jammy 22.04 active 1 ubuntu latest/stable 24 no 
ubuntu-jammy 22.04 active 1 ubuntu 1 no 

Unit Workload Agent Machine Public address Ports Message
ubuntu-ch-jammy/0* active idle 3 10.219.211.187 
ubuntu-ch/0* active idle 2 10.219.211.153 
ubuntu-jammy/0* active idle 1 10.219.211.175 
ubuntu/0* active idle 0 10.219.211.196 

Machine State Address Inst id Series AZ Message
0 started 10.219.211.196 juju-cdcf11-0 focal Running
1 started 10.219.211.175 juju-cdcf11-1 jammy Running
2 started 10.219.211.153 juju-cdcf11-2 focal Running
3 started 10.219.211.187 juju-cdcf11-3 jammy Running
```

#### Juju cli client can deploy

```
$ juju deploy ubuntu ubu
Located charm "ubuntu" in charm-hub, revision 24
Deploying "ubu" from charm-hub charm "ubuntu", revision 24 in channel stable on focal

$ juju deploy ubuntu ubu2 --series jammy
Located charm "ubuntu" in charm-hub, revision 24
Deploying "ubu2" from charm-hub charm "ubuntu", revision 24 in channel stable on jammy

$ juju deploy ~/charms/ubuntu ubu-local
Located local charm "ubuntu", revision 0
Deploying "ubu-local" from local charm "ubuntu", revision 0 on focal

$ juju deploy ~/charms/ubuntu ubu-local2 --series jammy
Located local charm "ubuntu", revision 0
Deploying "ubu-local2" from local charm "ubuntu", revision 0 on jammy

$ juju status
Model Controller Cloud/Region Version SLA Timestamp
default lxd localhost/localhost 2.9.48.1 unsupported 11:18:09Z

App Version Status Scale Charm Channel Rev Exposed Message
ubu 20.04 active 1 ubuntu stable 24 no 
ubu2 22.04 active 1 ubuntu stable 24 no 
ubu-local 20.04 active 1 ubuntu 0 no 
ubu-local2 22.04 active 1 ubuntu 0 no 

Unit Workload Agent Machine Public address Ports Message
ubu2/0* active idle 1 10.219.211.106 
ubu-local2/0* active idle 3 10.219.211.87 
ubu-local/0* active idle 2 10.219.211.184 
ubu/0* active idle 0 10.219.211.101 

Machine State Address Inst id Series AZ Message
0 started 10.219.211.101 juju-7dcde0-0 focal Running
1 started 10.219.211.106 juju-7dcde0-1 jammy Running
2 started 10.219.211.184 juju-7dcde0-2 focal Running
3 started 10.219.211.87 juju-7dcde0-3 jammy Running
```

### Compatibility with 3.x

QA steps copied from juju#16692

- `juju` refers to the compiled code under test; `juju_34` is the juju snap with channel 3.4/stable.

```bash
#
# Bootstrap a juju 3.4 controller for later upgrade and migration of a 2.9 model.
# 
$ juju_34 bootstrap localhost destination

# 
# Bootstrap a controller to test the changes
# 
$ juju bootstrap localhost testing

# 
# Run various commands requiring series and base, including charms with and without 
# 
$ juju_34 add-model moveme
$ juju_34 deploy juju-qa-test --revision 23 --channel edge
$ juju_34 deploy ./tests/suites/deploy/charms/lxd-profile-alt
$ juju_34 deploy postgresql
$ juju_34 add-machine --base [email protected] 
$ juju_34 refresh lxd-profile-alt --path ./tests/suites/deploy/charms/lxd-profile-alt

# Verify the juju-qa-test resource was correctly downloaded, check the application status for an output change.
$ juju_34 config juju-qa-test foo-file=true

# Update the base of an application for the next unit added.
$ juju_34 set-application-base juju-qa-test [email protected]

# Migrate to the 3.4 controller & upgrade.
$ juju_34 migrate moveme destination
$ juju_34 switch destination
$ juju_34 upgrade-model

# Ensure base change successful, validate the new unit is using [email protected]
$ juju_34 add-unit juju-qa-test
```
This includes the fix to lock down the files "pull" (read) API to
require admin:

canonical/pebble@cd32622

Fixes CVE-2024-3250
juju#17136

This includes the fix to lock down the files "pull" (read) API to require admin: canonical/pebble@cd32622

Pebble diff: canonical/pebble@5842ea6...v1.1.1

This is a fix for CVE-2024-3250
This was still using 1.7.1 but according to Maksim we should be moving to the 'v1' tag:
https://discourse.canonical.com/t/canonical-contributor-license-agreement-cla-incident-investigation/3518
…org/x/net-0.23.0

Bump golang.org/x/net from 0.22.0 to 0.23.0
If a new version is found and it is not in ubuntuSeries, then don't mark
it as supported, even if it appears as valid in the distro-info.
In GitHub's macOS runner images, mongodb is no longer present in
toolset versions 13 and beyond.
juju#17283

Terraform tests require a new variable to indicate the version of juju that they are operating with. This is added to the actions yaml.

This PR also brings in the fix from juju/os#49 which stops juju 2.9 supporting new distro versions it finds in distro info. The tests are updated to account for `noble`.
## Checklist


- [x] Code style: imports ordered, good names, simple structure, etc

## QA steps

CI passes
ycliuhw and others added 27 commits May 6, 2024 17:04
…r labels set to peer units because they always use the app owner label;
juju#17340

This PR fixes a regression introduced in 3.3 recently.

The problem described [here](https://bugs.launchpad.net/juju/+bug/2064772) is that peer units created the consumer label `database-peers.mysql.app` for the application-owned secret, and then the leader unit tried to create the application label `database-peers.mysql.app`. Juju then complains that the label `database-peers.mysql.app` has already been used by a peer unit as a consumer label, because Juju requires a label to be unique across both unit-owned and app-owned secrets.

This fix ensures that peer units never have their own consumer labels for application-owned secrets. Both the leader and the other peer units should just use the owner label to access the application-owned secrets. Obviously, the leader unit is responsible for setting the owner label.


## Checklist

- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [x] Go unit tests, with comments saying what you're testing
- [x] [Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing
- [x] [doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages

## QA steps

```
juju deploy mysql --config profile=testing -n 3
juju wait-for application mysql
juju run mysql/leader pre-upgrade-check
juju refresh mysql --channel 8.0/beta
```

## Documentation changes

No

## Links

**Launchpad bug:** https://bugs.launchpad.net/juju/+bug/2064772

**Jira card:** JUJU-5989
Signed-off-by: Babak K. Shandiz <[email protected]>
Signed-off-by: Babak K. Shandiz <[email protected]>
Adds a wait into test code after unit and relation removals to wait
until applications settle down before health checks are performed.
…heus-test

juju#17353

Currently the [ci gating tests are failing](https://jenkins.juju.canonical.com/view/ci-runs/job/ci-gating-tests/2577/) because of the [test-controllercharm-test-prometheus-microk8s](https://jenkins.juju.canonical.com/job/test-controllercharm-test-prometheus-microk8s/289/) job. (which prevents us from making further releases on 3.3, 3.4, etc.)

The failing line is [this one](https://github.com/juju/juju/blob/439fd0aabaa1c86a485d016bf33576ef247251ce/tests/suites/controllercharm/prometheus.sh#L68), where the `check "p1"` is catching that p1 is not getting past the `"$(active_condition "p1" 0)"` query, hence

```
"Expected": p1
"Received": 
```

is seen in the [job output](https://jenkins.juju.canonical.com/job/test-controllercharm-test-prometheus-microk8s/289/consoleText).

I carried out the test manually on my local microk8s and the test itself seems to be fine. One reason might be that the [health checks ](https://github.com/juju/juju/blob/439fd0aabaa1c86a485d016bf33576ef247251ce/tests/suites/controllercharm/prometheus.sh#L67-L68) are being performed a little too fast before the application settles after the `juju remove-unit p1 --num-units 1` call, which briefly sets the application status to `waiting` until the scaling is done.

So this adds wait calls into test code after unit and relation removals to wait until applications settle down before health checks are performed.

I'm not entirely sure if the second and third wait (`wait_for "p2" "$(active_condition "p2" 1)"` ) calls are redundant since remove-relation shouldn't affect the application status if I'm not mistaken.
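
For illustration, the kind of wait being added looks roughly like this (reusing the `wait_for`/`active_condition` helpers quoted above; the exact placement in the suite may differ):

```bash
# After scaling p1 down, wait for the application to report active again
# before the health checks run.
juju remove-unit p1 --num-units 1
wait_for "p1" "$(active_condition "p1" 0)"
```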
juju#17365

The secret revision watcher takes the secret uris and looks up the secret to hydrate the result. If a secret is deleted, the not found error was escaping and killing workers which used the watcher. If a secret was deleted in the relation broken hook, this broke the relation tear down workflow and the relation was left behind.

This PR tweaks the watcher to skip over secrets which are not found.

## QA steps

See the reproducer script in the bug.

```
juju add-model $MODEL1
juju add-model $MODEL2

juju deploy ./*.charm -m $MODEL1 app1 
juju deploy ./*.charm -m $MODEL2 app2

juju offer ${MODEL1}.app1:a-relation a-relation
juju switch ${MODEL2}
juju consume ${MODEL1}.a-relation

juju relate a-relation app2:b-relation

juju remove-relation a-relation app2:b-relation
```

## Links

https://bugs.launchpad.net/bugs/2065284

**Jira card:** [JUJU-6007](https://warthogs.atlassian.net/browse/JUJU-6007)



[JUJU-6007]: https://warthogs.atlassian.net/browse/JUJU-6007?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
…imple-connector

juju#17309

**(Note that this PR is relevant to JIMM controllers which support login with client credentials. So whenever we say *controller*, it's a *JIMM controller*.)**

When Juju Terraform Provider uses client credentials (as a service account) to communicate with the API using a `SimpleConnector`, it leaves `SimpleConfig.Username` empty and assigns the client credentials to `SimpleConfig.ClientID` and `SimpleConfig.ClientSecret`.

The problem is the `NewSimple` function does not take this into account and panics with the following message whenever the `Username` is empty:

```
invalid user tag ""
```

This PR adds a simple check to prevent the panic.

## Checklist


- [x] Code style: imports ordered, good names, simple structure, etc
- [x] Comments saying why design decisions were made
- [ ] ~Go unit tests, with comments saying what you're testing~
- [ ] ~[Integration tests](https://github.com/juju/juju/tree/main/tests), with comments saying what you're testing~
- [ ] ~[doc.go](https://discourse.charmhub.io/t/readme-in-packages/451) added or updated in changed packages~

## QA steps

QA-ing this change is a bit complicated because of two reasons:

1. The `NewSimple` function is meant to be used by users of the `juju/juju` module, and hence there's no internal usage in Juju (so we can't verify the changes without spinning up other things). To invoke the method, we can use the Juju Terraform Provider. Note that, for this purpose, we need to build the provider with the new Juju changes.
2. We'd need a controller that provides the newly introduced login method, `LoginWithClientCredentials`. The only controller that supports this method at the moment is JIMM, so we need to spin up a JIMM controller. If we try with a non-JIMM controller, instead of a panic we get this error, which is a bit misleading but conveys the unsupported state:
 ```
 this version of Juju does not support login from old clients (not supported) (not supported)
 ```

We can, of course, help out with QA-ing this to make sure it works as expected.

Anyway, the whole QA process should look like this:

0. Build the Juju Terraform Provider with the changes in Juju.
1. Spin up a JIMM controller.
2. Create a Terraform plan from this template:
   ```tf
   terraform {
     required_providers {
       juju = {
         source  = "registry.terraform.io/juju/juju" # (uses local provider repository)
         version = "=0.12.0"
       }
     }
   }

   provider "juju" {
     controller_addresses = "jimm.localhost:443"

     client_id     = "some-client-id" # Value not important
     client_secret = "some-secret"    # Value not important

     ca_certificate = <<EOT
   -----BEGIN CERTIFICATE-----
   JIMM TLS termination CERT
   -----END CERTIFICATE-----
   EOT
   }

   resource "juju_model" "qa" {
     name = "qa"

     cloud {
       name = "localhost"
     }
   }

   resource "juju_application" "qa" {
     name = "qa"

     model = juju_model.qa.name

     charm {
       name = "juju-qa-test"
     }

     units = 1
   }
   ```
3. Run `terraform init` and `terraform apply`.
4. No Panic (i.e., `invalid user tag ""`) should happen.

juju#17368

Merge 3.3

juju#17340 [from ycliuhw/fix/lp-2064772](juju@322312b)
juju#17353 [from cderici/fix-controllercharm-prometheus…](juju@a5c211d)
juju#17365 [from wallyworld/fix-secret-watcher](juju@804e5b0)
juju#17373

Merge up from 3.4. The only conflicts were in the CLA GitHub action, which was upgraded, and go.mod, which was resolved in favor of the higher versions.

Commits merged up in this PR are:
- juju#17368 from wallyworld
 - juju#17340 from ycliuhw
 - juju#17353 from cderici
 - juju#17365 from wallyworld
- juju#17313 from cderici
- juju#17321 from Aflynn50
 - juju#17283 from Aflynn50
 - juju#17236 from juju/dependabot/go_modules/golang.org/x
 - juju#17221 from jameinel
Remove the non-numeric (tag) part of the juju version when setting
$JUJU_AGENT_VERSION. There is currently a terraform test which fails
because it tries to parse the version but only looks for numbers of the
form "X.Y.Z" and errors on anything else.

We intend to update the terraform test to stop it relying on this
environment variable and to determine what is supported another way.
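
As an illustration only (not the actual patch), stripping the tag segment from a version string in shell might look like:

```bash
# e.g. "3.6-beta1" -> "3.6"; a plain "3.6.0" is left unchanged.
JUJU_VERSION="3.6-beta1"
JUJU_AGENT_VERSION=$(echo "$JUJU_VERSION" | sed -E 's/-(alpha|beta|rc)[0-9]*//')
echo "$JUJU_AGENT_VERSION"
```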
@wallyworld
Member

/merge

@jujubot jujubot merged commit 03b5c0b into juju:3.6 May 16, 2024
21 of 22 checks passed