S3 bucket REST API automation API #7866
-
We are working on a service for our internal team, who are looking to provision S3 buckets without having access to the main.go code. We are thinking of providing a REST interface where users won't have access to the main.go code: they can select the project and stack through the REST interface and then create an S3 bucket by specifying the bucket's name. Multiple users may create multiple S3 buckets on the same stack or on a new one. We are also looking for the ability to delete a specific S3 bucket which a user has created on a specific stack, without affecting the other S3 buckets on that stack.

We are following the example below, but are facing an issue where a user can only create a single S3 bucket: as soon as we add a new bucket using the REST API, it overwrites the existing bucket.

https://github.com/pulumi/automation-api-examples/tree/main/go/pulumi_over_http

Having read the documentation, I am under the impression that the main.go file needs to be updated with the new S3 bucket's details every time a user wants to add a bucket, due to the IaC nature of Pulumi. What would be the best way to achieve the above without giving end users access to the code, dynamically creating or deleting S3 resources without the need to update main.go?
Replies: 1 comment
-
I believe in order to do this, you'd need to store the list of buckets under management for a given stack somewhere outside of the Pulumi program -- e.g., in a database accessible to your Go program, an external file, or something similar. Then, at runtime, your wrapper program would fetch the list of buckets from that database, reconcile it with whatever your user's trying to do (create a new bucket, delete one, etc.), and then your Pulumi program would iterate over that list to determine what to create, update, or delete.

Here's a complete example in TypeScript that shows how you might do this:

```typescript
import { InlineProgramArgs, LocalWorkspace } from "@pulumi/pulumi/automation";
import { s3 } from "@pulumi/aws";
import * as fs from "fs";

// Read the list of existing buckets from a local file, say.
const existingBuckets = getBucketList();

// Accept new buckets as command-line arguments.
const newBuckets = process.argv.slice(2);

// Combine the list of existing and known buckets.
const combinedListOfBuckets = [ ...existingBuckets, ...newBuckets ];

const runProgram = async () => {
    const pulumiProgram = async () => {

        // Iterate over the combined list to determine what to create.
        combinedListOfBuckets.forEach(bucketName => {
            new s3.Bucket(bucketName);
        });

        // Return the combined list of buckets as an Output.
        return {
            buckets: combinedListOfBuckets,
        };
    };

    const args: InlineProgramArgs = {
        stackName: "dev",
        projectName: "inlineNode",
        program: pulumiProgram,
    };

    // Run the program.
    const stack = await LocalWorkspace.createOrSelectStack(args);
    await stack.workspace.installPlugin("aws", "v4.0.0");
    await stack.setConfig("aws:region", { value: "us-west-2" });
    await stack.up({ onOutput: console.info });
};

// Read the list of existing buckets from a local file.
function getBucketList(): string[] {
    return JSON.parse(fs.readFileSync("./buckets.json").toString());
}

// Write the buckets back to the same local file.
function saveBucketList() {
    fs.writeFileSync("./buckets.json", JSON.stringify(combinedListOfBuckets), "utf8");
}

runProgram()
    .then(() => saveBucketList())
    .catch(error => console.error(error));
```

Run the program and pass in the name of the bucket you want to create:

```shell
$ npm run start "my-first-bucket"
...
Outputs:
  ~ buckets: [
      + [0]: "my-first-bucket"
    ]
Resources:
    + 1 created
    1 unchanged
```

Add a second bucket:

```shell
$ npm run start "my-second-bucket"
...
Outputs:
  ~ buckets: [
        [0]: "my-first-bucket"
      + [1]: "my-second-bucket"
    ]
Resources:
    + 1 created
    2 unchanged
```

☝️ Note that the first bucket's still there; it hasn't been deleted.

Run the program without passing any bucket -- i.e., don't add or remove anything:

```shell
$ npm run start
...
Outputs:
    buckets: [
        [0]: "my-first-bucket"
        [1]: "my-second-bucket"
    ]
Resources:
    3 unchanged
```

At this point, the list of buckets under management looks like this:

```json
["my-first-bucket","my-second-bucket"]
```

So if you wanted to delete a bucket, you'd just remove it from the file, save, and run the program again:

```shell
$ npm run start
...
Outputs:
  ~ buckets: [
        [0]: "my-first-bucket"
      - [1]: "my-second-bucket"
    ]
Resources:
    - 1 deleted
    2 unchanged
```

Ultimately this works because there's a backing list of buckets, and Pulumi uses that list as the expression of "desired state". The most relevant bits of the program are therefore these:

```typescript
// Combine the list of existing and known buckets.
const combinedListOfBuckets = [ ...existingBuckets, ...newBuckets ];

// Iterate over the combined list to determine what to create.
combinedListOfBuckets.forEach(bucketName => {
    new s3.Bucket(bucketName);
});
```

Hope that helps! There may be other ways to do this, but that's probably how I'd handle it. 😄
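One sharp edge worth noting: in the example above, `getBucketList` throws if `buckets.json` doesn't exist yet, so the very first run fails until you create the file by hand. Here's a defensive variant -- a sketch only, keeping the example's file name -- that falls back to an empty list when the file is missing and writes via a temp-file rename so a crash mid-write can't corrupt the state file:

```typescript
import * as fs from "fs";

// Variant of the example's getBucketList that tolerates a missing state file,
// so the first-ever run starts from an empty list instead of throwing ENOENT.
function getBucketList(file: string = "./buckets.json"): string[] {
    if (!fs.existsSync(file)) {
        return [];
    }
    return JSON.parse(fs.readFileSync(file, "utf8"));
}

// Write the list back via a temp file plus rename, so a partial write
// never leaves buckets.json half-written.
function saveBucketList(buckets: string[], file: string = "./buckets.json"): void {
    const tmp = `${file}.tmp`;
    fs.writeFileSync(tmp, JSON.stringify(buckets), "utf8");
    fs.renameSync(tmp, file);
}
```

A flat file is fine for a demo, but for multiple concurrent REST users you'd likely want the database the reply mentions (or at least per-stack locking), so two requests can't clobber each other's writes.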
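Since the question mentions multiple users targeting the same or a new stack, one way to extend the pattern is to key the stored desired state by stack name, so that deleting a bucket on one stack can never touch buckets on another. A minimal sketch of the reconcile step -- the `StackState` shape and the `applyRequest` name are mine, not part of Pulumi or the example:

```typescript
// Desired state per stack: stack name -> list of bucket names under management.
type StackState = Record<string, string[]>;

// Apply one REST-style request to the stored state and return the new state.
// op is "create" or "delete"; only the named stack's list changes.
function applyRequest(
    state: StackState,
    stack: string,
    op: "create" | "delete",
    bucket: string,
): StackState {
    const current = state[stack] ?? [];
    const next =
        op === "create"
            ? Array.from(new Set([...current, bucket])) // de-dupe: repeated POSTs are idempotent
            : current.filter(name => name !== bucket);  // delete only the named bucket
    return { ...state, [stack]: next };
}

// Example: two stacks; deleting a bucket on "dev" leaves "prod" untouched.
let state: StackState = {};
state = applyRequest(state, "dev", "create", "bucket-1");
state = applyRequest(state, "dev", "create", "bucket-2");
state = applyRequest(state, "prod", "create", "bucket-1");
state = applyRequest(state, "dev", "delete", "bucket-1");
console.log(state); // -> { dev: [ 'bucket-2' ], prod: [ 'bucket-1' ] }
```

After reconciling, you'd pass the requested stack name to `LocalWorkspace.createOrSelectStack` and have the inline program iterate over that stack's list, exactly as in the example above.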